Sample records for risk model based

  1. LIFETIME LUNG CANCER RISKS ASSOCIATED WITH INDOOR RADON EXPOSURE BASED ON VARIOUS RADON RISK MODELS FOR CANADIAN POPULATION.

    PubMed

    Chen, Jing

    2017-04-01

    This study calculates and compares lifetime lung cancer risks associated with indoor radon exposure based on well-known risk models from the literature: two models derived from joint studies of miners, and three developed from pooled studies of residential radon exposure in China, Europe and North America, respectively. The aim of this article is to make clear that the various models are mathematical descriptions of epidemiologically observed risks in different environmental settings. The risk from exposure to indoor radon is real, and variation among risk models is expected even when they are applied to the same dataset. The results show that lifetime risk estimates vary significantly across the models considered here: the model based on the European residential data provides the lowest estimates, while the models based on the European miners and on the Chinese residential pooling with complete dosimetry give the highest. The lifetime risk estimates based on the EPA/BEIR-VI model lie within this range and agree reasonably well with the average of the estimates from the five models considered in this study. © Crown copyright 2016.

  2. Fire risk in San Diego County, California: A weighted Bayesian model approach

    USGS Publications Warehouse

    Kolden, Crystal A.; Weigel, Timothy J.

    2007-01-01

    Fire risk models are widely utilized to mitigate wildfire hazards, but models are often based on expert opinions of less understood fire-ignition and spread processes. In this study, we used an empirically derived weights-of-evidence model to assess what factors produce fire ignitions east of San Diego, California. We created and validated a dynamic model of fire-ignition risk based on land characteristics and existing fire-ignition history data, and predicted ignition risk for a future urbanization scenario. We then combined our empirical ignition-risk model with a fuzzy fire behavior-risk model developed by wildfire experts to create a hybrid model of overall fire risk. We found that roads influence fire ignitions and that future growth will increase risk in new rural development areas. We conclude that empirically derived risk models and hybrid models offer an alternative method to assess current and future fire risk based on management actions.
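The weights-of-evidence calculation this abstract refers to can be illustrated with a minimal sketch for a single binary evidence layer such as road proximity. All counts below are hypothetical; the actual San Diego model combined multiple spatial evidence layers:

```python
import math

def weights_of_evidence(n_fire_with, n_fire_without, n_nofire_with, n_nofire_without):
    """Compute W+ and W- for one binary evidence layer (e.g. road proximity).

    W+ = ln[P(evidence | fire) / P(evidence | no fire)]
    W- = ln[P(no evidence | fire) / P(no evidence | no fire)]
    """
    p_e_fire = n_fire_with / (n_fire_with + n_fire_without)
    p_e_nofire = n_nofire_with / (n_nofire_with + n_nofire_without)
    w_plus = math.log(p_e_fire / p_e_nofire)
    w_minus = math.log((1 - p_e_fire) / (1 - p_e_nofire))
    return w_plus, w_minus

# Hypothetical counts: ignition cells near/far from a road vs. non-ignition cells.
w_plus, w_minus = weights_of_evidence(80, 20, 400, 600)
# A positive W+ means the evidence (here, road proximity) raises ignition odds,
# consistent with the study's finding that roads influence fire ignitions.
```

Summing the applicable weights over all evidence layers, plus the prior log odds, gives the posterior log odds of ignition for a cell.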

  3. Coronary risk assessment by point-based vs. equation-based Framingham models: significant implications for clinical care.

    PubMed

    Gordon, William J; Polansky, Jesse M; Boscardin, W John; Fung, Kathy Z; Steinman, Michael A

    2010-11-01

    US cholesterol guidelines use original and simplified versions of the Framingham model to estimate future coronary risk and thereby classify patients into risk groups with different treatment strategies. We sought to compare risk estimates and risk group classification generated by the original, complex Framingham model and the simplified, point-based version. We assessed 2,543 subjects age 20-79 from the 2001-2006 National Health and Nutrition Examination Surveys (NHANES) for whom Adult Treatment Panel III (ATP-III) guidelines recommend formal risk stratification. For each subject, we calculated the 10-year risk of major coronary events using the original and point-based Framingham models, and then compared differences in these risk estimates and whether these differences would place subjects into different ATP-III risk groups (<10% risk, 10-20% risk, or >20% risk). Using standard procedures, all analyses were adjusted for survey weights, clustering, and stratification to make our results nationally representative. Among 39 million eligible adults, the original Framingham model categorized 71% of subjects as having "moderate" risk (<10% risk of a major coronary event in the next 10 years), 22% as having "moderately high" (10-20%) risk, and 7% as having "high" (>20%) risk. Estimates of coronary risk by the original and point-based models often differed substantially. The point-based system classified 15% of adults (5.7 million) into different risk groups than the original model, with 10% (3.9 million) misclassified into higher risk groups and 5% (1.8 million) into lower risk groups, for a net impact of classifying 2.1 million adults into higher risk groups. These risk group misclassifications would impact guideline-recommended drug treatment strategies for 25-46% of affected subjects. Patterns of misclassifications varied significantly by gender, age, and underlying CHD risk. 
Compared to the original Framingham model, the point-based version misclassifies millions of Americans into risk groups for which guidelines recommend different treatment strategies.
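The ATP-III cutpoints used in this comparison can be expressed directly. A minimal sketch of the risk-group mapping and the misclassification check; the 10-year risk inputs themselves would come from the original or point-based Framingham equations, which are not reproduced here:

```python
def atp3_risk_group(ten_year_risk_pct):
    """Map a 10-year coronary risk estimate (%) to an ATP-III risk group."""
    if ten_year_risk_pct < 10:
        return "moderate"         # <10% risk
    elif ten_year_risk_pct <= 20:
        return "moderately high"  # 10-20% risk
    return "high"                 # >20% risk

def misclassified(original_pct, point_based_pct):
    """True when the two Framingham versions place a subject in different groups."""
    return atp3_risk_group(original_pct) != atp3_risk_group(point_based_pct)

# A subject estimated at 9.4% by the original equations but 11.2% by the
# point-based version crosses the 10% threshold and changes risk group.
```

Because treatment strategy is keyed to the group rather than the raw estimate, even small numeric disagreements near the 10% and 20% thresholds change guideline-recommended care.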

  4. Study of a risk-based piping inspection guideline system.

    PubMed

    Tien, Shiaw-Wen; Hwang, Wen-Tsung; Tsai, Chih-Hung

    2007-02-01

    A risk-based inspection system and a piping inspection guideline model were developed in this study. The research procedure consists of two parts: the building of a risk-based inspection model for piping and the construction of a risk-based piping inspection guideline model. Field visits at the plant were conducted to develop the risk-based inspection and strategic analysis system. A knowledge-based model was built in accordance with international standards and local government regulations, and the rational unified process was applied to reduce discrepancies in the development of the models. The models were designed to analyze damage factors, damage models, and potential damage positions of piping in petrochemical plants. The purpose of this study was to provide inspection personnel with optimal planning tools for piping inspections, enabling effective prediction of potential piping risks and improving the safety of plant operations in the petrochemical industry. A risk analysis was conducted on the piping system of a petrochemical plant. The outcome indicated that most of the risks resulted from a small number of pipelines.

  5. WORKSHOP ON APPLICATION OF STATISTICAL METHODS TO BIOLOGICALLY-BASED PHARMACOKINETIC MODELING FOR RISK ASSESSMENT

    EPA Science Inventory

    Biologically-based pharmacokinetic models are being increasingly used in the risk assessment of environmental chemicals. These models are based on biological, mathematical, statistical and engineering principles. Their potential uses in risk assessment include extrapolation betwe...

  6. The more from East-Asian, the better: risk prediction of colorectal cancer risk by GWAS-identified SNPs among Japanese.

    PubMed

    Abe, Makiko; Ito, Hidemi; Oze, Isao; Nomura, Masatoshi; Ogawa, Yoshihiro; Matsuo, Keitaro

    2017-12-01

    Little is known about differences in genetic predisposition to colorectal cancer (CRC) between ethnicities, although many genetic traits common to colorectal cancer have been identified. This study investigated whether adding SNPs identified in GWASs in East Asian populations could improve risk prediction for Japanese, and explored the possible application of genetic risk groups as an instrument of risk communication. 558 patients with histologically verified colorectal cancer and 1116 first-visit outpatients were included in the derivation study, and 547 cases and 547 controls in the replication study. In each population, we evaluated a prediction model for the risk of CRC that combined genetic risk groups based on SNPs from GWASs in European populations, and a similarly developed model adding SNPs from GWASs in East Asian populations. We examined whether adding the East Asian-specific SNPs would improve discrimination. Six SNPs (rs6983267, rs4779584, rs4444235, rs9929218, rs10936599, rs16969681) of 23 SNPs from European-based GWASs and five SNPs (rs704017, rs11196172, rs10774214, rs647161, rs2423279) among ten SNPs from Asian-based GWASs were selected for the CRC risk prediction model. Compared with the 6-SNP-based model, an 11-SNP model including the Asian GWAS SNPs showed improved discrimination capacity in receiver operating characteristic analysis. The 11-SNP model yielded statistically significant improvement in both the derivation (P = 0.0039) and replication studies (P = 0.0018) compared with the six-SNP model. We estimated the cumulative risk of CRC using genetic risk groups based on the 11 SNPs and found that the cumulative risk at age 80 is approximately 13% in the high-risk group versus 6% in the low-risk group. We constructed a more efficient CRC risk prediction model with 11 SNPs, including newly identified East Asian-based GWAS SNPs (rs704017, rs11196172, rs10774214, rs647161, rs2423279). Risk grouping based on the 11 SNPs depicted lifetime differences in CRC risk. This might be useful for effective individualized prevention for East Asians.
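A SNP-based risk score of the kind described is conventionally built as a weighted risk-allele count. A sketch under that assumption; the rsIDs match SNPs named in the abstract, but the log odds ratios and group cutoffs below are illustrative placeholders, not the study's fitted values:

```python
# Hypothetical per-SNP log odds ratios keyed by rsID (effect sizes are
# illustrative, not the fitted values from the Japanese derivation cohort).
LOG_OR = {"rs6983267": 0.18, "rs4779584": 0.15, "rs704017": 0.10}

def genetic_risk_score(genotypes):
    """Sum risk-allele counts (0/1/2) weighted by each SNP's log odds ratio."""
    return sum(LOG_OR[snp] * count for snp, count in genotypes.items())

def risk_group(score, low_cut=0.3, high_cut=0.6):
    """Bin the score into low/intermediate/high groups (cutoffs hypothetical)."""
    if score < low_cut:
        return "low"
    return "high" if score >= high_cut else "intermediate"

score = genetic_risk_score({"rs6983267": 2, "rs4779584": 1, "rs704017": 2})
# score = 2*0.18 + 1*0.15 + 2*0.10 = 0.71 -> "high" under these cutoffs
```

Cumulative lifetime risk per group, as reported in the abstract, would then be estimated by applying group-specific relative risks to baseline age-specific incidence.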

  7. Anthropometric measures in cardiovascular disease prediction: comparison of laboratory-based versus non-laboratory-based model.

    PubMed

    Dhana, Klodian; Ikram, M Arfan; Hofman, Albert; Franco, Oscar H; Kavousi, Maryam

    2015-03-01

    Body mass index (BMI) has been used to simplify cardiovascular risk prediction models by substituting for total cholesterol and high-density lipoprotein cholesterol. In the elderly, the ability of BMI to predict cardiovascular disease (CVD) declines. We aimed to find the most predictive anthropometric measure for CVD risk, to construct a non-laboratory-based model, and to compare it with a model including laboratory measurements. The study included 2675 women and 1902 men aged 55-79 years from the prospective population-based Rotterdam Study. We used Cox proportional hazards regression analysis to evaluate the association of BMI, waist circumference, waist-to-hip ratio and a body shape index (ABSI) with CVD, including coronary heart disease and stroke. The performance of the laboratory-based and non-laboratory-based models was evaluated by studying discrimination, calibration, correlation and risk agreement. Among men, ABSI was the most informative measure associated with CVD, so ABSI was used to construct the non-laboratory-based model. Discrimination of the non-laboratory-based model did not differ from that of the laboratory-based model (c-statistic 0.680 vs 0.683, p=0.71); both models were well calibrated (15.3% observed CVD risk vs 16.9% and 17.0% predicted CVD risk by the non-laboratory-based and laboratory-based models, respectively), and the Spearman rank correlation and agreement between the two models were 0.89 and 91.7%, respectively. Among women, none of the anthropometric measures was independently associated with CVD. Among middle-aged and elderly men, where the ability of BMI to predict CVD declines, the non-laboratory-based model based on ABSI predicted CVD risk as accurately as the laboratory-based model. Published by the BMJ Publishing Group Limited.

  8. Bayesian Framework for Water Quality Model Uncertainty Estimation and Risk Management

    EPA Science Inventory

    A formal Bayesian methodology is presented for integrated model calibration and risk-based water quality management using Bayesian Monte Carlo simulation and maximum likelihood estimation (BMCML). The primary focus is on lucid integration of model calibration with risk-based wat...

  9. Approaches for the Application of Physiologically Based ...

    EPA Pesticide Factsheets

    EPA released the final report, Approaches for the Application of Physiologically Based Pharmacokinetic (PBPK) Models and Supporting Data in Risk Assessment, as announced in a September 22, 2006 Federal Register Notice. The final report addresses the application and evaluation of PBPK models for risk assessment purposes. These models represent an important class of dosimetry models that are useful for predicting internal dose at target organs for risk assessment applications.

  10. Modeling Research Project Risks with Fuzzy Maps

    ERIC Educational Resources Information Center

    Bodea, Constanta Nicoleta; Dascalu, Mariana Iuliana

    2009-01-01

    The authors propose a risk evaluation model for research projects. The model is based on fuzzy inference. The knowledge base for the fuzzy process is built with a causal and cognitive map of risks. The map was developed especially for research projects, taking into account their typical lifecycle. The model was applied to an e-testing research…

  11. Population-based absolute risk estimation with survey data

    PubMed Central

    Kovalchik, Stephanie A.; Pfeiffer, Ruth M.

    2013-01-01

    Absolute risk is the probability that a cause-specific event occurs in a given time interval in the presence of competing events. We present methods to estimate population-based absolute risk from a complex survey cohort that can accommodate multiple exposure-specific competing risks. The hazard function for each event type consists of an individualized relative risk multiplied by a baseline hazard function, which is modeled nonparametrically or parametrically with a piecewise exponential model. An influence method is used to derive a Taylor-linearized variance estimate for the absolute risk estimates. We introduce novel measures of the cause-specific influences that can guide modeling choices for the competing event components of the model. To illustrate our methodology, we build and validate cause-specific absolute risk models for cardiovascular and cancer deaths using data from the National Health and Nutrition Examination Survey. Our applications demonstrate the usefulness of survey-based risk prediction models for predicting health outcomes and quantifying the potential impact of disease prevention programs at the population level. PMID:23686614
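The piecewise exponential formulation lends itself to a short numerical sketch: with piecewise-constant cause-specific hazards, the absolute risk of cause k over an interval is the cause's share of the all-cause event probability, weighted by survival up to that interval. The hazard values below are hypothetical, and the individualized relative-risk and survey-weighting machinery of the paper is omitted:

```python
import math

def absolute_risk(hazards, widths, cause):
    """Absolute (crude) risk of `cause` under piecewise-constant competing hazards.

    hazards: list of dicts, one per time interval, mapping cause -> hazard rate
    widths:  interval lengths (same time units as the rates)
    """
    risk, surv = 0.0, 1.0  # running absolute risk and all-cause survival
    for h, w in zip(hazards, widths):
        total = sum(h.values())                     # all-cause hazard in interval
        interval_mass = surv * (1 - math.exp(-total * w))
        risk += (h[cause] / total) * interval_mass  # share attributable to cause
        surv *= math.exp(-total * w)                # carry survival forward
    return risk

# Two 5-year intervals with competing causes "cvd" and "cancer"
# (rates per year, hypothetical).
hazards = [{"cvd": 0.01, "cancer": 0.005}, {"cvd": 0.02, "cancer": 0.01}]
risk_cvd = absolute_risk(hazards, [5, 5], "cvd")
```

A useful sanity check on any such implementation is that the cause-specific risks plus overall survival sum to one.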

  12. Development and external validation of a risk-prediction model to predict 5-year overall survival in advanced larynx cancer.

    PubMed

    Petersen, Japke F; Stuiver, Martijn M; Timmermans, Adriana J; Chen, Amy; Zhang, Hongzhen; O'Neill, James P; Deady, Sandra; Vander Poorten, Vincent; Meulemans, Jeroen; Wennerberg, Johan; Skroder, Carl; Day, Andrew T; Koch, Wayne; van den Brekel, Michiel W M

    2018-05-01

    TNM classification inadequately estimates patient-specific overall survival (OS). We aimed to improve this by developing a risk-prediction model for patients with advanced larynx cancer. Cohort study. We developed a risk-prediction model to estimate the 5-year OS rate based on a cohort of 3,442 patients with T3T4N0N+M0 larynx cancer. The model was internally validated using bootstrapping samples and externally validated on patient data from five external centers (n = 770). The main outcome was the performance of the model as tested by discrimination, calibration, and the ability to distinguish risk groups based on tertiles from the derivation dataset. The model's performance was compared to that of a model based on T and N classification only. We included age, gender, T and N classification, and subsite as prognostic variables in the standard model. After external validation, the standard model had a significantly better fit than a model based on T and N classification alone (C statistic, 0.59 vs. 0.55, P < .001). The model distinguished well among three risk groups based on tertiles of the risk score. Adding treatment modality to the model did not decrease the predictive power. As a post hoc analysis, we tested the added value of comorbidity as scored by the American Society of Anesthesiologists score in a subsample, which increased the C statistic to 0.68. A risk-prediction model for patients with advanced larynx cancer, consisting of readily available clinical variables, gives more accurate estimates of the 5-year survival rate than a model based on T and N classification alone. 2c. Laryngoscope, 128:1140-1145, 2018. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.

  13. FRAMEWORK FOR EVALUATION OF PHYSIOLOGICALLY-BASED PHARMACOKINETIC MODELS FOR USE IN SAFETY OR RISK ASSESSMENT

    EPA Science Inventory

    ABSTRACT

    Proposed applications of increasingly sophisticated biologically-based computational models, such as physiologically-based pharmacokinetic (PBPK) models, raise the issue of how to evaluate whether the models are adequate for proposed uses including safety or risk ...

  14. Relative risk for HIV in India - An estimate using conditional auto-regressive models with Bayesian approach.

    PubMed

    Kandhasamy, Chandrasekaran; Ghosh, Kaushik

    2017-02-01

    Indian states are currently classified into HIV-risk categories based on the observed prevalence counts, the percentage of infected attendees in antenatal clinics, and the percentage of infected high-risk individuals. This method, however, does not account for the spatial dependence among the states, nor does it provide any measure of statistical uncertainty. We provide an alternative model-based approach to address these issues. Our method uses Poisson log-normal models having various conditional autoregressive structures with neighborhood-based and distance-based weight matrices, and incorporates all available covariate information. We use R and WinBUGS software to fit these models to the 2011 HIV data. Based on the Deviance Information Criterion, the convolution model using the distance-based weight matrix and covariate information on female sex workers, literacy rate and intravenous drug users is found to have the best fit. The relative risk of HIV for the various states is estimated using the best model, and the states are then classified into risk categories based on these estimated values. An HIV risk map of India is constructed from these results. The choice of the final model suggests that an HIV control strategy focused on female sex workers, intravenous drug users and literacy rate would be most effective. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Probability based models for estimation of wildfire risk

    Treesearch

    Haiganoush Preisler; D. R. Brillinger; R. E. Burgan; John Benoit

    2004-01-01

    We present a probability-based model for estimating fire risk. Risk is defined using three probabilities: the probability of fire occurrence; the conditional probability of a large fire given ignition; and the unconditional probability of a large fire. The model is based on grouped data at the 1 km²-day cell level. We fit a spatially and temporally explicit non-...
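The three probabilities in this definition are linked by a simple identity: the unconditional probability of a large fire in a cell-day equals the occurrence probability times the conditional probability of a large fire given ignition. A sketch with hypothetical values:

```python
def large_fire_probability(p_ignition, p_large_given_ignition):
    """Unconditional probability of a large fire in one 1 km^2-day cell:
    P(large fire) = P(ignition) * P(large fire | ignition)."""
    return p_ignition * p_large_given_ignition

# Hypothetical rates for one cell-day: rare ignition, 5% chance it grows large.
p = large_fire_probability(0.002, 0.05)  # 0.0001
```

In the paper each component probability is itself modeled spatially and temporally from the gridded historical data; the identity above is how they compose.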

  16. Application of discriminant analysis-based model for prediction of risk of low back disorders due to workplace design in industrial jobs.

    PubMed

    Ganga, G M D; Esposto, K F; Braatz, D

    2012-01-01

    The occupational exposure limits of different risk factors for the development of low back disorders (LBDs) have not yet been established. One of the main problems in setting such guidelines is the limited understanding of how different risk factors for LBDs interact in causing injury, since the nature and mechanism of these disorders are relatively unknown. The industrial ergonomist's role is further complicated because the potential risk factors that may contribute to the onset of LBDs interact in a complex manner, which makes it difficult to discriminate in detail among the jobs that place workers at high or low risk of LBDs. The purpose of this paper was to compare predictions from the neural network-based model proposed by Zurada, Karwowski & Marras (1997) with those from a linear discriminant analysis model for classifying industrial jobs according to their potential risk of low back disorders due to workplace design. The results of applying the discriminant analysis-based model proved that it is as effective as the neural network-based model. Moreover, the discriminant analysis-based model proved more advantageous regarding cost and time savings for future data gathering.

  17. A risk management model for familial breast cancer: A new application using Fuzzy Cognitive Map method.

    PubMed

    Papageorgiou, Elpiniki I; Jayashree Subramanian; Karmegam, Akila; Papandrianos, Nikolaos

    2015-11-01

    Breast cancer is the most deadly disease affecting women, and thus it is natural for women aged 40-49 years who have a family history of breast cancer or other related cancers to assess their personal risk of developing familial breast cancer (FBC). Because each woman possesses a different level of breast cancer risk depending on her family history, genetic predispositions and personal medical history, individualized care-setting mechanisms need to be identified so that appropriate risk assessment, counseling, screening, and prevention options can be determined by health care professionals. The presented work aims at developing a soft-computing-based medical decision support system using a Fuzzy Cognitive Map (FCM) that assists health care professionals in deciding individualized care-setting mechanisms based on the FBC risk level of a given woman. The FCM-based FBC risk management system uses nonlinear Hebbian learning (NHL) to learn causal weights from 40 patient records and achieves a 95% diagnostic accuracy. The results obtained from the proposed model are in concurrence with the comprehensive risk evaluation tool based on the Tyrer-Cuzick model for 38/40 patient cases (95%). The proposed model also identifies high-risk women with higher predictive accuracy than the standard Gail and NSABP models. The testing accuracy of the proposed model using 10-fold cross-validation outperforms other standard machine-learning-based inference engines as well as previous FCM-based risk prediction methods for BC. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  18. Proceedings of the Conference on Toxicology: Applications of Advances in Toxicology to Risk Assessment. Held at Wright-Patterson AFB, Ohio on 19-21 May 1992

    DTIC Science & Technology

    1993-01-01

    …animals in toxicology research, the application of pharmacokinetics and physiologically based pharmacokinetic models in chemical risk assessment, selected…

    Keywords: metaplasia; neurotoxicity; nonmutagenic carcinogens; ozone; P450; PBPK modeling; perfluorohexane; peroxisome proliferators; pharmacokinetics; pharmacokinetic models; … physiological modeling; physiologically based pharmacokinetic modeling; polycyclic organic matter; quantitative risk assessment; RAIRM model; rats

  19. Adaptation of a Biomarker-Based Sepsis Mortality Risk Stratification Tool for Pediatric Acute Respiratory Distress Syndrome.

    PubMed

    Yehya, Nadir; Wong, Hector R

    2018-01-01

    The original Pediatric Sepsis Biomarker Risk Model and revised (Pediatric Sepsis Biomarker Risk Model-II) biomarker-based risk prediction models have demonstrated utility for estimating baseline 28-day mortality risk in pediatric sepsis. Given the paucity of prediction tools in pediatric acute respiratory distress syndrome, and given the overlapping pathophysiology between sepsis and acute respiratory distress syndrome, we tested the utility of Pediatric Sepsis Biomarker Risk Model and Pediatric Sepsis Biomarker Risk Model-II for mortality prediction in a cohort of pediatric acute respiratory distress syndrome, with an a priori plan to revise the model if these existing models performed poorly. Prospective observational cohort study. University affiliated PICU. Mechanically ventilated children with acute respiratory distress syndrome. Blood collection within 24 hours of acute respiratory distress syndrome onset and biomarker measurements. In 152 children with acute respiratory distress syndrome, Pediatric Sepsis Biomarker Risk Model performed poorly and Pediatric Sepsis Biomarker Risk Model-II performed modestly (areas under receiver operating characteristic curve of 0.61 and 0.76, respectively). Therefore, we randomly selected 80% of the cohort (n = 122) to rederive a risk prediction model for pediatric acute respiratory distress syndrome. We used classification and regression tree methodology, considering the Pediatric Sepsis Biomarker Risk Model biomarkers in addition to variables relevant to acute respiratory distress syndrome. The final model was comprised of three biomarkers and age, and more accurately estimated baseline mortality risk (area under receiver operating characteristic curve 0.85, p < 0.001 and p = 0.053 compared with Pediatric Sepsis Biomarker Risk Model and Pediatric Sepsis Biomarker Risk Model-II, respectively). The model was tested in the remaining 20% of subjects (n = 30) and demonstrated similar test characteristics. 
A validated, biomarker-based risk stratification tool designed for pediatric sepsis was adapted for use in pediatric acute respiratory distress syndrome. The newly derived Pediatric Acute Respiratory Distress Syndrome Biomarker Risk Model demonstrates good test characteristics internally and requires external validation in a larger cohort. Tools such as Pediatric Acute Respiratory Distress Syndrome Biomarker Risk Model have the potential to provide improved risk stratification and prognostic enrichment for future trials in pediatric acute respiratory distress syndrome.

  20. The NASA Space Radiobiology Risk Assessment Project

    NASA Astrophysics Data System (ADS)

    Cucinotta, Francis A.; Huff, Janice; Ponomarev, Artem; Patel, Zarana; Kim, Myung-Hee

    The current first phase (2006-2011) has three major goals: 1) optimizing the conventional cancer risk models currently in use, based on the double-detriment life table and radiation quality functions; 2) integrating biophysical models of acute radiation syndromes; and 3) developing new systems radiation biology models of cancer processes. The first phase also includes continued uncertainty assessment of space radiation environmental models and transport codes, and of relative biological effectiveness (RBE) factors, based on flight data and NSRL results, respectively. The second phase (2012-2016) will: 1) develop biophysical models of central nervous system (CNS) risks; 2) achieve comprehensive systems biology models of cancer processes using data from proton and heavy ion studies performed at NSRL; and 3) begin to identify computational models of biological countermeasures. Goals for the third phase (2017-2021) include: 1) development of a systems biology model of cancer risks for operational use at NASA; 2) development of models of degenerative risks; 3) quantitative models of countermeasure impacts on cancer risks; and 4) individual-based risk assessments. Finally, we will support a decision point on continuing NSRL research in support of NASA's exploration goals beyond 2021, and create an archive of NSRL research results for continued analysis. Details on near-term goals, plans for a web-based data resource of NSRL results, and a space radiation Wikipedia are described.

  1. A Risk and Maintenance Model for Bulimia Nervosa: From Impulsive Action to Compulsive Behavior

    PubMed Central

    Pearson, Carolyn M.; Wonderlich, Stephen A.; Smith, Gregory T.

    2015-01-01

    This paper offers a new model for bulimia nervosa (BN) that explains both the initial impulsive nature of binge eating and purging as well as the compulsive quality of the fully developed disorder. The model is based on a review of advances in research on BN and advances in relevant basic psychological science. It integrates transdiagnostic personality risk, eating disorder specific risk, reinforcement theory, cognitive neuroscience, and theory drawn from the drug addiction literature. We identify both a state-based and a trait-based risk pathway, and we then propose possible state-by-trait interaction risk processes. The state-based pathway emphasizes depletion of self-control. The trait-based pathway emphasizes transactions between the trait of negative urgency (the tendency to act rashly when distressed) and high-risk psychosocial learning. We then describe a process by which initially impulsive BN behaviors become compulsive over time, and we consider the clinical implications of our model. PMID:25961467

  2. [Joint application of mathematic models in assessing the residual risk of hepatitis C virus transmitted through blood transfusion].

    PubMed

    Wang, Xun; Jia, Yao; Xie, Yun-zheng; Li, Xiu-mei; Liu, Xiao-ying; Wu, Xiao-fei

    2011-09-01

    A practicable and effective approach to residual risk assessment for transfusion-transmitted disease is to establish mathematical models. Based on the characteristics of repeat donors, who donate blood on a regular basis, a model of sero-conversion during the interval between donations was established to assess the incidence among repeat donors. Based on the characteristics of prevalence in the population, a model in which prevalence increases with donor age was established to assess the incidence among first-time donors. And based on the impact of the window period on the blood screening program, a model relating residual risk to incidence and the length of the window period was established to assess the residual risk of blood transfusion. In this paper, these three mathematical models were jointly applied to assess the residual risk of hepatitis C virus (HCV) transmission through blood transfusion in Shanghai, based on data from the routine blood collection and screening program. All anti-HCV-unqualified blood donations were confirmed before assessment. Results showed that the residual risk of transfusion-transmitted HCV from Jan. 1st, 2007 to Dec. 31st, 2008 in Shanghai was 1:101 000. The results indicate that residual risk assessment with mathematical models is valuable, and that the residual risk of transfusion-transmitted HCV in Shanghai was at a safe level.
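The window-period component described above is conventionally the incidence/window-period model, in which residual risk per donation is approximately the incidence among donors multiplied by the length of the diagnostic window. A sketch with illustrative numbers, not the Shanghai estimates:

```python
def residual_risk(incidence_per_100k_py, window_days):
    """Incidence/window-period model: probability per donation that an
    infectious unit is missed because the donor was in the pre-seroconversion
    window at the time of donation."""
    incidence_per_py = incidence_per_100k_py / 100_000
    return incidence_per_py * (window_days / 365.0)

# Illustrative inputs: incidence of 5 per 100,000 person-years among donors,
# 60-day anti-HCV serology window (both hypothetical).
risk = residual_risk(5, 60)
one_in_n = round(1 / risk)  # expressed as "1 in N" donations
```

Separate incidence estimates for repeat and first-time donors, as in the paper's first two models, would each feed this formula and then be combined weighted by their share of donations.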

  3. Assessment of cardiovascular risk based on a data-driven knowledge discovery approach.

    PubMed

    Mendes, D; Paredes, S; Rocha, T; Carvalho, P; Henriques, J; Cabiddu, R; Morais, J

    2015-01-01

    The cardioRisk project addresses the development of personalized risk assessment tools for patients admitted to hospital with acute myocardial infarction. Although models are available that assess the short-term risk of death or new events for such patients, these models were established in circumstances that do not take into account present clinical interventions, and in some cases the risk factors they use are not easily available in clinical practice. Integrating existing risk tools (applied in clinicians' daily practice) with data-driven knowledge discovery mechanisms based on data routinely collected during hospitalizations would be a breakthrough in overcoming some of these difficulties. In this context, the development of simple and interpretable models based on recent datasets will facilitate, and build confidence in, this integration process. In this work, a simple and interpretable model based on a real dataset is proposed. It consists of a decision-tree model structure that uses a reduced set of six binary risk factors. Validation is performed using a recent dataset provided by the Portuguese Society of Cardiology (11113 patients), which originally comprised 77 risk factors. A sensitivity, specificity and accuracy of 80.42%, 77.25% and 78.80%, respectively, were achieved, showing the effectiveness of the approach.

  4. Integrating adaptive behaviour in large-scale flood risk assessments: an Agent-Based Modelling approach

    NASA Astrophysics Data System (ADS)

    Haer, Toon; Aerts, Jeroen

    2015-04-01

    Between 1998 and 2009, Europe suffered over 213 major damaging floods, which caused 1126 deaths and displaced around half a million people. In this period, floods caused at least 52 billion euro in insured economic losses, making floods the most costly natural hazard faced in Europe. In many low-lying areas, the main strategy for coping with floods is to reduce the risk of the hazard through flood defence structures, such as dikes and levees. However, it has been suggested that part of the responsibility for flood protection needs to shift to households and businesses in areas at risk, and that governments and insurers can effectively stimulate the implementation of individual protective measures. Yet adaptive behaviour towards flood risk reduction, and the interaction between governments, insurers and individuals, has hardly been studied in large-scale flood risk assessments. In this study, a European Agent-Based Model is developed that includes agent representatives of the administrative stakeholders of European Member States, insurance and reinsurance markets, and individuals following complex behaviour models. The Agent-Based Modelling approach allows for an in-depth analysis of the interaction between heterogeneous autonomous agents and the resulting (non-)adaptive behaviour. Existing flood damage models are part of the European Agent-Based Model, allowing a dynamic response of both the agents and the environment to changing flood risk and protective efforts. By following an Agent-Based Modelling approach, this study is a first contribution to overcoming the limitations of traditional large-scale flood risk models, in which the influence of individual adaptive behaviour on flood risk reduction is often lacking.
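    The household-adaptation dynamic described above can be illustrated with a deliberately minimal agent-based sketch, not the paper's European ABM: each year, unprotected households weigh expected annual flood damage plus an insurer's premium discount against the cost of a protective measure, and adopt with some probability when the measure pays off. All parameter values are invented:

```python
# Toy agent-based model of household flood-protection uptake.
# All parameters (flood probability, damages, costs, discount) are illustrative.

import random

def simulate(n_households=1000, years=20, flood_prob=0.01,
             damage=50_000, measure_cost=300, damage_reduction=0.5,
             premium_discount=100, adopt_prob=0.25, seed=42):
    rng = random.Random(seed)
    protected = [False] * n_households
    for _ in range(years):
        for i in range(n_households):
            if protected[i]:
                continue  # measure is a one-off investment
            expected_saving = flood_prob * damage * damage_reduction + premium_discount
            # Boundedly rational agents: adopt only if it pays, and then
            # only with some probability per year (inertia).
            if expected_saving > measure_cost and rng.random() < adopt_prob:
                protected[i] = True
    return sum(protected) / n_households
```

    Even this toy version shows the qualitative point of the paper: changing the insurer's premium discount or the measure cost shifts the adoption curve, which a static damage model cannot capture.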

  5. Proposals for enhanced health risk assessment and stratification in an integrated care scenario

    PubMed Central

    Dueñas-Espín, Ivan; Vela, Emili; Pauws, Steffen; Bescos, Cristina; Cano, Isaac; Cleries, Montserrat; Contel, Joan Carles; de Manuel Keenoy, Esteban; Garcia-Aymerich, Judith; Gomez-Cabrero, David; Kaye, Rachelle; Lahr, Maarten M H; Lluch-Ariet, Magí; Moharra, Montserrat; Monterde, David; Mora, Joana; Nalin, Marco; Pavlickova, Andrea; Piera, Jordi; Ponce, Sara; Santaeugenia, Sebastià; Schonenberg, Helen; Störk, Stefan; Tegner, Jesper; Velickovski, Filip; Westerteicher, Christoph; Roca, Josep

    2016-01-01

    Objectives: Population-based health risk assessment and stratification are considered highly relevant for large-scale implementation of integrated care by facilitating services design and case identification. The principal objective of the study was to analyse five health-risk assessment strategies and the health indicators used in the five regions participating in the Advancing Care Coordination and Telehealth Deployment (ACT) programme (http://www.act-programme.eu). The second purpose was to elaborate on strategies toward enhanced health-risk predictive modelling in the clinical scenario. Settings: The five ACT regions: Scotland (UK), Basque Country (ES), Catalonia (ES), Lombardy (I) and Groningen (NL). Participants: The teams responsible for regional data management in the five ACT regions. Primary and secondary outcome measures: We characterised and compared risk assessment strategies among the ACT regions by analysing the operational health-risk predictive modelling tools used for population-based stratification, as well as the health indicators available at the regional level. The analysis of the risk assessment tool deployed in Catalonia in 2015 (GMAs, Adjusted Morbidity Groups) was used as a basis to propose how population-based analytics could contribute to clinical risk prediction. Results: There was consensus on the need for a population health approach to generate health-risk predictive modelling. However, this strategy was fully in place in only two ACT regions: the Basque Country and Catalonia. We found marked differences among regions in health-risk predictive modelling tools and health indicators, and identified key factors constraining their comparability. The research proposes means of overcoming current limitations and of using population-based health risk prediction for enhanced clinical risk assessment. Conclusions: The results indicate the need for further efforts to improve both the comparability and the flexibility of current population-based health-risk predictive modelling approaches. The applicability and impact of the proposals for enhanced clinical risk assessment require prospective evaluation. PMID:27084274

  6. Prediction models for the risk of spontaneous preterm birth based on maternal characteristics: a systematic review and independent external validation.

    PubMed

    Meertens, Linda J E; van Montfort, Pim; Scheepers, Hubertina C J; van Kuijk, Sander M J; Aardenburg, Robert; Langenveld, Josje; van Dooren, Ivo M A; Zwaan, Iris M; Spaanderman, Marc E A; Smits, Luc J M

    2018-04-17

    Prediction models may contribute to personalized, risk-based management of women at high risk of spontaneous preterm delivery. Although prediction models are published frequently, often with promising results, external validation is generally lacking. We performed a systematic review of prediction models for the risk of spontaneous preterm birth based on routine clinical parameters, and additionally externally validated and evaluated the clinical potential of the models. Prediction models based on routinely collected maternal parameters obtainable during the first 16 weeks of gestation were eligible for selection. Risk of bias was assessed according to the CHARMS guidelines. We validated the selected models in a Dutch multicenter prospective cohort study comprising 2614 unselected pregnant women. Information on predictors was obtained by a web-based questionnaire. Predictive performance of the models was quantified by the area under the receiver operating characteristic curve (AUC) and calibration plots for the outcomes spontaneous preterm birth <37 weeks and <34 weeks of gestation. Clinical value was evaluated by means of decision curve analysis and by calculating classification accuracy for different risk thresholds. Four studies describing five prediction models fulfilled the eligibility criteria. Risk of bias assessment revealed a moderate to high risk of bias in three studies. The AUC of the models ranged from 0.54 to 0.67 and from 0.56 to 0.70 for the outcomes spontaneous preterm birth <37 weeks and <34 weeks of gestation, respectively. A subanalysis showed that the models discriminated poorly (AUC 0.51-0.56) for nulliparous women. Although we recalibrated the models, two models retained evidence of overfitting. The decision curve analysis showed low clinical benefit for the best performing models. This review revealed several reporting and methodological shortcomings of published prediction models for spontaneous preterm birth. 
Our external validation study indicated that none of the models had the ability to predict spontaneous preterm birth adequately in our population. Further improvement of prediction models, using recent knowledge about both model development and potential risk factors, is necessary to provide an added value in personalized risk assessment of spontaneous preterm birth. © 2018 The Authors Acta Obstetricia et Gynecologica Scandinavica published by John Wiley & Sons Ltd on behalf of Nordic Federation of Societies of Obstetrics and Gynecology (NFOG).
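    The AUC used to quantify discrimination above has a simple probabilistic reading: it equals the probability that a randomly chosen case receives a higher predicted risk than a randomly chosen non-case (the Mann-Whitney statistic). A minimal sketch with hypothetical scores:

```python
# AUC via the Mann-Whitney interpretation: fraction of (case, control) pairs
# in which the case scores higher, counting ties as half. Scores are made up.

def auc(case_scores, control_scores):
    wins = ties = 0
    for c in case_scores:
        for k in control_scores:
            if c > k:
                wins += 1
            elif c == k:
                ties += 1
    return (wins + 0.5 * ties) / (len(case_scores) * len(control_scores))
```

    An AUC of 0.54, as reported for the weakest model above, means the model ranks a true preterm case above a non-case only 54% of the time, barely better than a coin flip.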

  7. Integration of an Evidence Base into a Probabilistic Risk Assessment Model. The Integrated Medical Model Database: An Organized Evidence Base for Assessing In-Flight Crew Health Risk and System Design

    NASA Technical Reports Server (NTRS)

    Saile, Lynn; Lopez, Vilma; Bickham, Grandin; FreiredeCarvalho, Mary; Kerstman, Eric; Byrne, Vicky; Butler, Douglas; Myers, Jerry; Walton, Marlei

    2011-01-01

    This slide presentation reviews the Integrated Medical Model (IMM) database, an organized evidence base for assessing in-flight crew health risk. The database is a relational database accessible to many users. It quantifies the model inputs by ranking each, based on the highest value of the data, on a Level of Evidence (LOE) scale, and by a Quality of Evidence (QOE) score that assesses the evidence base for each medical condition. The IMM evidence base has already provided invaluable information for designers, among other uses.

  8. An in-depth assessment of a diagnosis-based risk adjustment model based on national health insurance claims: the application of the Johns Hopkins Adjusted Clinical Group case-mix system in Taiwan.

    PubMed

    Chang, Hsien-Yen; Weiner, Jonathan P

    2010-01-18

    Diagnosis-based risk adjustment is becoming an important issue globally as a result of its implications for payment, high-risk predictive modelling and provider performance assessment. The Taiwanese National Health Insurance (NHI) programme provides universal coverage and maintains a single national computerized claims database, which enables the application of diagnosis-based risk adjustment. However, research regarding risk adjustment is limited. This study aims to examine the performance of the Adjusted Clinical Group (ACG) case-mix system using claims-based diagnosis information from the Taiwanese NHI programme. A random sample of NHI enrollees was selected. Those continuously enrolled in 2002 were included for concurrent analyses (n = 173,234), while those enrolled in both 2002 and 2003 were included for prospective analyses (n = 164,562). Health status measures derived from 2002 diagnoses were used to explain the 2002 and 2003 health expenditure. A multivariate linear regression model was adopted after comparing the performance of seven different statistical models. Split-validation was performed in order to avoid overfitting. The performance measures were adjusted R2 and mean absolute prediction error for five types of expenditure at the individual level, and the predictive ratio of total expenditure at the group level. The more comprehensive models performed better when used to explain resource utilization. The adjusted R2 of total expenditure in the concurrent/prospective analyses was 4.2%/4.4% in the demographic model, 15%/10% in the ACG or ADG (Aggregated Diagnosis Group) model, and 40%/22% in the models containing EDCs (Expanded Diagnosis Clusters). When predicting expenditure for groups based on expenditure quintiles, all models underpredicted the highest expenditure group and overpredicted the four other groups. For groups based on morbidity burden, the ACG model had the best performance overall. 
Given the widespread availability of claims data and the superior explanatory power of claims-based risk adjustment models over demographics-only models, Taiwan's government should consider using claims-based models for policy-relevant applications. The performance of the ACG case-mix system in Taiwan was comparable to that found in other countries. This suggested that the ACG system could be applied to Taiwan's NHI even though it was originally developed in the USA. Many of the findings in this paper are likely to be relevant to other diagnosis-based risk adjustment methodologies.

  9. University of North Carolina Caries Risk Assessment Study: comparisons of high risk prediction, any risk prediction, and any risk etiologic models.

    PubMed

    Beck, J D; Weintraub, J A; Disney, J A; Graves, R C; Stamm, J W; Kaste, L M; Bohannan, H M

    1992-12-01

    The purpose of this analysis is to compare three different statistical models for predicting which children are likely to be at risk of developing dental caries over a 3-yr period. Data are based on 4117 children who participated in the University of North Carolina Caries Risk Assessment Study, a longitudinal study conducted in the Aiken, South Carolina, and Portland, Maine areas. The three models differed with respect to either the types of variables included or the definition of the disease outcome. The two "Prediction" models included both risk-factor variables thought to cause dental caries and indicator variables that are associated with dental caries but are not thought to be causal for the disease. The "Etiologic" model included only etiologic factors as variables. A dichotomous outcome measure (none vs. any 3-yr increment) was used in the "Any Risk Etiologic" model and the "Any Risk Prediction" model. Another outcome, based on a gradient measure of disease, was used in the "High Risk Prediction" model. The variables that are significant in these models vary across grades and sites, but are more consistent in the Etiologic model than in the Prediction models. However, among the three sets of models, the Any Risk Prediction models have the highest sensitivity and positive predictive values, whereas the High Risk Prediction models have the highest specificity and negative predictive values. Considerations in determining model preference are discussed.

  10. Modeling Individual Patient Preferences for Colorectal Cancer Screening Based on Their Tolerance for Complications Risk.

    PubMed

    Taksler, Glen B; Perzynski, Adam T; Kattan, Michael W

    2017-04-01

    Recommendations for colorectal cancer screening encourage patients to choose among various screening methods based on individual preferences for benefits, risks, screening frequency, and discomfort. We devised a model to illustrate how individuals with varying tolerance for screening complications risk might decide on their preferred screening strategy. We developed a discrete-time Markov mathematical model that allowed hypothetical individuals to maximize expected lifetime utility by selecting screening method, start age, stop age, and frequency. Individuals could choose from stool-based testing every 1 to 3 years, flexible sigmoidoscopy every 1 to 20 years with annual stool-based testing, colonoscopy every 1 to 20 years, or no screening. We compared the life expectancy gained from the chosen strategy with the life expectancy available from a benchmark strategy of decennial colonoscopy. For an individual at average risk of colorectal cancer who was risk neutral with respect to screening complications (and therefore was willing to undergo screening if it would actuarially increase life expectancy), the model predicted that he or she would choose colonoscopy every 10 years, from age 53 to 73 years, consistent with national guidelines. For a similar individual who was moderately averse to screening complications risk (and therefore required a greater increase in life expectancy to accept potential risks of colonoscopy), the model predicted that he or she would prefer flexible sigmoidoscopy every 12 years with annual stool-based testing, with 93% of the life expectancy benefit of decennial colonoscopy. For an individual with higher risk aversion, the model predicted that he or she would prefer 2 lifetime flexible sigmoidoscopies, 20 years apart, with 70% of the life expectancy benefit of decennial colonoscopy. 
Mathematical models may formalize how individuals with different risk attitudes choose between various guideline-recommended colorectal cancer screening strategies.
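    The discrete-time Markov structure described above can be illustrated with a deliberately stripped-down sketch: three states (well, cancer, dead), and a screening option that lowers the cancer hazard at the cost of a small per-procedure complication risk. All transition probabilities and the hazard reduction are toy values, not the published model's parameters:

```python
# Toy three-state discrete-time Markov model of a screening trade-off.
# All probabilities are illustrative; the published model is far richer
# (utilities, multiple screening modalities, age-dependent hazards).

def expected_life_years(p_cancer, p_die_well, p_die_cancer, screen_every=None,
                        hazard_reduction=0.7, p_complication_death=0.0002,
                        start_age=50, stop_age=100):
    well, cancer, life_years = 1.0, 0.0, 0.0
    for age in range(start_age, stop_age):
        if screen_every and (age - start_age) % screen_every == 0:
            well *= 1 - p_complication_death       # per-procedure risk
        life_years += well + cancer                # person-years lived this year
        p_c = p_cancer * ((1 - hazard_reduction) if screen_every else 1.0)
        new_cancer = well * p_c
        well = (well - new_cancer) * (1 - p_die_well)
        cancer = (cancer + new_cancer) * (1 - p_die_cancer)
    return life_years
```

    Raising `p_complication_death` or discounting it more heavily for a risk-averse individual is exactly the kind of lever that, in the full model, tips the preferred strategy from colonoscopy toward sigmoidoscopy or stool-based testing.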

  11. Measuring the coupled risks: A copula-based CVaR model

    NASA Astrophysics Data System (ADS)

    He, Xubiao; Gong, Pu

    2009-01-01

    Integrated risk management for financial institutions requires an approach for aggregating risk types (such as market and credit) whose distributional shapes vary considerably. Financial institutions often ignore the coupling influence of risks and consequently underestimate their overall financial risk. We constructed a copula-based Conditional Value-at-Risk (CVaR) model for market and credit risks. This technique allows us to incorporate realistic marginal distributions that capture essential empirical features of these risks, such as skewness and fat tails, while allowing for a rich dependence structure. Finally, a numerical simulation method is used to implement the model. Our results indicate that the coupled risks for a listed company's stock may be underestimated if credit risk is ignored, especially for listed companies with poor credit quality.
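    A minimal sketch of the copula-plus-CVaR idea, using a Gaussian copula stand-in and invented skewed marginals (the paper's actual marginals and dependence parameters are not reproduced here): simulate correlated market and credit losses, then read VaR and CVaR off the aggregate loss distribution.

```python
# Monte Carlo sketch: Gaussian-copula-coupled, skewed loss marginals,
# then VaR and CVaR of the aggregate loss. All parameters are illustrative.

import math
import random

def simulate_cvar(n=50_000, rho=0.5, alpha=0.95, seed=1):
    rng = random.Random(seed)
    losses = []
    for _ in range(n):
        z1 = rng.gauss(0, 1)
        z2 = rho * z1 + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)  # correlated normals
        market = math.exp(0.5 * z1) - 1   # skewed marginal (illustrative)
        credit = math.exp(0.8 * z2) - 1
        losses.append(market + credit)
    losses.sort()
    k = int(alpha * n)
    var = losses[k]                       # Value-at-Risk at level alpha
    tail = losses[k:]
    return var, sum(tail) / len(tail)     # CVaR = mean loss beyond VaR
```

    Setting `rho = 0` in this sketch reproduces the "ignore the coupling" case the abstract warns about: the tail of the aggregate distribution thins and both VaR and CVaR fall.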

  12. Data Sources for the Model-based Small Area Estimates of Cancer Risk Factors and Screening Behaviors - Small Area Estimates

    Cancer.gov

    The model-based estimates of important cancer risk factors and screening behaviors are obtained by combining the responses to the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS).

  13. Adoption of Building Information Modelling in project planning risk management

    NASA Astrophysics Data System (ADS)

    Mering, M. M.; Aminudin, E.; Chai, C. S.; Zakaria, R.; Tan, C. S.; Lee, Y. Y.; Redzuan, A. A.

    2017-11-01

    Efficient and effective risk management requires a systematic methodology as well as knowledge and experience. However, if risk management is not addressed from the start of the project, the task becomes considerably more complicated and less efficient. This paper presents the adoption of Building Information Modelling (BIM) in project planning risk management. The objectives are to identify traditional risk management practices and their function; to determine the best function of BIM in risk management; and to investigate the efficiency of adopting BIM-based risk management during the project planning phase. A quantitative approach was adopted to obtain data. Based on the data analysis, the risks that arise when traditional risk management is implemented are a lack of compliance with project requirements and a failure to recognise risks and develop responses to opportunities. When BIM is used in project planning, its tracking of cost control and cash flow affects whether the project cycle is completed on time. 5D cost estimation and cash flow modelling benefit risk management by enabling budgets and costs to be planned, controlled and managed reasonably. The two factors that most benefit from a BIM-based technology were a formwork plan with an integrated fall-protection plan and a design-for-safety model check. By adopting risk management, the potential risks linked to a project, and the responses to those risks, can be identified so that the risks are reduced to an acceptable extent. This means recognising potential risks and avoiding threats by reducing their negative effects. BIM-based risk management can enhance the planning process of construction projects and benefits the construction players in various aspects. It is important to understand the application of BIM-based risk management, as it can serve as a lesson learnt for others implementing BIM and increase the quality of the project.

  14. Mapping groundwater contamination risk of multiple aquifers using multi-model ensemble of machine learning algorithms.

    PubMed

    Barzegar, Rahim; Moghaddam, Asghar Asghari; Deo, Ravinesh; Fijani, Elham; Tziritis, Evangelos

    2018-04-15

    Constructing accurate and reliable groundwater risk maps provides scientifically prudent and strategic measures for the protection and management of groundwater. The objectives of this paper are to design and validate machine-learning-based risk maps using ensemble modelling with an integrative approach. We employ extreme learning machines (ELM), multivariate adaptive regression splines (MARS), M5 Tree and support vector regression (SVR), applied to multiple aquifer systems (unconfined, semi-confined and confined) in the Marand plain, North West Iran, to encapsulate the merits of the individual learning algorithms in a final committee-based ANN model. The DRASTIC Vulnerability Index (VI) ranged from 56.7 to 128.1, categorized into no-risk, low-vulnerability and moderate-vulnerability classes. The correlation coefficient (r) and Willmott's Index (d) between NO3 concentrations and VI were 0.64 and 0.314, respectively. To improve on the original DRASTIC method, the vulnerability indices were adjusted by NO3 concentrations, termed the groundwater contamination risk (GCR). Seven DRASTIC parameters served as the inputs, and GCR values as the outputs, of the individual machine learning models feeding the fully optimized committee-based ANN predictive model. The correlation indicators demonstrated that the ELM and SVR models outperformed the MARS and M5 Tree models, by virtue of larger d and r values. The r and d metrics for the committee-based ANN multi-model in the testing phase were 0.8889 and 0.7913, respectively, revealing the superiority of the integrated (ensemble) machine learning models over the original DRASTIC approach. The newly designed multi-model ensemble-based approach can be considered a pragmatic step for mapping the groundwater contamination risk of multiple aquifer systems, yielding the high accuracy of the ANN committee-based model. Copyright © 2017 Elsevier B.V. All rights reserved.
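    The two agreement metrics reported above, Pearson's r and Willmott's index of agreement d, can be computed directly from observed and predicted values. The sketch below uses the standard formulas with hypothetical data, not the study's NO3/GCR series:

```python
# Pearson correlation r and Willmott's index of agreement d:
#   d = 1 - sum((P - O)^2) / sum((|P - Obar| + |O - Obar|)^2)
# Input series here are hypothetical.

import math

def pearson_r(obs, pred):
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    return cov / (so * sp)

def willmott_d(obs, pred):
    mo = sum(obs) / len(obs)
    num = sum((p - o) ** 2 for o, p in zip(obs, pred))
    den = sum((abs(p - mo) + abs(o - mo)) ** 2 for o, p in zip(obs, pred))
    return 1 - num / den
```

    Note that d is bounded by 1 (perfect agreement) and, unlike r, penalizes systematic bias between predictions and observations, which is why the paper reports both.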

  15. Utility of genetic and non-genetic risk factors in prediction of type 2 diabetes: Whitehall II prospective cohort study.

    PubMed

    Talmud, Philippa J; Hingorani, Aroon D; Cooper, Jackie A; Marmot, Michael G; Brunner, Eric J; Kumari, Meena; Kivimäki, Mika; Humphries, Steve E

    2010-01-14

    To assess the performance of a panel of common single nucleotide polymorphisms (genotypes) associated with type 2 diabetes in distinguishing incident cases of future type 2 diabetes (discrimination), and to examine the effect of adding genetic information to previously validated non-genetic (phenotype-based) models developed to estimate the absolute risk of type 2 diabetes. Workplace-based prospective cohort study with three 5-yearly medical screenings. 5535 initially healthy people (mean age 49 years; 33% women), of whom 302 developed new-onset type 2 diabetes over 10 years. Non-genetic variables were those included in two established risk models: the Cambridge type 2 diabetes risk score (age, sex, drug treatment, family history of type 2 diabetes, body mass index, smoking status) and the Framingham offspring study type 2 diabetes risk score (age, sex, parental history of type 2 diabetes, body mass index, high density lipoprotein cholesterol, triglycerides, fasting glucose). Genetic variables were 20 single nucleotide polymorphisms associated with susceptibility to type 2 diabetes. Cases of incident type 2 diabetes were defined on the basis of a standard oral glucose tolerance test, self-report of a doctor's diagnosis, or the use of anti-diabetic drugs. A genetic score based on the number of risk alleles carried (range 0-40; area under receiver operating characteristics curve 0.54, 95% confidence interval 0.50 to 0.58) and a genetic risk function in which carriage of risk alleles was weighted according to the summary odds ratios of their effect from meta-analyses of genetic studies (area under receiver operating characteristics curve 0.55, 0.51 to 0.59) did not effectively discriminate cases of diabetes. The Cambridge risk score (area under curve 0.72, 0.69 to 0.76) and the Framingham offspring risk score (area under curve 0.78, 0.75 to 0.82) discriminated cases better than the genotype-based tests. 
Adding genetic information to phenotype based risk models did not improve discrimination and provided only a small improvement in model calibration and a modest net reclassification improvement of about 5% when added to the Cambridge risk score but not when added to the Framingham offspring risk score. The phenotype based risk models provided greater discrimination for type 2 diabetes than did models based on 20 common independently inherited diabetes risk alleles. The addition of genotypes to phenotype based risk models produced only minimal improvement in accuracy of risk estimation assessed by recalibration and, at best, a minor net reclassification improvement. The major translational application of the currently known common, small effect genetic variants influencing susceptibility to type 2 diabetes is likely to come from the insight they provide on causes of disease and potential therapeutic targets.
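    The two genetic scores compared above can be sketched directly: an unweighted count of risk alleles carried, and a score weighting each allele by the log odds ratio from meta-analyses. The SNP count and odds ratios below are illustrative stand-ins, not the study's 20 variants:

```python
# Unweighted and log-odds-weighted genetic risk scores.
# Allele counts (0, 1 or 2 per SNP) and odds ratios are illustrative.

import math

def allele_count_score(risk_alleles):
    """Simple count of risk alleles carried across SNPs (range 0 to 2 * n_snps)."""
    return sum(risk_alleles)

def weighted_score(risk_alleles, odds_ratios):
    """Each allele contributes its per-allele log odds ratio."""
    return sum(a * math.log(orr) for a, orr in zip(risk_alleles, odds_ratios))
```

    The study's finding that weighting barely moved the AUC (0.54 to 0.55) reflects how small the per-allele odds ratios of common variants are; with effects this close to 1, the weighted and unweighted scores rank individuals almost identically.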

  16. Modeling Joint Exposures and Health Outcomes for Cumulative Risk Assessment: The Case of Radon and Smoking

    PubMed Central

    Chahine, Teresa; Schultz, Bradley D.; Zartarian, Valerie G.; Xue, Jianping; Subramanian, SV; Levy, Jonathan I.

    2011-01-01

    Community-based cumulative risk assessment requires characterization of exposures to multiple chemical and non-chemical stressors, with consideration of how the non-chemical stressors may influence risks from chemical stressors. Residential radon provides an interesting case example, given its large attributable risk, effect modification due to smoking, and significant variability in radon concentrations and smoking patterns. In spite of this fact, no study to date has estimated geographic and sociodemographic patterns of both radon and smoking in a manner that would allow for inclusion of radon in community-based cumulative risk assessment. In this study, we apply multi-level regression models to explain variability in radon based on housing characteristics and geological variables, and construct a regression model predicting housing characteristics using U.S. Census data. Multi-level regression models of smoking based on predictors common to the housing model allow us to link the exposures. We estimate county-average lifetime lung cancer risks from radon ranging from 0.15 to 1.8 in 100, with high-risk clusters in areas and for subpopulations with high predicted radon and smoking rates. Our findings demonstrate the viability of screening-level assessment to characterize patterns of lung cancer risk from radon, with an approach that can be generalized to multiple chemical and non-chemical stressors. PMID:22016710

  17. Risk prediction models for selection of lung cancer screening candidates: A retrospective validation study

    PubMed Central

    ten Haaf, Kevin; Tammemägi, Martin C.; Han, Summer S.; Kong, Chung Yin; Plevritis, Sylvia K.; de Koning, Harry J.; Steyerberg, Ewout W.

    2017-01-01

    Background: Selection of candidates for lung cancer screening based on individual risk has been proposed as an alternative to criteria based on age and cumulative smoking exposure (pack-years). Nine previously established risk models were assessed for their ability to identify those most likely to develop or die from lung cancer. All models considered age and various aspects of smoking exposure (smoking status, smoking duration, cigarettes per day, pack-years smoked, time since smoking cessation) as risk predictors. In addition, some models considered factors such as gender, race, ethnicity, education, body mass index, chronic obstructive pulmonary disease, emphysema, personal history of cancer, personal history of pneumonia, and family history of lung cancer. Methods and findings: Retrospective analyses were performed on 53,452 National Lung Screening Trial (NLST) participants (1,925 lung cancer cases and 884 lung cancer deaths) and 80,672 Prostate, Lung, Colorectal and Ovarian Cancer Screening Trial (PLCO) ever-smoking participants (1,463 lung cancer cases and 915 lung cancer deaths). Six-year lung cancer incidence and mortality risk predictions were assessed for (1) calibration (graphically), by comparing the agreement between predicted and observed risks; (2) discrimination (area under the receiver operating characteristic curve [AUC]) between individuals with and without lung cancer (death); and (3) clinical usefulness (net benefit in decision curve analysis), by identifying risk thresholds at which applying risk-based eligibility would improve lung cancer screening efficacy. To further assess performance, risk model sensitivities and specificities in the PLCO were compared to those based on the NLST eligibility criteria. Calibration was satisfactory, but discrimination ranged widely (AUCs from 0.61 to 0.81). The models outperformed the NLST eligibility criteria over a substantial range of risk thresholds in decision curve analysis, with a higher sensitivity for all models and a slightly higher specificity for some models. The PLCOm2012, Bach, and Two-Stage Clonal Expansion incidence models had the best overall performance, with AUCs >0.68 in the NLST and >0.77 in the PLCO. These three models had the highest sensitivity and specificity for predicting 6-y lung cancer incidence in the PLCO chest radiography arm, with sensitivities >79.8% and specificities >62.3%. In contrast, the NLST eligibility criteria yielded a sensitivity of 71.4% and a specificity of 62.2%. Limitations of this study include the lack of identification of optimal risk thresholds, as this requires additional information on the long-term benefits (e.g., life-years gained and mortality reduction) and harms (e.g., overdiagnosis) of risk-based screening strategies using these models. In addition, information on some predictor variables included in the risk prediction models was not available. Conclusions: Selection of individuals for lung cancer screening using individual risk is superior to selection criteria based on age and pack-years alone. The benefits, harms, and feasibility of implementing lung cancer screening policies based on risk prediction models should be assessed and compared with those of current recommendations. PMID:28376113

  18. Health-Based Capitation Risk Adjustment in Minnesota Public Health Care Programs

    PubMed Central

    Gifford, Gregory A.; Edwards, Kevan R.; Knutson, David J.

    2004-01-01

    This article documents the history and implementation of health-based capitation risk adjustment in Minnesota public health care programs, and identifies key implementation issues. Capitation payments in these programs are risk adjusted using a historical health plan risk score based on concurrent risk assessment. Phased implementation of capitation risk adjustment for these programs began January 1, 2000. Minnesota's experience with capitation risk adjustment suggests that: (1) implementation can accelerate encounter data submission, (2) administrative decisions made during implementation can create issues that affect payment model performance, and (3) changes in diagnosis data management during implementation may require changes to the payment model. PMID:25372356

  19. A microRNA-based prediction model for lymph node metastasis in hepatocellular carcinoma.

    PubMed

    Zhang, Li; Xiang, Zuo-Lin; Zeng, Zhao-Chong; Fan, Jia; Tang, Zhao-You; Zhao, Xiao-Mei

    2016-01-19

    We developed an efficient microRNA (miRNA)-based model to predict the risk of lymph node metastasis (LNM) in hepatocellular carcinoma (HCC). We first evaluated a training cohort of 192 HCC patients after hepatectomy and found five LNM-associated predictive factors: vascular invasion, Barcelona Clinic Liver Cancer stage, miR-145, miR-31, and miR-92a. These five statistically independent factors were used to develop a predictive model, whose value was confirmed in a validation cohort of 209 consecutive HCC patients. The model scores LNM risk from 0 to 8, with a cutoff value of 4 used to distinguish high-risk from low-risk groups. At 5 years in the validation cohort, the model's sensitivity and specificity were 69.6% and 80.2%, respectively, and the area under the curve (AUC) for the miRNA-based prognostic model was 0.860. The 5-year positive and negative predictive values of the model in the validation cohort were 30.3% and 95.5%, respectively. Cox regression analysis revealed that the LNM hazard ratio of the high-risk versus low-risk groups was 11.751 (95% CI, 5.110-27.021; P < 0.001) in the validation cohort. In conclusion, the miRNA-based model is reliable and accurate for the early prediction of LNM in patients with HCC.
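    Predictive values such as the 30.3%/95.5% pair reported above depend on disease prevalence as well as on sensitivity and specificity, via the standard Bayes relationship. A sketch (the prevalence value in the example is illustrative, not the cohort's LNM rate):

```python
# Positive and negative predictive values from sensitivity, specificity and
# prevalence (Bayes' rule). The prevalence used in the example is illustrative.

def ppv_npv(sens, spec, prev):
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv
```

    This is why a model with respectable sensitivity and specificity can still show a modest PPV when the outcome, like LNM here, is relatively uncommon, while its NPV stays high.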

  20. Approaches for the Application of Physiologically Based ...

    EPA Pesticide Factsheets

    This draft report of Approaches for the Application of Physiologically Based Pharmacokinetic (PBPK) Models and Supporting Data in Risk Assessment addresses the application and evaluation of PBPK models for risk assessment purposes. These models represent an important class of dosimetry models that are useful for predicting internal dose at target organs for risk assessment applications. Topics covered include: the types of data required for use of PBPK models in risk assessment, evaluation of PBPK models for use in risk assessment, and the application of these models to address uncertainties resulting from extrapolations (e.g. interspecies extrapolation) often used in risk assessment. In addition, appendices are provided that include a compilation of chemical partition coefficients and rate constants, algorithms for estimating chemical-specific parameters, and a list of publications relating to PBPK modeling. This report is primarily meant to serve as a learning tool for EPA scientists and risk assessors who may be less familiar with the field. In addition, this report can be informative to PBPK modelers within and outside the Agency, as it provides an assessment of the types of data and models that the EPA requires for consideration of a model for use in risk assessment.
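    As an illustration of the class of models the report covers, a minimal flow-limited PBPK sketch with one blood and one tissue compartment can be integrated with Euler steps. All parameter values below are invented for illustration; real PBPK models use many organ compartments with physiological flows and partition coefficients.

```python
# Toy flow-limited PBPK sketch: an i.v. bolus enters a well-mixed blood
# compartment, exchanges with a tissue compartment at flow q, and partitions
# with tissue:blood coefficient p. With no clearance, total mass is conserved
# and tissue concentration equilibrates to p times blood concentration.

def simulate(dose=10.0, q=5.0, v_blood=5.0, v_tissue=35.0, p=4.0,
             dt=0.001, t_end=15.0):
    """Euler integration of a closed two-compartment flow-limited model."""
    c_b = dose / v_blood  # bolus dose distributed in blood volume
    c_t = 0.0
    for _ in range(int(t_end / dt)):
        flux = q * (c_t / p - c_b)                       # net flow into blood
        c_b_new = c_b + dt * flux / v_blood
        c_t_new = c_t + dt * q * (c_b - c_t / p) / v_tissue
        c_b, c_t = c_b_new, c_t_new
    return c_b, c_t
```

Because uptake and return fluxes are equal and opposite, each Euler step conserves total mass exactly, which makes a good sanity check for this kind of dosimetry code.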

  1. Impact of model-based risk analysis for liver surgery planning.

    PubMed

    Hansen, C; Zidowitz, S; Preim, B; Stavrou, G; Oldhafer, K J; Hahn, H K

    2014-05-01

    A model-based risk analysis for oncologic liver surgery was described in previous work (Preim et al. in Proceedings of international symposium on computer assisted radiology and surgery (CARS), Elsevier, Amsterdam, pp. 353–358, 2002; Hansen et al. Int J Comput Assist Radiol Surg 4(5):469–474, 2009). In this paper, we present an evaluation of this method. To assess whether and how the risk analysis facilitates the process of liver surgery planning, an explorative user study with 10 liver experts was conducted. The purpose was to compare and analyze their decision-making. The results of the study show that model-based risk analysis enhances awareness of surgical risk in the planning stage. Participants preferred smaller resection volumes and agreed more on the width of safety margins when the risk analysis was available. At the same time, neither the time to complete the planning task nor the confidence of participants increased when using the risk analysis. This work shows that the applied model-based risk analysis may influence important planning decisions in liver surgery. It lays a basis for further clinical evaluations and points out important fields for future research.

  2. A Risk-Analysis Approach to Implementing Web-Based Assessment

    ERIC Educational Resources Information Center

    Ricketts, Chris; Zakrzewski, Stan

    2005-01-01

    Computer-Based Assessment is a risky business. This paper proposes the use of a model for web-based assessment systems that identifies pedagogic, operational, technical (non web-based), web-based and financial risks. The strategies and procedures for risk elimination or reduction arise from risk analysis and management and are the means by which…

  3. Development of Rock Engineering Systems-Based Models for Flyrock Risk Analysis and Prediction of Flyrock Distance in Surface Blasting

    NASA Astrophysics Data System (ADS)

    Faramarzi, Farhad; Mansouri, Hamid; Farsangi, Mohammad Ali Ebrahimi

    2014-07-01

    The environmental effects of blasting must be controlled in order to comply with regulatory limits. Because of safety concerns, the risk of damage to infrastructure, equipment, and property, and the need for good fragmentation, flyrock control is crucial in blasting operations. If measures to decrease flyrock are taken, the flyrock distance is limited and, in return, the risk of damage can be reduced or eliminated. This paper deals with modeling the level of risk associated with flyrock, as well as flyrock distance prediction, based on the rock engineering systems (RES) methodology. In the proposed models, 13 parameters influencing flyrock due to blasting are considered as inputs, and the flyrock distance and associated level of risk as outputs. Ease of measurement was also taken into account when selecting input parameters. The data for 47 blasts carried out at the Sungun copper mine, western Iran, were used to predict the level of risk and flyrock distance corresponding to each blast. The results showed that, for the 47 blasts carried out at the Sungun copper mine, the estimated risk levels are mostly in accordance with the measured flyrock distances. Furthermore, a comparison was made between the results of the flyrock distance predictive RES-based model, the multivariate regression analysis model (MVRM), and the dimensional analysis model. For the RES-based model, R² and root mean square error (RMSE) are equal to 0.86 and 10.01, respectively, whereas for the MVRM and dimensional analysis, R² and RMSE are equal to (0.84 and 12.20) and (0.76 and 13.75), respectively. These results confirm the better performance of the RES-based model over the other proposed models.
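    The two goodness-of-fit measures used to compare the models, R² and RMSE, can be computed directly from observed and predicted flyrock distances; a minimal sketch (the sample data in the test are illustrative, not the mine's):

```python
# Coefficient of determination (R^2) and root-mean-square error (RMSE)
# between observed and model-predicted values, as used to compare the
# RES-based, MVRM, and dimensional-analysis flyrock models.
import math

def r_squared(observed, predicted):
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

def rmse(observed, predicted):
    n = len(observed)
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)
```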

  4. A Corrosion Risk Assessment Model for Underground Piping

    NASA Technical Reports Server (NTRS)

    Datta, Koushik; Fraser, Douglas R.

    2009-01-01

    The Pressure Systems Manager at NASA Ames Research Center (ARC) has embarked on a project to collect data and develop risk assessment models to support risk-informed decision making regarding future inspections of underground pipes at ARC. This paper shows progress in one area of this project: a corrosion risk assessment model for the underground high-pressure air distribution piping system at ARC. It consists of a Corrosion Model for pipe segments, a Pipe Wrap Protection Model, and a Pipe Stress Model for a pipe segment. A Monte Carlo simulation of the combined models provides a distribution of the failure probabilities. Sensitivity study results show that model uncertainty, or lack of knowledge, is the dominant contributor to the calculated unreliability of the underground piping system. As a result, the Pressure Systems Manager may consider investing resources specifically focused on reducing these uncertainties. Future work includes completing the data collection effort for the existing ground-based pressure systems and applying the risk models to risk-based inspection strategies for the underground pipes at ARC.
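    A toy version of the Monte Carlo simulation of the combined models might look like the following. Every distribution and parameter here is invented for illustration; the point is the structure: sample each sub-model, combine them into a capacity-versus-demand comparison, and count failing trials.

```python
# Toy Monte Carlo sketch of a combined corrosion / wrap-protection / stress
# simulation for one pipe segment: sample corrosion wall loss and a stress
# demand, and count trials where remaining capacity falls below demand.
# All distributions and numbers are hypothetical.
import random

def simulate_failure_probability(n_trials=100_000, seed=42):
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_trials):
        wall_loss = rng.lognormvariate(-1.5, 0.5)       # corrosion model (hypothetical)
        wrap_factor = 0.5 if rng.random() < 0.8 else 1.0  # wrap halves loss 80% of the time
        capacity = 10.0 * (1.0 - wrap_factor * wall_loss)  # residual strength
        demand = rng.gauss(5.0, 1.0)                    # stress model (hypothetical)
        if capacity < demand:
            failures += 1
    return failures / n_trials
```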

  5. Laboratory-based versus non-laboratory-based method for assessment of cardiovascular disease risk: the NHANES I Follow-up Study cohort

    PubMed Central

    Gaziano, Thomas A; Young, Cynthia R; Fitzmaurice, Garrett; Atwood, Sidney; Gaziano, J Michael

    2008-01-01

    Summary Background Around 80% of all cardiovascular deaths occur in developing countries. Assessment of those patients at high risk is an important strategy for prevention. Since developing countries have limited resources for prevention strategies that require laboratory testing, we assessed whether a risk prediction method that did not require any laboratory tests could be as accurate as one requiring laboratory information. Methods The National Health and Nutrition Examination Survey (NHANES) was a prospective cohort study of 14 407 US participants aged 25–74 years at the time they were first examined (between 1971 and 1975). Our follow-up study population included participants with complete information on these surveys who did not report a history of cardiovascular disease (myocardial infarction, heart failure, stroke, angina) or cancer, yielding an analysis dataset of N=6186. We compared how well each method could predict first-time fatal and non-fatal cardiovascular disease events in this cohort. For the laboratory-based model, which required blood testing, we used standard risk factors to assess risk of cardiovascular disease: age, systolic blood pressure, smoking status, total cholesterol, reported diabetes status, and current treatment for hypertension. For the non-laboratory-based model, we substituted body-mass index for cholesterol. Findings In the cohort of 6186, there were 1529 first-time cardiovascular events and 578 (38%) deaths due to cardiovascular disease over 21 years. In women, the laboratory-based model was useful for predicting events, with a c statistic of 0.829. The c statistic of the non-laboratory-based model was 0.831. In men, the results were similar (0.784 for the laboratory-based model and 0.783 for the non-laboratory-based model). Results were similar between the laboratory-based and non-laboratory-based models in both men and women when restricted to fatal events only. 
Interpretation A method that uses non-laboratory-based risk factors predicted cardiovascular events as accurately as one that relied on laboratory-based values. This approach could simplify risk assessment in situations where laboratory testing is inconvenient or unavailable. PMID:18342687
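    The c statistic reported above is the probability that a model assigns a higher risk score to a participant who had an event than to one who did not. A minimal all-pairs sketch (quadratic in cohort size, so suitable only for small examples):

```python
# c statistic (concordance / AUC) computed by comparing every event /
# non-event pair of predicted risk scores; ties count as half-concordant.

def c_statistic(scores, events):
    """scores: predicted risks; events: truthy if the person had a CVD event."""
    event_scores = [s for s, e in zip(scores, events) if e]
    nonevent_scores = [s for s, e in zip(scores, events) if not e]
    concordant = ties = 0
    for se in event_scores:
        for sn in nonevent_scores:
            if se > sn:
                concordant += 1
            elif se == sn:
                ties += 1
    n_pairs = len(event_scores) * len(nonevent_scores)
    return (concordant + 0.5 * ties) / n_pairs
```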

  6. How TK-TD and population models for aquatic macrophytes could support the risk assessment for plant protection products.

    PubMed

    Hommen, Udo; Schmitt, Walter; Heine, Simon; Brock, Theo Cm; Duquesne, Sabine; Manson, Phil; Meregalli, Giovanna; Ochoa-Acuña, Hugo; van Vliet, Peter; Arts, Gertie

    2016-01-01

    This case study of the Society of Environmental Toxicology and Chemistry (SETAC) workshop MODELINK demonstrates the potential use of mechanistic effects models for macrophytes to extrapolate from effects of a plant protection product observed in laboratory tests to effects resulting from dynamic exposure on macrophyte populations in edge-of-field water bodies. A standard European Union (EU) risk assessment for an example herbicide based on macrophyte laboratory tests indicated risks for several exposure scenarios. Three of these scenarios are further analyzed using effect models for 2 aquatic macrophytes, the free-floating standard test species Lemna sp., and the sediment-rooted submerged additional standard test species Myriophyllum spicatum. Both models include a toxicokinetic (TK) part, describing uptake and elimination of the toxicant, a toxicodynamic (TD) part, describing the internal concentration-response function for growth inhibition, and a description of biomass growth as a function of environmental factors to allow simulating seasonal dynamics. The TK-TD models were calibrated and tested using laboratory tests, whereas the growth models were assumed to be fit for purpose based on comparisons of predictions with typical growth patterns observed in the field. For the risk assessment, biomass dynamics are predicted for the control situation and for several exposure levels. Based on specific protection goals for macrophytes, preliminary example decision criteria are suggested for evaluating the model outputs. The models refined the risk indicated by lower tier testing for 2 exposure scenarios, while confirming the risk associated with the third. Uncertainties related to the experimental and the modeling approaches and their application in the risk assessment are discussed. 
Based on this case study and the assumption that the models prove suitable for risk assessment once fully evaluated, we recommend that 1) ecological scenarios be developed that are also linked to the exposure scenarios, and 2) quantitative protection goals be set to facilitate the interpretation of model results for risk assessment. © 2015 SETAC.
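    The TK-TD structure described, uptake and elimination of the toxicant plus a concentration-response function modifying growth, can be sketched as a pair of Euler-integrated equations. The one-compartment TK model, the Hill-type inhibition, and all parameter values below are illustrative simplifications, not the calibrated Lemna or Myriophyllum models.

```python
# Toy TK-TD sketch: internal concentration follows first-order uptake (k_in)
# and elimination (k_out) from a constant water concentration; biomass grows
# at rate r scaled down by a Hill-type inhibition of the internal
# concentration. All parameters are hypothetical.

def simulate_growth(c_water, k_in=2.0, k_out=1.0, ec50=1.0, hill=2.0,
                    r=0.1, b0=1.0, dt=0.01, t_end=10.0):
    """Euler integration; returns final biomass after t_end time units."""
    c_i = 0.0   # internal concentration (TK state)
    b = b0      # biomass (TD / growth state)
    for _ in range(int(t_end / dt)):
        inhibition = c_i**hill / (ec50**hill + c_i**hill)
        b += dt * r * b * (1.0 - inhibition)
        c_i += dt * (k_in * c_water - k_out * c_i)
    return b
```

Running the sketch with and without exposure reproduces the qualitative pattern the abstract describes: biomass under exposure stays below the control trajectory.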

  7. Innovative Models of Dental Care Delivery and Coverage: Patient-Centric Dental Benefits Based on Digital Oral Health Risk Assessment.

    PubMed

    Martin, John; Mills, Shannon; Foley, Mary E

    2018-04-01

    Innovative models of dental care delivery and coverage are emerging across oral health care systems causing changes to treatment and benefit plans. A novel addition to these models is digital risk assessment, which offers a promising new approach that incorporates the use of a cloud-based technology platform to assess an individual patient's risk for oral disease. Risk assessment changes treatment by including risk as a modifier of treatment and as a determinant of preventive services. Benefit plans are being developed to use risk assessment to predetermine preventive benefits for patients identified at elevated risk for oral disease. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Launch Vehicle Debris Models and Crew Vehicle Ascent Abort Risk

    NASA Technical Reports Server (NTRS)

    Gee, Ken; Lawrence, Scott

    2013-01-01

    For manned space launch systems, a reliable abort system is required to reduce the risks associated with a launch vehicle failure during ascent. Understanding the risks associated with failure environments can be achieved through the use of physics-based models of these environments. The debris field created by destruction of the launch vehicle is one such environment. To better analyze the risk posed by debris, a physics-based model for generating launch vehicle debris catalogs has been developed. The model predicts the mass distribution of the debris field based on formulae developed from analysis of explosions. Imparted velocity distributions are computed using a shock-physics code to model the explosions within the launch vehicle. A comparison of the debris catalog with an existing catalog for the Shuttle external tank shows good agreement in the debris characteristics and the predicted debris strike probability. The model is used to analyze the effects of the number of debris pieces and velocity distributions on the strike probability and risk.

  9. A Review on Automatic Mammographic Density and Parenchymal Segmentation

    PubMed Central

    He, Wenda; Juette, Arne; Denton, Erika R. E.; Oliver, Arnau

    2015-01-01

    Breast cancer is the most frequently diagnosed cancer in women. However, the exact cause(s) of breast cancer still remains unknown. Early detection, precise identification of women at risk, and application of appropriate disease prevention measures are by far the most effective way to tackle breast cancer. There are more than 70 common genetic susceptibility factors included in the current non-image-based risk prediction models (e.g., the Gail and the Tyrer-Cuzick models). Image-based risk factors, such as mammographic densities and parenchymal patterns, have been established as biomarkers but have not been fully incorporated in the risk prediction models used for risk stratification in screening and/or measuring responsiveness to preventive approaches. Within computer aided mammography, automatic mammographic tissue segmentation methods have been developed for estimation of breast tissue composition to facilitate mammographic risk assessment. This paper presents a comprehensive review of automatic mammographic tissue segmentation methodologies developed over the past two decades and the evidence for risk assessment/density classification using segmentation. The aim of this review is to analyse how engineering advances have progressed and the impact automatic mammographic tissue segmentation has in a clinical environment, as well as to understand the current research gaps with respect to the incorporation of image-based risk factors in non-image-based risk prediction models. PMID:26171249

  10. Improving risk prediction accuracy for new soldiers in the U.S. Army by adding self-report survey data to administrative data.

    PubMed

    Bernecker, Samantha L; Rosellini, Anthony J; Nock, Matthew K; Chiu, Wai Tat; Gutierrez, Peter M; Hwang, Irving; Joiner, Thomas E; Naifeh, James A; Sampson, Nancy A; Zaslavsky, Alan M; Stein, Murray B; Ursano, Robert J; Kessler, Ronald C

    2018-04-03

    High rates of mental disorders, suicidality, and interpersonal violence early in the military career have raised interest in implementing preventive interventions with high-risk new enlistees. The Army Study to Assess Risk and Resilience in Servicemembers (STARRS) developed risk-targeting systems for these outcomes based on machine learning methods using administrative data predictors. However, administrative data omit many risk factors, raising the question whether risk targeting could be improved by adding self-report survey data to prediction models. If so, the Army may gain from routinely administering surveys that assess additional risk factors. The STARRS New Soldier Survey was administered to 21,790 Regular Army soldiers who agreed to have survey data linked to administrative records. As reported previously, machine learning models using administrative data as predictors found that small proportions of high-risk soldiers accounted for high proportions of negative outcomes. Other machine learning models using self-report survey data as predictors were developed previously for three of these outcomes: major physical violence and sexual violence perpetration among men and sexual violence victimization among women. Here we examined the extent to which this survey information increases prediction accuracy, over models based solely on administrative data, for those three outcomes. We used discrete-time survival analysis to estimate a series of models predicting first occurrence, assessing how model fit improved and concentration of risk increased when adding the predicted risk score based on survey data to the predicted risk score based on administrative data. The addition of survey data improved prediction significantly for all outcomes. 
In the most extreme case, the percentage of reported sexual violence victimization among the 5% of female soldiers with highest predicted risk increased from 17.5% using only administrative predictors to 29.4% adding survey predictors, a 67.9% proportional increase in prediction accuracy. Other proportional increases in concentration of risk ranged from 4.8% to 49.5% (median = 26.0%). Data from an ongoing New Soldier Survey could substantially improve accuracy of risk models compared to models based exclusively on administrative predictors. Depending upon the characteristics of interventions used, the increase in targeting accuracy from survey data might offset survey administration costs.
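    The concentration-of-risk figures quoted above are outcome rates within the top few percent of predicted risk. A minimal sketch of that calculation (the study's actual risk scores come from discrete-time survival models, which are not reproduced here):

```python
# Concentration of risk: rank people by predicted risk score and report the
# observed outcome rate among the top_fraction highest-ranked.

def outcome_rate_in_top_risk(scores, outcomes, top_fraction=0.05):
    """Outcome rate among the top_fraction of people by predicted risk."""
    ranked = sorted(zip(scores, outcomes), key=lambda t: t[0], reverse=True)
    k = max(1, int(len(ranked) * top_fraction))
    return sum(o for _, o in ranked[:k]) / k
```

Comparing this rate for scores built from administrative data alone against scores that also use survey data quantifies the targeting improvement the abstract reports.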

  11. Operational Risk Measurement of Chinese Commercial Banks Based on Extreme Value Theory

    NASA Astrophysics Data System (ADS)

    Song, Jiashan; Li, Yong; Ji, Feng; Peng, Cheng

    Financial institutions and supervisory bodies agree on the need to strengthen the measurement and management of operational risk. This paper builds a model of operational risk losses based on the Peaks Over Threshold (POT) model, with emphasis on a weighted least squares refinement of Hill's estimation method. It also discusses the small-sample situation and fixes the sample threshold more objectively, based on media-published data on operational risk losses at primary Chinese banks from 1994 to 2007.
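    The classical Hill estimator that the paper refines can be sketched as the mean log-excess over the k-th largest loss; the weighted least squares modification itself is not reproduced here.

```python
# Hill estimator of the tail index from the k largest order statistics:
# the mean of log(x_(i) / x_(k+1)) for the k largest losses x_(1) >= ... >= x_(k).
import math

def hill_estimator(losses, k):
    """Tail-index estimate from the k largest observed losses (k < len(losses))."""
    x = sorted(losses, reverse=True)   # x[0] >= x[1] >= ...
    threshold = x[k]                   # the (k+1)-th largest loss
    return sum(math.log(x[i] / threshold) for i in range(k)) / k
```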

  12. A Simple Model to Rank Shellfish Farming Areas Based on the Risk of Disease Introduction and Spread.

    PubMed

    Thrush, M A; Pearce, F M; Gubbins, M J; Oidtmann, B C; Peeler, E J

    2017-08-01

    The European Union Council Directive 2006/88/EC requires that risk-based surveillance (RBS) for listed aquatic animal diseases is applied to all aquaculture production businesses. The principle behind this is the efficient use of resources directed towards high-risk farm categories, animal types and geographic areas. To achieve this requirement, fish and shellfish farms must be ranked according to their risk of disease introduction and spread. We present a method to risk rank shellfish farming areas based on the risk of disease introduction and spread and demonstrate how the approach was applied in 45 shellfish farming areas in England and Wales. Ten parameters were used to inform the risk model, which were grouped into four risk themes based on related pathways for transmission of pathogens: (i) live animal movement, (ii) transmission via water, (iii) short distance mechanical spread (birds) and (iv) long distance mechanical spread (vessels). Weights (informed by expert knowledge) were applied both to individual parameters and to risk themes for introduction and spread to reflect their relative importance. A spreadsheet model was developed to determine quantitative scores for the risk of pathogen introduction and risk of pathogen spread for each shellfish farming area. These scores were used to independently rank areas for risk of introduction and for risk of spread. Thresholds were set to establish risk categories (low, medium and high) for introduction and spread based on risk scores. Risk categories for introduction and spread for each area were combined to provide overall risk categories to inform a risk-based surveillance programme directed at the area level. Applying the combined risk category designation framework for risk of introduction and spread suggested by European Commission guidance for risk-based surveillance, 4, 10 and 31 areas were classified as high, medium and low risk, respectively. © 2016 Crown copyright.
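    The weighted scoring scheme can be sketched as follows. The theme weights and category thresholds below are placeholders, not the expert-elicited values used for England and Wales.

```python
# Weighted additive risk score over the four transmission themes, mapped to
# low/medium/high categories by thresholds. Weights and cut-points are
# hypothetical placeholders.

THEME_WEIGHTS = {"live_movement": 0.4, "water": 0.3, "birds": 0.15, "vessels": 0.15}

def area_risk_score(theme_scores):
    """theme_scores: dict mapping theme name -> score in [0, 1]."""
    return sum(THEME_WEIGHTS[t] * s for t, s in theme_scores.items())

def risk_category(score, low_cut=0.33, high_cut=0.66):
    if score < low_cut:
        return "low"
    return "high" if score >= high_cut else "medium"
```

In the scheme described, this scoring is run twice per area, once for introduction and once for spread, and the two categories are then combined into the overall surveillance category.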

  13. Proposals for enhanced health risk assessment and stratification in an integrated care scenario.

    PubMed

    Dueñas-Espín, Ivan; Vela, Emili; Pauws, Steffen; Bescos, Cristina; Cano, Isaac; Cleries, Montserrat; Contel, Joan Carles; de Manuel Keenoy, Esteban; Garcia-Aymerich, Judith; Gomez-Cabrero, David; Kaye, Rachelle; Lahr, Maarten M H; Lluch-Ariet, Magí; Moharra, Montserrat; Monterde, David; Mora, Joana; Nalin, Marco; Pavlickova, Andrea; Piera, Jordi; Ponce, Sara; Santaeugenia, Sebastià; Schonenberg, Helen; Störk, Stefan; Tegner, Jesper; Velickovski, Filip; Westerteicher, Christoph; Roca, Josep

    2016-04-15

    Population-based health risk assessment and stratification are considered highly relevant for large-scale implementation of integrated care by facilitating services design and case identification. The principal objective of the study was to analyse five health-risk assessment strategies and health indicators used in the five regions participating in the Advancing Care Coordination and Telehealth Deployment (ACT) programme (http://www.act-programme.eu): Scotland (UK), Basque Country (ES), Catalonia (ES), Lombardy (I) and Groningen (NL). The second purpose was to elaborate on strategies toward enhanced health risk predictive modelling in the clinical scenario. The study worked with the teams responsible for regional data management in the five ACT regions. We characterised and compared risk assessment strategies among ACT regions by analysing operational health risk predictive modelling tools for population-based stratification, as well as available health indicators at regional level. The analysis of the risk assessment tool deployed in Catalonia in 2015 (GMAs, Adjusted Morbidity Groups) was used as a basis to propose how population-based analytics could contribute to clinical risk prediction. There was consensus on the need for a population health approach to generate health risk predictive modelling. However, this strategy was fully in place only in two ACT regions: the Basque Country and Catalonia. We found marked differences among regions in health risk predictive modelling tools and health indicators, and identified key factors constraining their comparability. The research proposes means to overcome current limitations and the use of population-based health risk prediction for enhanced clinical risk assessment. The results indicate the need for further efforts to improve both comparability and flexibility of current population-based health risk predictive modelling approaches. 
Applicability and impact of the proposals for enhanced clinical risk assessment require prospective evaluation. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  14. Precursor Analysis for Flight- and Ground-Based Anomaly Risk Significance Determination

    NASA Technical Reports Server (NTRS)

    Groen, Frank

    2010-01-01

    This slide presentation reviews the precursor analysis for flight and ground based anomaly risk significance. It includes information on accident precursor analysis, real models vs. models, and probabilistic analysis.

  15. Wearable-Sensor-Based Classification Models of Faller Status in Older Adults.

    PubMed

    Howcroft, Jennifer; Lemaire, Edward D; Kofman, Jonathan

    2016-01-01

    Wearable sensors have potential for quantitative, gait-based, point-of-care fall risk assessment that can be easily and quickly implemented in clinical-care and older-adult living environments. This investigation generated models for wearable-sensor-based fall-risk classification in older adults and identified the optimal sensor type, location, combination, and modelling method for walking with and without a cognitive load task. A convenience sample of 100 older individuals (75.5 ± 6.7 years; 76 non-fallers, 24 fallers based on 6-month retrospective fall occurrence) walked 7.62 m under single-task and dual-task conditions while wearing pressure-sensing insoles and tri-axial accelerometers at the head, pelvis, and left and right shanks. Participants also completed the Activities-specific Balance Confidence scale and the Community Health Activities Model Program for Seniors questionnaire, performed the six minute walk test, and ranked their fear of falling. Fall risk classification models were assessed for all sensor combinations and three model types: multi-layer perceptron neural network, naïve Bayesian, and support vector machine. The best performing model was a multi-layer perceptron neural network with input parameters from pressure-sensing insoles and head, pelvis, and left shank accelerometers (accuracy = 84%, F1 score = 0.600, MCC score = 0.521). Head-sensor-based models had the best performance of the single-sensor models for single-task gait assessment. Single-task gait assessment models outperformed models based on dual-task walking or clinical assessment data. Support vector machines and neural networks were the best modelling techniques for fall risk classification. Fall risk classification models developed for point-of-care environments should be developed using support vector machines and neural networks, with a multi-sensor single-task gait assessment.
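    The three figures used to compare models above, accuracy, F1 score, and Matthews correlation coefficient (MCC), all derive from the binary confusion matrix; a minimal sketch:

```python
# Accuracy, F1, and MCC from the counts of true/false positives and negatives
# of a binary faller / non-faller classifier.
import math

def classification_metrics(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return accuracy, f1, mcc
```

MCC is often preferred alongside accuracy for imbalanced cohorts like this one (24 fallers vs 76 non-fallers), since a classifier that labels everyone "non-faller" scores 76% accuracy but 0 MCC.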

  16. Assessment of multi-wildfire occurrence data for machine learning based risk modelling

    NASA Astrophysics Data System (ADS)

    Lim, C. H.; Kim, M.; Kim, S. J.; Yoo, S.; Lee, W. K.

    2017-12-01

    East Asian wildfires are mainly caused by human activities, but extreme droughts, increasingly frequent under climate change, have also caused wildfires that spread into large-scale fires. Accurate occurrence location data are required for modelling wildfire probability and risk. In South Korea, occurrence data surveyed by the KFS (Korea Forest Service) and MODIS (MODerate-resolution Imaging Spectroradiometer) satellite-based active fire data can be utilized. In this study, these two sorts of wildfire occurrence data were compared to select suitable occurrence data for machine-learning-based wildfire risk modelling. The MaxEnt (Maximum Entropy) machine learning model was used for wildfire risk modelling, with the two types of occurrence data and socio-economic and climate-environment data as inputs. The KFS survey-based data showed a weak relationship with climate-environmental factors and suffered from uncertainty in its coordinate information. Some MODIS-based active fire detections fell outside forests, and many spots did not match actual wildfires. To utilize the MODIS-based active fire data, it was necessary to extract the forest area and use only high-confidence-level detections. For the KFS data, it was necessary to analyse fires separately by damage scale to improve modelling accuracy. Ultimately, combining the two sorts of wildfire occurrence data to construct more accurate information appears to be the best way to model wildfire risk.
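    The MODIS preprocessing step described, keeping only high-confidence detections that fall inside forest, can be sketched as a simple filter. The field names and the threshold of 80 are assumptions (the standard MODIS active-fire product reports a 0-100 confidence value), not details given in the abstract.

```python
# Filter MODIS-style active-fire points: keep detections that are both
# high-confidence and located inside a forest mask. Field names and the
# confidence threshold are illustrative assumptions.

def usable_fire_points(points, forest_mask, min_confidence=80):
    """points: dicts with 'x', 'y', 'confidence'; forest_mask: set of (x, y) forest cells."""
    return [p for p in points
            if p["confidence"] >= min_confidence and (p["x"], p["y"]) in forest_mask]
```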

  17. An evaluation of Computational Fluid dynamics model for flood risk analysis

    NASA Astrophysics Data System (ADS)

    Di Francesco, Silvia; Biscarini, Chiara; Montesarchio, Valeria

    2014-05-01

    This work presents an analysis of the hydrological-hydraulic engineering requisites for risk evaluation and efficient flood damage reduction plans. Most research efforts have been dedicated to the scientific and technical aspects of risk assessment, providing estimates of possible alternatives and of the associated risk. In the decision-making process for a mitigation plan, the contribution of scientists is crucial, because risk-damage analysis is based on evaluation of the flow field and of hydraulic risk, as well as on economic and societal considerations. The present paper focuses on the first part of the process, the mathematical modelling of flood events, which is the basis for all further considerations. Evaluating the potential catastrophic damage consequent to a flood event, and in particular to a dam failure, requires modelling the flood in sufficient detail to capture the spatial and temporal evolution of the event, as well as the velocity field. Thus, selecting an appropriate mathematical model to correctly simulate flood routing is an essential step. In this work we present the application of two 3D computational fluid dynamics models to a synthetic and a real case study in order to evaluate the evolution of the flow field and the associated flood risk. The first model is based on an open-source CFD platform, OpenFOAM. Water flow is schematized with a classical continuum approach based on the Navier-Stokes equations, coupled with the volume of fluid (VOF) method to take into account the multiphase character of river bottom-water-air systems. The second model is based on the lattice Boltzmann method, an innovative numerical fluid dynamics scheme based on Boltzmann's kinetic equation that represents the flow dynamics at the macroscopic level by incorporating a microscopic kinetic approach; the fluid is seen as composed of particles that move and collide with one another. 
Simulation results from both models are promising and congruent with experimental results available in the literature, though the LBM model requires less computational effort than the Navier-Stokes one.
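    The "stream and collide" idea behind the lattice Boltzmann method can be illustrated with a toy 1-D D1Q3 BGK scheme for pure diffusion of a density profile on a periodic lattice. This is far simpler than the free-surface flood solvers discussed above, but it shows the particle-population mechanics: populations relax toward a local equilibrium (collide), then hop to neighbouring sites (stream).

```python
# Minimal D1Q3 BGK lattice Boltzmann sketch for diffusion on a periodic
# 1-D lattice. f[i][x] is the population at site x moving with lattice
# velocity 0, +1, or -1; density is the sum over populations.

W = [2 / 3, 1 / 6, 1 / 6]   # lattice weights for velocities 0, +1, -1
TAU = 0.8                   # BGK relaxation time; diffusivity = (TAU - 0.5) / 3

def step(f):
    """One collide-and-stream update with periodic boundaries."""
    n = len(f[0])
    rho = [f[0][x] + f[1][x] + f[2][x] for x in range(n)]
    # collide: relax each population toward the local equilibrium W[i] * rho
    for i in range(3):
        for x in range(n):
            f[i][x] += (W[i] * rho[x] - f[i][x]) / TAU
    # stream: shift the moving populations one site left or right
    f[1] = [f[1][(x - 1) % n] for x in range(n)]
    f[2] = [f[2][(x + 1) % n] for x in range(n)]
    return f

def density(f):
    return [f[0][x] + f[1][x] + f[2][x] for x in range(len(f[0]))]
```

Mass is conserved by both sub-steps, and an initial density peak spreads out diffusively, which is the macroscopic behaviour the kinetic scheme recovers.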

  18. Bayesian-network-based safety risk assessment for steel construction projects.

    PubMed

    Leu, Sou-Sen; Chang, Ching-Miao

    2013-05-01

    There are four primary accident types at steel building construction (SC) projects: falls (tumbles), object falls, object collapse, and electrocution. Several systematic safety risk assessment approaches, such as fault tree analysis (FTA) and failure mode and effect criticality analysis (FMECA), have been used to evaluate safety risks at SC projects. However, these traditional methods do not effectively address dependencies among safety factors at various levels and thus fail to provide early warnings to prevent occupational accidents. To overcome the limitations of traditional approaches, this study develops a safety risk-assessment model for SC projects by establishing Bayesian networks (BN) based on fault tree (FT) transformation. The BN-based safety risk-assessment model was validated against the safety inspection records of six SC building projects and nine projects in which site accidents occurred. The ranks of posterior probabilities from the BN model were highly consistent with the accidents that occurred at each project site. The model accurately provides site safety-management abilities by calculating the probabilities of safety risks and further analyzing the causes of accidents based on their relationships in BNs. In practice, based on the analysis of accident risks and significant safety factors, proper preventive safety management strategies can be established to reduce the occurrence of accidents on SC sites. Copyright © 2013 Elsevier Ltd. All rights reserved.
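    The FT-to-BN idea, root safety factors feeding an accident node through a logic gate, with Bayes' rule giving posterior factor probabilities once an accident is observed, can be illustrated with a two-factor toy network. The priors and the deterministic OR gate below are illustrative, not values from the study.

```python
# Tiny Bayesian network obtained from a fault-tree OR gate: an accident
# occurs if either root factor is present. Posterior P(factor | accident)
# is computed by enumerating the joint states. Priors are hypothetical.
from itertools import product

P_FACTOR = {"no_guardrail": 0.1, "no_harness": 0.2}  # hypothetical priors

def p_accident_given(g, h):
    return 1.0 if (g or h) else 0.0  # deterministic OR gate from the fault tree

def posterior_factor(name):
    """P(factor | accident) by enumeration over all joint states."""
    num = den = 0.0
    for g, h in product([True, False], repeat=2):
        p = ((P_FACTOR["no_guardrail"] if g else 1 - P_FACTOR["no_guardrail"])
             * (P_FACTOR["no_harness"] if h else 1 - P_FACTOR["no_harness"])
             * p_accident_given(g, h))
        den += p
        if (name == "no_guardrail" and g) or (name == "no_harness" and h):
            num += p
    return num / den
```

Ranking factors by such posteriors is the kind of cause analysis the BN enables that a static fault tree does not.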

  19. Methodology for the Model-based Small Area Estimates of Cancer Risk Factors and Screening Behaviors - Small Area Estimates

    Cancer.gov

    This model-based approach uses data from both the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS) to produce estimates of the prevalence rates of cancer risk factors and screening behaviors at the state, health service area, and county levels.

  20. Approaches for Increasing Acceptance of Physiologically Based Pharmacokinetic Models in Public Health Risk Assessment

    EPA Science Inventory

    Physiologically based pharmacokinetic (PBPK) models have great potential for application in regulatory and non-regulatory public health risk assessment. The development and application of PBPK models in chemical toxicology has grown steadily since their emergence in the 1980s. Ho...

  1. Enhancing the Value of Population-Based Risk Scores for Institutional-Level Use.

    PubMed

    Raza, Sajjad; Sabik, Joseph F; Rajeswaran, Jeevanantham; Idrees, Jay J; Trezzi, Matteo; Riaz, Haris; Javadikasgari, Hoda; Nowicki, Edward R; Svensson, Lars G; Blackstone, Eugene H

    2016-07-01

    We hypothesized that factors associated with an institution's residual risk unaccounted for by population-based models may be identifiable and used to enhance the value of population-based risk scores for quality improvement. From January 2000 to January 2010, 4,971 patients underwent aortic valve replacement (AVR), either isolated (n = 2,660) or with concomitant coronary artery bypass grafting (AVR+CABG; n = 2,311). Operative mortality and major morbidity and mortality predicted by The Society of Thoracic Surgeons (STS) risk models were compared with observed values. After adjusting for patients' STS score, additional and refined risk factors were sought to explain residual risk. Differences between STS model coefficients (risk-factor strength) and those specific to our institution were calculated. Observed operative mortality was less than predicted for AVR (1.6% [42 of 2,660] vs 2.8%, p < 0.0001) and AVR+CABG (2.6% [59 of 2,311] vs 4.9%, p < 0.0001). Observed major morbidity and mortality was also lower than predicted for isolated AVR (14.6% [389 of 2,660] vs 17.5%, p < 0.0001) and AVR+CABG (20.0% [462 of 2,311] vs 25.8%, p < 0.0001). Shorter height, higher bilirubin, and lower albumin were identified as additional institution-specific risk factors, and body surface area, creatinine, glomerular filtration rate, blood urea nitrogen, and heart failure across all levels of functional class were identified as refined risk-factor variables associated with residual risk. In many instances, risk-factor strength differed substantially from that of STS models. Scores derived from population-based models can be enhanced for institutional-level use by adjusting for institution-specific additional and refined risk factors. Identifying these factors and measuring differences between institution-specific and population-based risk-factor strength can reveal areas to target for quality improvement initiatives. Copyright © 2016 The Society of Thoracic Surgeons. 
Published by Elsevier Inc. All rights reserved.
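    The observed-versus-predicted comparison above amounts to an observed/expected (O/E) ratio; a value below 1 means the institution outperforms the population-based model. A minimal sketch using the mortality figures quoted in the abstract:

```python
# Observed/expected (O/E) ratio against a population-based risk model.
# The inputs below are the operative-mortality figures quoted in the abstract.

def oe_ratio(observed_events, n_patients, predicted_rate):
    """O/E < 1: fewer events observed than the risk model predicts."""
    return (observed_events / n_patients) / predicted_rate

avr_mortality = oe_ratio(42, 2660, 0.028)       # observed 1.6% vs 2.8% predicted
avr_cabg_mortality = oe_ratio(59, 2311, 0.049)  # observed 2.6% vs 4.9% predicted
```

    Both ratios come out near 0.5, consistent with the abstract's finding that observed mortality was roughly half of what the STS models predicted.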

  2. A Framework for Widespread Replication of a Highly Spatially Resolved Childhood Lead Exposure Risk Model

    PubMed Central

    Kim, Dohyeong; Galeano, M. Alicia Overstreet; Hull, Andrew; Miranda, Marie Lynn

    2008-01-01

    Background Preventive approaches to childhood lead poisoning are critical for addressing this longstanding environmental health concern. Moreover, increasing evidence of cognitive effects of blood lead levels < 10 μg/dL highlights the need for improved exposure prevention interventions. Objectives Geographic information system–based childhood lead exposure risk models, especially if executed at highly resolved spatial scales, can help identify children most at risk of lead exposure, as well as prioritize and direct housing and health-protective intervention programs. However, developing highly resolved spatial data requires labor- and time-intensive geocoding and analytical processes. In this study we evaluated the benefit of increased effort spent geocoding in terms of improved performance of lead exposure risk models. Methods We constructed three childhood lead exposure risk models based on established methods but using different levels of geocoded data from blood lead surveillance, county tax assessors, and the 2000 U.S. Census for 18 counties in North Carolina. We used the results to predict lead exposure risk levels mapped at the individual tax parcel unit. Results The models performed well enough to identify high-risk areas for targeted intervention, even with a relatively low level of effort on geocoding. Conclusions This study demonstrates the feasibility of widespread replication of highly spatially resolved childhood lead exposure risk models. The models guide resource-constrained local health and housing departments and community-based organizations on how best to expend their efforts in preventing and mitigating lead exposure risk in their communities. PMID:19079729

  3. A point-based prediction model for cardiovascular risk in orthotopic liver transplantation: The CAR-OLT score.

    PubMed

    VanWagner, Lisa B; Ning, Hongyan; Whitsett, Maureen; Levitsky, Josh; Uttal, Sarah; Wilkins, John T; Abecassis, Michael M; Ladner, Daniela P; Skaro, Anton I; Lloyd-Jones, Donald M

    2017-12-01

    Cardiovascular disease (CVD) complications are important causes of morbidity and mortality after orthotopic liver transplantation (OLT). There is currently no preoperative risk-assessment tool that allows physicians to estimate the risk for CVD events following OLT. We sought to develop a point-based prediction model (risk score) for CVD complications after OLT, the Cardiovascular Risk in Orthotopic Liver Transplantation risk score, among a cohort of 1,024 consecutive patients aged 18-75 years who underwent first OLT in a tertiary-care teaching hospital (2002-2011). The main outcome measures were major 1-year CVD complications, defined as death from a CVD cause or hospitalization for a major CVD event (myocardial infarction, revascularization, heart failure, atrial fibrillation, cardiac arrest, pulmonary embolism, and/or stroke). The bootstrap method yielded bias-corrected 95% confidence intervals for the regression coefficients of the final model. Among 1,024 first OLT recipients, major CVD complications occurred in 329 (32.1%). Variables selected for inclusion in the model (using model optimization strategies) included preoperative recipient age, sex, race, employment status, education status, history of hepatocellular carcinoma, diabetes, heart failure, atrial fibrillation, pulmonary or systemic hypertension, and respiratory failure. The discriminative performance of the point-based score (C statistic = 0.78, bias-corrected C statistic = 0.77) was superior to other published risk models for postoperative CVD morbidity and mortality, and it had appropriate calibration (Hosmer-Lemeshow P = 0.33). The point-based risk score can identify patients at risk for CVD complications after OLT surgery (available at www.carolt.us); this score may be useful for identification of candidates for further risk stratification or other management strategies to improve CVD outcomes after OLT. (Hepatology 2017;66:1968-1979). 
© 2017 by the American Association for the Study of Liver Diseases.
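    Point-based scores of this kind are typically built by dividing each logistic regression coefficient by a base unit (often the smallest effect) and rounding to an integer, then summing points per patient. A sketch of that construction with hypothetical coefficients, not the published CAR-OLT weights:

```python
# Framingham-style conversion of logistic coefficients into integer points.
# Coefficient names and values are hypothetical illustrations only.

COEFS = {"age_per_decade": 0.35, "diabetes": 0.60,
         "heart_failure": 0.90, "atrial_fibrillation": 0.70}

def to_points(coefs, base=0.35):
    """Express each coefficient as a rounded multiple of the base unit."""
    return {k: round(v / base) for k, v in coefs.items()}

def score(patient, points):
    """Sum points over a patient's risk-factor values (0/1 or counts)."""
    return sum(points[k] * patient.get(k, 0) for k in points)
```

    For example, a hypothetical 60-year-old (six decades) with diabetes and heart failure accumulates 6 + 2 + 3 = 11 points; the final step, not shown, maps total points to a predicted event probability.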

  4. Appendix 2: Risk-based framework and risk case studies. Risk Assessment for two bird species in northern Wisconsin.

    Treesearch

    Megan M. Friggens; Stephen N. Matthews

    2012-01-01

    Species distribution models for 147 bird species have been derived using climate, elevation, and distribution of current tree species as potential predictors (Matthews et al. 2011). In this case study, a risk matrix was developed for two bird species (fig. A2-5), with projected change in bird habitat (the x axis) based on models of changing suitable habitat resulting...

  5. Development of a GCR Event-based Risk Model

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Ponomarev, Artem L.; Plante, Ianik; Carra, Claudio; Kim, Myung-Hee

    2009-01-01

    A goal at NASA is to develop event-based systems biology models of space radiation risks that will replace the current dose-based empirical models. Complex and varied biochemical signaling processes transmit the initial DNA and oxidative damage from space radiation into cellular and tissue responses. Mis-repaired damage or aberrant signals can lead to genomic instability, persistent oxidative stress or inflammation, which are causative of cancer and CNS risks. Protective signaling through adaptive responses or cell repopulation is also possible. We are developing a computational simulation approach to galactic cosmic ray (GCR) effects that is based on biological events rather than average quantities such as dose, fluence, or dose equivalent. The goal of the GCR Event-based Risk Model (GERMcode) is to provide a simulation tool to describe and integrate physical and biological events into stochastic models of space radiation risks. We used the quantum multiple scattering model of heavy ion fragmentation (QMSFRG) and well-known energy loss processes to develop a stochastic Monte Carlo-based model of GCR transport in spacecraft shielding and tissue. We validated the accuracy of the model by comparing with physical data from the NASA Space Radiation Laboratory (NSRL). Our simulation approach allows us to time-tag each GCR proton or heavy ion interaction in tissue, including correlated secondary ions, often of high multiplicity. Conventional space radiation risk assessment employs average quantities, and assumes linearity and additivity of responses over the complete range of GCR charge and energies. To investigate possible deviations from these assumptions, we studied several biological response pathway models of varying induction and relaxation times, including the ATM, TGF-Smad, and WNT signaling pathways. 
We then considered small volumes of interacting cells and the time-dependent biophysical events that the GCR would produce within these tissue volumes to estimate how GCR event rates mapped to biological signaling induction and relaxation times. We considered several hypotheses related to signaling and cancer risk, and then performed simulations for conditions where aberrant or adaptive signaling would occur on long-duration space missions. Our results do not support the conventional assumptions of dose linearity and additivity. We conclude with a discussion of how event-based systems biology models, which treat biological signaling as the mechanism that propagates damage or adaptation, can be further developed for cancer and CNS space radiation risk projections.
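    The difference between an event-based and a dose-based description can be sketched by sampling the discrete number of GCR traversals each small tissue volume receives, rather than assigning every volume the mean dose: most volumes see zero events while a few see several. Fluence and cell-geometry numbers below are illustrative only, not NASA values.

```python
import math
import random

# Event-based sampling sketch: traversal counts per small tissue volume are
# Poisson in the fluence, so the population of volumes is highly heterogeneous
# even when the mean (dose-like) quantity is fixed.

def poisson(lam, rng):
    """Poisson sample via Knuth's multiplication method (fine for small lambda)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def hits_per_cell(fluence_per_cm2, cell_area_cm2, n_cells, seed=1):
    """Sampled traversal counts for n_cells identical volumes."""
    rng = random.Random(seed)
    lam = fluence_per_cm2 * cell_area_cm2   # mean traversals per volume
    return [poisson(lam, rng) for _ in range(n_cells)]
```

    With a mean of 0.5 traversals per volume, roughly 60% of volumes receive no hit at all while some receive two or more, which is why responses need not be linear or additive in the mean dose.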

  6. Influence of Methylenetetrahydrofolate Reductase C677T Polymorphism on the Risk of Lung Cancer and the Clinical Response to Platinum-Based Chemotherapy for Advanced Non-Small Cell Lung Cancer: An Updated Meta-Analysis

    PubMed Central

    Zhu, Ning; Gong, Yi; He, Jian; Xia, Jingwen

    2013-01-01

    Purpose Methylenetetrahydrofolate reductase (MTHFR) has been implicated in lung cancer risk and response to platinum-based chemotherapy in advanced non-small cell lung cancer (NSCLC). However, the results are controversial. We performed meta-analysis to investigate the effect of MTHFR C677T polymorphism on lung cancer risk and response to platinum-based chemotherapy in advanced NSCLC. Materials and Methods The databases of PubMed, Ovid, Wanfang and Chinese Biomedicine were searched for eligible studies. Nineteen studies on MTHFR C677T polymorphism and lung cancer risk and three articles on C677T polymorphism and response to platinum-based chemotherapy in advanced NSCLC, were identified. Results The results indicated that the allelic contrast, homozygous contrast and recessive model of the MTHFR C677T polymorphism were associated significantly with increased lung cancer risk. In the subgroup analysis, the C677T polymorphism was significantly correlated with an increased risk of NSCLC, with the exception of the recessive model. The dominant model and the variant T allele showed a significant association with lung cancer susceptibility of ever smokers. Male TT homozygote carriers had a higher susceptibility, but the allelic contrast and homozygote model had a protective effect in females. No relationship was observed for SCLC in any comparison model. In addition, MTHFR 677TT homozygote carriers had a better response to platinum-based chemotherapy in advanced NSCLC in the recessive model. Conclusion The MTHFR C677T polymorphism might be a genetic marker for lung cancer risk or response to platinum-based chemotherapy in advanced NSCLC. However, our results require further verification. PMID:24142642
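    The core arithmetic of such a meta-analysis is inverse-variance pooling of study effect estimates on the log odds-ratio scale. A fixed-effect sketch follows; the input odds ratios and confidence intervals are invented examples, not the MTHFR data, and the published analysis may also have used random-effects pooling.

```python
import math

# Fixed-effect (inverse-variance) pooling of odds ratios.
# Each study contributes its log-OR weighted by 1/SE^2, with SE recovered
# from the reported 95% confidence interval.

def pool_fixed_effect(studies):
    """studies: list of (odds_ratio, ci_low, ci_high) triples.
    Returns (pooled OR, 95% CI low, 95% CI high)."""
    num = den = 0.0
    for or_, lo, hi in studies:
        log_or = math.log(or_)
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # CI width -> SE
        w = 1.0 / se ** 2
        num += w * log_or
        den += w
    pooled = num / den
    se_pooled = math.sqrt(1.0 / den)
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se_pooled),
            math.exp(pooled + 1.96 * se_pooled))
```

    The pooled estimate always lies between the individual study estimates, and its confidence interval narrows as studies accumulate.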

  7. Source-to-Outcome Microbial Exposure and Risk Modeling Framework

    EPA Science Inventory

    A Quantitative Microbial Risk Assessment (QMRA) is a computer-based data-delivery and modeling approach that integrates interdisciplinary fate/transport, exposure, and impact models and databases to characterize potential health impacts/risks due to pathogens. As such, a QMRA ex...

  8. A Markov chain model for studying suicide dynamics: an illustration of the Rose theorem

    PubMed Central

    2014-01-01

    Background High-risk strategies alone would have only a modest effect on suicide prevention within a population. It is best to incorporate both high-risk and population-based strategies to prevent suicide. This study aims to compare the effectiveness of suicide prevention between high-risk and population-based strategies. Methods A Markov chain illness and death model is proposed to describe suicide dynamics in a population and to examine the effectiveness of reducing the number of suicides by modifying certain parameters of the model. Assuming a population with replacement, the suicide risk of the population was estimated by determining the final state of the Markov model. Results The model shows that targeting the whole population for suicide prevention is more effective than reducing risk in the high-risk tail of the distribution of psychological distress (i.e. the mentally ill). Conclusions The results of this model reinforce the essence of the Rose theorem that lowering the suicide risk in the population at large may be more effective than reducing the high risk in a small population. PMID:24948330
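    A minimal two-state version of such a model makes the Rose argument concrete: with plausible transition rates, most of the population sits in the low-risk state, so a modest population-wide reduction can avert more suicides than halving the rate in the high-risk state. All rates below are illustrative, not the paper's estimates.

```python
# Two-state Markov sketch in the spirit of the illness-and-death model:
# individuals move between low- and high-distress states, each with an
# annual suicide probability. Rates are invented for illustration.

def stationary_high_fraction(p_onset, p_recover, iters=10_000):
    """Long-run fraction of the population in the high-distress state
    (iterated to the fixed point p_onset / (p_onset + p_recover))."""
    f_high = 0.0
    for _ in range(iters):
        f_high = f_high + (1 - f_high) * p_onset - f_high * p_recover
    return f_high

def annual_suicide_rate(s_low, s_high, p_onset=0.02, p_recover=0.38):
    """Population suicide rate at the stationary state distribution."""
    f_high = stationary_high_fraction(p_onset, p_recover)
    return (1 - f_high) * s_low + f_high * s_high
```

    With these rates only 5% of the population is high-risk, so a 30% reduction applied to everyone beats halving the high-risk rate alone, which is the Rose-theorem pattern the paper reports.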

  9. A risk factor-based predictive model of outcomes in carotid endarterectomy: the National Surgical Quality Improvement Program 2005-2010.

    PubMed

    Bekelis, Kimon; Bakhoum, Samuel F; Desai, Atman; Mackenzie, Todd A; Goodney, Philip; Labropoulos, Nicos

    2013-04-01

    Accurate knowledge of individualized risks and benefits is crucial to the surgical management of patients undergoing carotid endarterectomy (CEA). Although large randomized trials have determined specific cutoffs for the degree of stenosis, precise delineation of patient-level risks remains a topic of debate, especially in real world practice. We attempted to create a risk factor-based predictive model of outcomes in CEA. We performed a retrospective cohort study involving patients who underwent CEAs from 2005 to 2010 and were registered in the American College of Surgeons National Surgical Quality Improvement Program database. Of the 35,698 patients, 20,015 were asymptomatic (56.1%) and 15,683 were symptomatic (43.9%). These patients demonstrated a 1.64% risk of stroke, 0.69% risk of myocardial infarction, and 0.75% risk of death within 30 days after CEA. Multivariate analysis demonstrated that increasing age, male sex, history of chronic obstructive pulmonary disease, myocardial infarction, angina, congestive heart failure, peripheral vascular disease, previous stroke or transient ischemic attack, and dialysis were independent risk factors associated with an increased risk of the combined outcome of postoperative stroke, myocardial infarction, or death. A validated model for outcome prediction based on individual patient characteristics was developed. There was a steep effect of age on the risk of myocardial infarction and death. This national study confirms that risks of CEA vary dramatically based on patient-level characteristics. Because of limited discrimination, it cannot be used for individual patient risk assessment. However, it can be used as a baseline for improvement and development of more accurate predictive models based on other databases or prospective studies.

  10. Model-based Small Area Estimates of Cancer Risk Factors and Screening Behaviors - Small Area Estimates

    Cancer.gov

    These model-based estimates use two surveys, the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS). The two surveys are combined using novel statistical methodology.

  11. Correlations and risk contagion between mixed assets and mixed-asset portfolio VaR measurements in a dynamic view: An application based on time varying copula models

    NASA Astrophysics Data System (ADS)

    Han, Yingying; Gong, Pu; Zhou, Xiang

    2016-02-01

    In this paper, we apply time-varying Gaussian and SJC copula models to study the correlations and risk contagion among mixed assets in China: financial (stock), real estate, and commodity (gold) assets. We then study dynamic mixed-asset portfolio risk through VaR measurement based on the correlations computed by the time-varying copulas; this dynamic VaR-copula measurement analysis has not previously been applied to mixed-asset portfolios. The results show that the time-varying estimations fit much better than the static models, both for the correlations and risk contagion based on time-varying copulas and for the VaR-copula measurement. The time-varying VaR-SJC copula models are more accurate than the VaR-Gaussian copula models when measuring riskier portfolios at higher confidence levels. The major findings suggest that real estate and gold play a role in portfolio risk diversification and that risk contagion and flight to quality between mixed assets exist in extreme cases; however, adapting the mixed-asset portfolio strategy as time and market environment vary can reduce portfolio risk.
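    A simplified static version of the VaR-copula measurement can be sketched by drawing returns from a Gaussian copula with normal marginals (which reduces to correlated bivariate normals) and reading VaR off the empirical loss quantile; the paper's contribution is to let the dependence parameters vary over time, and the SJC copula additionally captures tail asymmetry. All parameters below are illustrative.

```python
import math
import random

# Monte Carlo portfolio VaR under a (static) Gaussian copula with normal
# marginals. Weights, volatilities, and correlation are invented.

def correlated_normals(n, rho, seed=7):
    """Bivariate standard normals with correlation rho (Cholesky form)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        out.append((z1, rho * z1 + math.sqrt(1 - rho * rho) * z2))
    return out

def portfolio_var(n=100_000, rho=0.6, w=(0.5, 0.5), sigma=(0.02, 0.03),
                  level=0.99):
    """Empirical VaR of portfolio loss at the given confidence level."""
    losses = []
    for z1, z2 in correlated_normals(n, rho):
        ret = w[0] * sigma[0] * z1 + w[1] * sigma[1] * z2
        losses.append(-ret)
    losses.sort()
    return losses[int(level * n)]
```

    Raising the correlation raises VaR, which is the diversification effect the paper quantifies: when contagion pushes dependence up in extreme periods, a static correlation understates portfolio risk.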

  12. Risk assessment of storm surge disaster based on numerical models and remote sensing

    NASA Astrophysics Data System (ADS)

    Liu, Qingrong; Ruan, Chengqing; Zhong, Shan; Li, Jian; Yin, Zhonghui; Lian, Xihu

    2018-06-01

    Storm surge is one of the most serious ocean disasters in the world. Risk assessment of storm surge disasters for coastal areas has important implications for planning economic development and reducing disaster losses. Based on risk assessment theory, this paper uses coastal hydrological observations, a numerical storm surge model, and multi-source remote sensing data to propose methods for valuing storm surge hazard and vulnerability and to build a storm surge risk assessment model. Storm surges for different recurrence periods are simulated with the numerical model, and the flooded areas and flooding depths are calculated and used to assess the hazard of storm surge; remote sensing data and GIS technology are used to extract key coastal objects and classify coastal land use, which supports the vulnerability assessment of storm surge disaster. The storm surge risk assessment model is applied to a typical coastal city, and the result shows the reliability and validity of the model. The construction and application of the storm surge risk assessment model provide a reference for city development planning and strengthen disaster prevention and mitigation.

  13. AN OPTIMAL MAINTENANCE MANAGEMENT MODEL FOR AIRPORT CONCRETE PAVEMENT

    NASA Astrophysics Data System (ADS)

    Shimomura, Taizo; Fujimori, Yuji; Kaito, Kiyoyuki; Obama, Kengo; Kobayashi, Kiyoshi

    In this paper, an optimal management model is formulated for performance-based rehabilitation/maintenance contracts for airport concrete pavement, whereby two types of life cycle cost risk, i.e., ground consolidation risk and concrete depreciation risk, are explicitly considered. A non-homogeneous Markov chain model is formulated to represent the deterioration processes of concrete pavement, which are conditional upon the ground consolidation processes. An optimal non-homogeneous Markov decision model with multiple types of risk is presented to design the optimal rehabilitation/maintenance plans, together with a methodology for revising those plans based upon monitoring data via Bayesian updating rules. The validity of the methodology presented in this paper is examined through case studies carried out for the H airport.
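    The non-homogeneous deterioration process can be sketched as a Markov chain whose one-step degradation probability grows with elapsed time, standing in for progressive ground consolidation. State counts and rates below are invented for illustration; the paper's model is richer (multiple risks, decision variables, Bayesian revision).

```python
# Non-homogeneous Markov deterioration sketch: the transition probability
# depends on the year, so the chain's kernel changes over time.

def degrade_prob(year, base=0.05, consolidation=0.01):
    """Per-year probability of dropping one condition state (capped)."""
    return min(0.9, base + consolidation * year)

def state_distribution(n_states=4, years=10):
    """Distribution over condition states 0 (new) .. n_states-1 (failed)."""
    dist = [1.0] + [0.0] * (n_states - 1)
    for t in range(years):
        p = degrade_prob(t)
        new = [0.0] * n_states
        for s, mass in enumerate(dist):
            if s == n_states - 1:
                new[s] += mass              # failed state is absorbing
            else:
                new[s] += mass * (1 - p)    # survive in current state
                new[s + 1] += mass * p      # drop one condition state
        dist = new
    return dist
```

    A maintenance action would be modelled as resetting mass to a better state at a cost; the optimal policy trades that cost against the growing failure probability.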

  14. Uncertainty and Variability in Physiologically-Based ...

    EPA Pesticide Factsheets

    EPA announced the availability of the final report, Uncertainty and Variability in Physiologically-Based Pharmacokinetic (PBPK) Models: Key Issues and Case Studies. This report summarizes some of the recent progress in characterizing uncertainty and variability in physiologically-based pharmacokinetic models and their predictions for use in risk assessment.

  15. Physics-Based Fragment Acceleration Modeling for Pressurized Tank Burst Risk Assessments

    NASA Technical Reports Server (NTRS)

    Manning, Ted A.; Lawrence, Scott L.

    2014-01-01

    As part of comprehensive efforts to develop physics-based risk assessment techniques for space systems at NASA, coupled computational fluid and rigid body dynamic simulations were carried out to investigate the flow mechanisms that accelerate tank fragments in bursting pressurized vessels. Simulations of several configurations were compared to analyses based on the industry-standard Baker explosion model, and were used to formulate an improved version of the model. The standard model, which neglects an external fluid, was found to agree best with simulation results only in configurations where the internal-to-external pressure ratio is very high and fragment curvature is small. The improved model introduces terms that accommodate an external fluid and better account for variations based on circumferential fragment count. Physics-based analysis was critical in increasing the model's range of applicability. The improved tank burst model can be used to produce more accurate risk assessments of space vehicle failure modes that involve high-speed debris, such as exploding propellant tanks and bursting rocket engines.

  16. Comparing predictions of extinction risk using models and subjective judgement

    NASA Astrophysics Data System (ADS)

    McCarthy, Michael A.; Keith, David; Tietjen, Justine; Burgman, Mark A.; Maunder, Mark; Master, Larry; Brook, Barry W.; Mace, Georgina; Possingham, Hugh P.; Medellin, Rodrigo; Andelman, Sandy; Regan, Helen; Regan, Tracey; Ruckelshaus, Mary

    2004-10-01

    Models of population dynamics are commonly used to predict risks in ecology, particularly risks of population decline. There is often considerable uncertainty associated with these predictions. However, alternatives to predictions based on population models have not been assessed. We used simulation models of hypothetical species to generate the kinds of data that might typically be available to ecologists and then invited other researchers to predict risks of population declines using these data. The accuracy of the predictions was assessed by comparison with the forecasts of the original model. The researchers used either population models or subjective judgement to make their predictions. Predictions made using models were only slightly more accurate than subjective judgements of risk. However, predictions using models tended to be unbiased, while subjective judgements were biased towards over-estimation. Psychology literature suggests that the bias of subjective judgements is likely to vary somewhat unpredictably among people, depending on their stake in the outcome. This will make subjective predictions more uncertain and less transparent than those based on models.

  17. Development of a Risk-Based Comparison Methodology of Carbon Capture Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engel, David W.; Dalton, Angela C.; Dale, Crystal

    2014-06-01

    Given the varying degrees of maturity among existing carbon capture (CC) technology alternatives, an understanding of the inherent technical and financial risk and uncertainty associated with these competing technologies is requisite to the success of carbon capture as a viable solution to the greenhouse gas emission challenge. The availability of tools and capabilities to conduct rigorous, risk-based technology comparisons is thus highly desirable for directing valuable resources toward the technology option(s) with a high return on investment, superior carbon capture performance, and minimum risk. To address this research need, we introduce a novel risk-based technology comparison method supported by an integrated multi-domain risk model set to estimate risks related to technological maturity, technical performance, and profitability. Through a comparison between solid sorbent and liquid solvent systems, we illustrate the feasibility of estimating risk and quantifying uncertainty in a single domain (modular analytical capability) as well as across multiple risk dimensions (coupled analytical capability) for comparison. This method brings technological maturity and performance to bear on profitability projections, and carries risk and uncertainty modeling across domains via inter-model sharing of parameters, distributions, and input/output. The integration of the models facilitates multidimensional technology comparisons within a common probabilistic risk analysis framework. This approach and model set can equip potential technology adopters with the necessary computational capabilities to make risk-informed decisions about CC technology investment. The method and modeling effort can also be extended to other industries where robust tools and analytical capabilities are currently lacking for evaluating nascent technologies.

  18. A quantitative risk-based model for reasoning over critical system properties

    NASA Technical Reports Server (NTRS)

    Feather, M. S.

    2002-01-01

    This position paper suggests the use of a quantitative risk-based model to help support reasoning and decision making that spans many critical properties, such as security, safety, survivability, fault tolerance, and real-time performance.

  19. Predicting Risk of Type 2 Diabetes Mellitus with Genetic Risk Models on the Basis of Established Genome-wide Association Markers: A Systematic Review

    PubMed Central

    Bao, Wei; Hu, Frank B.; Rong, Shuang; Rong, Ying; Bowers, Katherine; Schisterman, Enrique F.; Liu, Liegang; Zhang, Cuilin

    2013-01-01

    This study aimed to evaluate the predictive performance of genetic risk models based on risk loci identified and/or confirmed in genome-wide association studies for type 2 diabetes mellitus. A systematic literature search was conducted in the PubMed/MEDLINE and EMBASE databases through April 13, 2012, and published data relevant to the prediction of type 2 diabetes based on genome-wide association marker–based risk models (GRMs) were included. Of the 1,234 potentially relevant articles, 21 articles representing 23 studies were eligible for inclusion. The median area under the receiver operating characteristic curve (AUC) among eligible studies was 0.60 (range, 0.55–0.68), which did not differ appreciably by study design, sample size, participants’ race/ethnicity, or the number of genetic markers included in the GRMs. In addition, the AUCs for type 2 diabetes did not improve appreciably with the addition of genetic markers into conventional risk factor–based models (median AUC, 0.79 (range, 0.63–0.91) vs. median AUC, 0.78 (range, 0.63–0.90), respectively). A limited number of included studies used reclassification measures and yielded inconsistent results. In conclusion, GRMs showed a low predictive performance for risk of type 2 diabetes, irrespective of study design, participants’ race/ethnicity, and the number of genetic markers included. Moreover, the addition of genome-wide association markers into conventional risk models produced little improvement in predictive performance. PMID:24008910
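    The AUC figures quoted above have a direct probabilistic reading: the chance that a randomly chosen case receives a higher risk score than a randomly chosen control (the Mann-Whitney statistic). A sketch with made-up scores:

```python
# AUC as the Mann-Whitney probability over all case/control pairs.
# An AUC of 0.5 is chance; the ~0.60 reported for genetic risk models
# means cases outrank controls only 60% of the time.

def auc(case_scores, control_scores):
    """Fraction of case/control pairs where the case scores higher
    (ties count half)."""
    wins = ties = 0
    for c in case_scores:
        for k in control_scores:
            if c > k:
                wins += 1
            elif c == k:
                ties += 1
    return (wins + 0.5 * ties) / (len(case_scores) * len(control_scores))
```

    This pairwise form is O(n*m) and fine for a sketch; production code would rank-sum instead.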

  20. Scientific reporting is suboptimal for aspects that characterize genetic risk prediction studies: a review of published articles based on the Genetic RIsk Prediction Studies statement.

    PubMed

    Iglesias, Adriana I; Mihaescu, Raluca; Ioannidis, John P A; Khoury, Muin J; Little, Julian; van Duijn, Cornelia M; Janssens, A Cecile J W

    2014-05-01

    Our main objective was to raise awareness of the areas that need improvement in the reporting of genetic risk prediction articles for future publications, based on the Genetic RIsk Prediction Studies (GRIPS) statement. We evaluated studies that developed or validated a prediction model based on multiple DNA variants, using empirical data, and were published in 2010. A data extraction form based on the 25 items of the GRIPS statement was created and piloted. Forty-two studies met our inclusion criteria. Overall, more than half of the evaluated items (34 of 62) were reported in at least 85% of included articles. Seventy-seven percent of the articles were identified as genetic risk prediction studies through title assessment, but only 31% used the keywords recommended by GRIPS in the title or abstract. Seventy-four percent mentioned which allele was the risk variant. Overall, only 10% of the articles reported all essential items needed to perform external validation of the risk model. Completeness of reporting in genetic risk prediction studies is adequate for general elements of study design but is suboptimal for several aspects that characterize genetic risk prediction studies, such as description of the model construction. Improvements in the transparency of reporting of these aspects would facilitate the identification, replication, and application of genetic risk prediction models. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. A prognostic index for natural killer cell lymphoma after non-anthracycline-based treatment: a multicentre, retrospective analysis.

    PubMed

    Kim, Seok Jin; Yoon, Dok Hyun; Jaccard, Arnaud; Chng, Wee Joo; Lim, Soon Thye; Hong, Huangming; Park, Yong; Chang, Kian Meng; Maeda, Yoshinobu; Ishida, Fumihiro; Shin, Dong-Yeop; Kim, Jin Seok; Jeong, Seong Hyun; Yang, Deok-Hwan; Jo, Jae-Cheol; Lee, Gyeong-Won; Choi, Chul Won; Lee, Won-Sik; Chen, Tsai-Yun; Kim, Kiyeun; Jung, Sin-Ho; Murayama, Tohru; Oki, Yasuhiro; Advani, Ranjana; d'Amore, Francesco; Schmitz, Norbert; Suh, Cheolwon; Suzuki, Ritsuro; Kwong, Yok Lam; Lin, Tong-Yu; Kim, Won Seog

    2016-03-01

    The clinical outcome of extranodal natural killer T-cell lymphoma (ENKTL) has improved substantially as a result of new treatment strategies with non-anthracycline-based chemotherapies and upfront use of concurrent chemoradiotherapy or radiotherapy. A new prognostic model based on the outcomes obtained with these contemporary treatments was warranted. We did a retrospective study of patients with newly diagnosed ENKTL without any previous treatment history for the disease who were given non-anthracycline-based chemotherapies with or without upfront concurrent chemoradiotherapy or radiotherapy with curative intent. A prognostic model to predict overall survival and progression-free survival on the basis of pretreatment clinical and laboratory characteristics was developed by fitting a multivariable model on the dataset with complete data for the selected risk factors, to obtain an unbiased prediction model. The final model was applied to the patients who had complete data for the selected risk factors. We did a validation analysis of the prognostic model in an independent cohort. We did multivariable analyses of 527 patients from 38 hospitals in 11 countries in the training cohort. Analyses showed that age greater than 60 years, stage III or IV disease, distant lymph-node involvement, and non-nasal type disease were significantly associated with overall survival and progression-free survival. We used these data as the basis for the prognostic index of natural killer lymphoma (PINK), in which patients are stratified into low-risk (no risk factors), intermediate-risk (one risk factor), or high-risk (two or more risk factors) groups, which were associated with 3-year overall survival of 81% (95% CI 75-86), 62% (55-70), and 25% (20-34), respectively. In the 328 patients with data for Epstein-Barr virus DNA, a detectable viral DNA titre was an independent prognostic factor for overall survival. 
When these data were added to PINK as the basis for another prognostic index (PINK-E), which had similar low-risk (zero or one risk factor), intermediate-risk (two risk factors), and high-risk (three or more risk factors) categories, significant associations with overall survival were noted: 81% (95% CI 75-87), 55% (44-66), and 28% (18-40), respectively. These results were validated and confirmed in an independent cohort, although the PINK-E model was only significantly associated with the high-risk group compared with the low-risk group. PINK and PINK-E are new prognostic models that can be used to develop risk-adapted treatment approaches for patients with ENKTL being treated in the contemporary era of non-anthracycline-based therapy. Funding: Samsung Biomedical Research Institute. Copyright © 2016 Elsevier Ltd. All rights reserved.
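
    The PINK stratification rule described above is simple enough to sketch directly. A minimal Python illustration follows; the function and argument names are ours, not from the paper, and the rule is only the factor-counting step, not the underlying survival model:

```python
def pink_group(age, ann_arbor_stage, distant_lymph_nodes, non_nasal_type):
    """Map the four PINK risk factors to a risk group:
    low (0 factors), intermediate (1 factor), high (2 or more)."""
    factors = sum([
        age > 60,                   # age greater than 60 years
        ann_arbor_stage >= 3,       # stage III or IV disease
        bool(distant_lymph_nodes),  # distant lymph-node involvement
        bool(non_nasal_type),       # non-nasal type disease
    ])
    if factors == 0:
        return "low"
    return "intermediate" if factors == 1 else "high"
```

    For example, a 65-year-old patient with stage I nasal disease and no distant nodes carries one risk factor and falls in the intermediate-risk group.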

  2. Data Sources for the Model-based Small Area Estimates of Cancer-Related Knowledge - Small Area Estimates

    Cancer.gov

    The model-based estimates of important cancer risk factors and screening behaviors are obtained by combining the responses to the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS).

  3. Multifactorial disease risk calculator: Risk prediction for multifactorial disease pedigrees.

    PubMed

    Campbell, Desmond D; Li, Yiming; Sham, Pak C

    2018-03-01

    Construction of multifactorial disease models from epidemiological findings and their application to disease pedigrees for risk prediction is nontrivial for all but the simplest of cases. Multifactorial Disease Risk Calculator is a web tool facilitating this. It provides a user-friendly interface, extending a reported methodology based on a liability-threshold model. Multifactorial disease models incorporating all the following features in combination are handled: quantitative risk factors (including polygenic scores), categorical risk factors (including major genetic risk loci), stratified age of onset curves, and the partition of the population variance in disease liability into genetic, shared, and unique environment effects. It allows the application of such models to disease pedigrees. Pedigree-related outputs are (i) individual disease risk for pedigree members, (ii) n year risk for unaffected pedigree members, and (iii) the disease pedigree's joint liability distribution. Risk prediction for each pedigree member is based on using the constructed disease model to appropriately weigh evidence on disease risk available from personal attributes and family history. Evidence is used to construct the disease pedigree's joint liability distribution. From this, lifetime and n year risk can be predicted. Example disease models and pedigrees are provided at the website and are used in accompanying tutorials to illustrate the features available. The website is built on an R package which provides the functionality for pedigree validation, disease model construction, and risk prediction. Website: http://grass.cgs.hku.hk:3838/mdrc/current. © 2017 WILEY PERIODICALS, INC.
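
    The liability-threshold machinery underlying such calculators can be sketched in a few lines. This is an illustrative simplification of the standard normal liability model, not the web tool's actual algorithm; the function name and parameterization are ours:

```python
from math import sqrt
from statistics import NormalDist

def liability_threshold_risk(prevalence, liability_shift, var_explained):
    """Disease risk under a liability-threshold model.

    Liability is N(0, 1) in the population; disease occurs above the
    threshold implied by the prevalence. `liability_shift` is the mean
    liability contributed by measured factors (e.g. a polygenic score)
    that explain `var_explained` of the liability variance.
    """
    threshold = NormalDist().inv_cdf(1.0 - prevalence)
    residual_sd = sqrt(1.0 - var_explained)
    return 1.0 - NormalDist(mu=liability_shift, sigma=residual_sd).cdf(threshold)
```

    With no measured risk factors the function reproduces the population prevalence; a positive liability shift raises the predicted risk above it.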

  4. A Risk Stratification Model for Lung Cancer Based on Gene Coexpression Network and Deep Learning

    PubMed Central

    2018-01-01

    Risk stratification models for lung cancer based on gene expression profiles are of great interest. Instead of previous models based on individual prognostic genes, we aimed to develop a novel system-level risk stratification model for lung adenocarcinoma based on gene coexpression networks. Using multiple microarray datasets, gene coexpression network analysis was performed to identify survival-related networks. A deep learning based risk stratification model was constructed with representative genes of these networks. The model was validated in two test sets. Survival analysis was performed using the output of the model to evaluate whether it could predict patients' survival independent of clinicopathological variables. Five networks were significantly associated with patients' survival. Considering prognostic significance and representativeness, genes of the two survival-related networks were selected as input for the model. The output of the model was significantly associated with patients' survival in the training set and both test sets (p < 0.00001, p < 0.0001, and p = 0.02 for the training set and test sets 1 and 2, respectively). In multivariate analyses, the model was associated with patients' prognosis independent of other clinicopathological features. Our study presents a new perspective on incorporating gene coexpression networks into the gene expression signature and on the clinical application of deep learning in genomic data science for prognosis prediction. PMID:29581968

  5. Clinical risk assessment of patients with chronic kidney disease by using clinical data and multivariate models.

    PubMed

    Chen, Zewei; Zhang, Xin; Zhang, Zhuoyong

    2016-12-01

    Timely risk assessment of chronic kidney disease (CKD) and proper community-based CKD monitoring are important to prevent patients with potential risk from further kidney injuries. As many symptoms are associated with the progressive development of CKD, evaluating the risk of CKD through a set of clinical symptom data coupled with multivariate models is an available method for the prevention of CKD and would be useful for community-based CKD monitoring. Three commonly used multivariate models, i.e., K-nearest neighbor (KNN), support vector machine (SVM), and soft independent modeling of class analogy (SIMCA), were used to evaluate the risk of 386 patients based on a series of clinical data taken from the UCI machine learning repository. Different types of composite data, in which proportional disturbances were added to simulate measurement deviations caused by environmental and instrument noise, were also utilized to evaluate the feasibility and robustness of these models in risk assessment of CKD. For the original data set, the three multivariate models differentiated patients with CKD from those without with overall accuracies over 93%. KNN and SVM performed better than SIMCA in this study. For the composite data set, the SVM model had the best ability to tolerate noise disturbance and was thus more robust than the other two models. Using a clinical symptom data set coupled with multivariate models proved to be a feasible approach for assessing patients with potential CKD risk. The SVM model can serve as a useful and robust tool in this setting.
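
    The study's composite-data robustness check (proportional disturbance of the features) can be sketched with a from-scratch KNN classifier. This is an illustration on invented synthetic points, not the study's code or data, and it covers only the KNN model of the three compared:

```python
import random

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among the k nearest training points
    (squared Euclidean distance), as in a basic KNN risk classifier."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(row, x)), label)
        for row, label in zip(train_X, train_y)
    )
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

def accuracy_under_noise(train_X, train_y, test_X, test_y, noise=0.0, seed=0):
    """Accuracy after multiplying each test feature by (1 ± noise),
    mimicking the proportional-disturbance composite data described above."""
    rng = random.Random(seed)
    correct = 0
    for x, y in zip(test_X, test_y):
        xp = [v * (1 + rng.uniform(-noise, noise)) for v in x]
        correct += knn_predict(train_X, train_y, xp) == y
    return correct / len(test_y)
```

    Sweeping `noise` upward and watching the accuracy fall is the kind of robustness comparison the abstract reports across KNN, SVM, and SIMCA.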

  6. The Zero Suicide Model: Applying Evidence-Based Suicide Prevention Practices to Clinical Care

    PubMed Central

    Brodsky, Beth S.; Spruch-Feiner, Aliza; Stanley, Barbara

    2018-01-01

    Suicide is reaching epidemic proportions, with over 44,000 deaths by suicide in the US and 800,000 worldwide in 2015, despite the research and development of evidence-based interventions that target suicidal behavior directly. Suicide prevention efforts need a comprehensive approach, and research must lead to effective implementation across public and mental health systems. A 10-year systematic review of evidence-based findings in suicide prevention summarized the areas necessary for translating research into practice. These include risk assessment, means restriction, evidence-based treatments, population screening combined with chain of care, monitoring, and follow-up. In this article, we review how suicide prevention research informs implementation in clinical settings where those most at risk present for care. Evidence-based and best practices address the fluctuating nature of suicide risk, which requires ongoing risk assessment, direct intervention, and monitoring. In the US, the National Action Alliance for Suicide Prevention has put forth the Zero Suicide (ZS) Model, a framework to coordinate a multilevel approach to implementing evidence-based practices. We present the Assess, Intervene and Monitor for Suicide Prevention model (AIM-SP) as a guide for implementation of ZS evidence-based and best practices in clinical settings. Ten basic steps of this clinical management model are described and illustrated through a case vignette. These steps are designed to be easily incorporated into standard clinical practice to enhance suicide risk assessment, to provide brief interventions that increase safety and teach coping strategies, and to improve ongoing contact and monitoring of high-risk individuals during transitions in care and high-risk periods. PMID:29527178

  7. Proceedings of the 2006 Toxicology and Risk Assessment Conference: Applying Mode of Action in Risk Assessment

    DTIC Science & Technology

    2006-07-01

    physiologically-based pharmacokinetic modeling of interactions and multiple route exposure assessment; and integrating relative potency factors with response...defaults, while at the other end is the use of extensive chemical-specific data in physiologically based pharmacokinetic (PBPK) modeling or even...for internal dosimetry as well as an in depth prospective on the use and limitations of physiologically based pharmacokinetic (PBPK) models in

  8. Fuzzy Risk Evaluation in Failure Mode and Effects Analysis Using a D Numbers Based Multi-Sensor Information Fusion Method.

    PubMed

    Deng, Xinyang; Jiang, Wen

    2017-09-12

    Failure mode and effect analysis (FMEA) is a useful tool to define, identify, and eliminate potential failures or errors so as to improve the reliability of systems, designs, and products. Risk evaluation is an important issue in FMEA to determine the risk priorities of failure modes. There are some shortcomings in the traditional risk priority number (RPN) approach for risk evaluation in FMEA, and fuzzy risk evaluation has become an important research direction that attracts increasing attention. In this paper, the fuzzy risk evaluation in FMEA is studied from a perspective of multi-sensor information fusion. By considering the non-exclusiveness between the evaluations of fuzzy linguistic variables to failure modes, a novel model called D numbers is used to model the non-exclusive fuzzy evaluations. A D numbers based multi-sensor information fusion method is proposed to establish a new model for fuzzy risk evaluation in FMEA. An illustrative example is provided and examined using the proposed model and another existing method to show the effectiveness of the proposed model.
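
    For contrast, the traditional RPN approach that the paper aims to improve on fits in a couple of lines. The failure-mode names and ratings in the example below are invented; the 1-10 severity/occurrence/detection scales are the conventional FMEA ones:

```python
def rank_failure_modes(modes):
    """Traditional FMEA ranking: risk priority number (RPN) =
    severity * occurrence * detection, each rated 1-10, sorted with the
    most critical failure mode first."""
    scored = [(name, s * o * d) for name, (s, o, d) in modes.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)
```

    The fuzzy/D-numbers approach replaces these crisp products with fused linguistic evaluations precisely because equal RPNs can hide very different risk profiles.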

  9. Fuzzy Risk Evaluation in Failure Mode and Effects Analysis Using a D Numbers Based Multi-Sensor Information Fusion Method

    PubMed Central

    Deng, Xinyang

    2017-01-01

    Failure mode and effect analysis (FMEA) is a useful tool to define, identify, and eliminate potential failures or errors so as to improve the reliability of systems, designs, and products. Risk evaluation is an important issue in FMEA to determine the risk priorities of failure modes. There are some shortcomings in the traditional risk priority number (RPN) approach for risk evaluation in FMEA, and fuzzy risk evaluation has become an important research direction that attracts increasing attention. In this paper, the fuzzy risk evaluation in FMEA is studied from a perspective of multi-sensor information fusion. By considering the non-exclusiveness between the evaluations of fuzzy linguistic variables to failure modes, a novel model called D numbers is used to model the non-exclusive fuzzy evaluations. A D numbers based multi-sensor information fusion method is proposed to establish a new model for fuzzy risk evaluation in FMEA. An illustrative example is provided and examined using the proposed model and another existing method to show the effectiveness of the proposed model. PMID:28895905

  10. Semicompeting risks in aging research: methods, issues and needs

    PubMed Central

    Varadhan, Ravi; Xue, Qian-Li; Bandeen-Roche, Karen

    2015-01-01

    A semicompeting risks problem involves two types of events: a nonterminal event and a terminal event (death). Typically, the nonterminal event is the focus of the study, but the terminal event can preclude the occurrence of the nonterminal event. Semicompeting risks are ubiquitous in studies of aging. Examples of semicompeting risk dyads include: dementia and death, frailty syndrome and death, disability and death, and nursing home placement and death. Semicompeting risk models can be divided into two broad classes: models based only on observable quantities (class O) and those based on potential (latent) failure times (class L). The classical illness-death model belongs to class O. This model is a special case of the multistate models, which have been an active area of methodology development. During the past decade and a half, there has also been a flurry of methodological activity on semicompeting risks based on latent failure times (class L models). These advances notwithstanding, the semicompeting risks methodology has not penetrated biomedical research in general, and gerontological research in particular. Some possible reasons for this lack of uptake are: the methods are relatively new and sophisticated, the conceptual problems associated with potential failure time models are difficult to overcome, a paucity of expository articles aimed at educating practitioners, and the non-availability of readily usable software. The main goals of this review article are: (i) to describe the major types of semicompeting risks problems arising in aging research, (ii) to provide a brief survey of the semicompeting risks methods, (iii) to suggest appropriate methods for addressing the problems in aging research, (iv) to highlight areas where more work is needed, and (v) to suggest ways to facilitate the uptake of the semicompeting risks methodology by the broader biomedical research community. PMID:24729136

  11. Reduction of spatial distribution of risk factors for transportation of contaminants released by coal mining activities.

    PubMed

    Karan, Shivesh Kishore; Samadder, Sukha Ranjan

    2016-09-15

    The water-energy nexus is reported to pose two of the biggest development and human health challenges. In the present study we present a Risk Potential Index (RPI) model that encapsulates Source, Vector (Transport), and Target risks for forecasting surface water contamination. The main aim of the model is to identify critical surface water risk zones in an open cast mining environment, taking Jharia Coalfield, India as the study area. The model also supports feasible sampling design. Based on spatial analysis, various risk zones were successfully delineated. The monthly RPI distribution revealed that the risk of surface water contamination was highest during the monsoon months. Surface water samples were analysed to validate the model. A GIS-based alternative management option was proposed to reduce surface water contamination risk, yielding 96% and 86% decreases in the spatial distribution of very-high-risk areas for the months of June and July, respectively. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Model Development for Risk Assessment of Driving on Freeway under Rainy Weather Conditions

    PubMed Central

    Cai, Xiaonan; Wang, Chen; Chen, Shengdi; Lu, Jian

    2016-01-01

    Rainy weather conditions could result in significantly negative impacts on driving on freeways. However, due to lack of enough historical data and monitoring facilities, many regions are not able to establish reliable risk assessment models to identify such impacts. Given the situation, this paper provides an alternative solution where the procedure of risk assessment is developed based on drivers’ subjective questionnaire and its performance is validated by using actual crash data. First, an ordered logit model was developed, based on questionnaire data collected from Freeway G15 in China, to estimate the relationship between drivers’ perceived risk and factors, including vehicle type, rain intensity, traffic volume, and location. Then, weighted driving risk for different conditions was obtained by the model, and further divided into four levels of early warning (specified by colors) using a rank order cluster analysis. After that, a risk matrix was established to determine which warning color should be disseminated to drivers, given a specific condition. Finally, to validate the proposed procedure, actual crash data from Freeway G15 were compared with the safety prediction based on the risk matrix. The results show that the risk matrix obtained in the study is able to predict driving risk consistent with actual safety implications, under rainy weather conditions. PMID:26894434
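
    The ordered logit step described above turns a linear risk score into probabilities over ordered warning levels. A minimal sketch follows; the cutpoints and any coefficients used with it are illustrative placeholders, not the values fitted from the Freeway G15 questionnaire data:

```python
from math import exp

def _logistic_cdf(z):
    """Standard logistic CDF used by the ordered logit link."""
    return 1.0 / (1.0 + exp(-z))

def ordered_logit_probs(linear_risk, cutpoints):
    """Probabilities of each ordered level (e.g. four warning colors)
    given a linear risk score and strictly increasing cutpoints:
    P(Y <= j) = logistic(c_j - score), differenced across levels."""
    cdfs = [_logistic_cdf(c - linear_risk) for c in cutpoints] + [1.0]
    probs, prev = [], 0.0
    for c in cdfs:
        probs.append(c - prev)
        prev = c
    return probs
```

    Raising the risk score (e.g. heavier rain, higher volume) shifts probability mass toward the highest warning level, which is the behavior the risk matrix discretizes into colors.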

  13. Mean-variance model for portfolio optimization with background risk based on uncertainty theory

    NASA Astrophysics Data System (ADS)

    Zhai, Jia; Bai, Manying

    2018-04-01

    The aim of this paper is to develop a mean-variance model for portfolio optimization considering background risk, liquidity, and transaction cost based on uncertainty theory. In the portfolio selection problem, returns of securities and asset liquidity are treated as uncertain variables because of unforeseen incidents or a lack of historical data, which are common in economic and social environments. We provide crisp forms of the model and a hybrid intelligent algorithm to solve it. Under a mean-variance framework, we analyze the portfolio frontier characteristics considering independently additive background risk. In addition, we discuss some effects of background risk and the liquidity constraint on portfolio selection. Finally, we demonstrate the proposed models by numerical simulations.
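
    The "independently additive background risk" setting has a simple crisp counterpart: background risk adds its mean to the portfolio mean and its variance to the portfolio variance. The sketch below shows only that classical computation, not the paper's uncertainty-theory machinery or its hybrid intelligent algorithm; all numbers in the example are invented:

```python
def portfolio_mean_variance(weights, means, cov, bg_mean=0.0, bg_var=0.0):
    """Mean and variance of w'r + Z for portfolio weights w, asset returns r
    with mean vector `means` and covariance matrix `cov`, and an independent
    additive background risk Z (e.g. labor income) with the given moments."""
    n = len(weights)
    mean = sum(w * m for w, m in zip(weights, means)) + bg_mean
    var = sum(weights[i] * cov[i][j] * weights[j]
              for i in range(n) for j in range(n)) + bg_var
    return mean, var
```

    Because the background variance enters as a constant offset, it shifts the attainable frontier without changing which portfolio is variance-minimizing, which is the kind of effect the paper analyzes.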

  14. Software for occupational health and safety risk analysis based on a fuzzy model.

    PubMed

    Stefanovic, Miladin; Tadic, Danijela; Djapan, Marko; Macuzic, Ivan

    2012-01-01

    Risk and safety management are very important issues in healthcare systems. Those are complex systems with many entities, hazards and uncertainties. In such an environment, it is very hard to introduce a system for evaluating and simulating significant hazards. In this paper, we analyzed different types of hazards in healthcare systems and we introduced a new fuzzy model for evaluating and ranking hazards. Finally, we presented a developed software solution, based on the suggested fuzzy model for evaluating and monitoring risk.

  15. Development of a risk assessment tool for projecting individualized probabilities of developing breast cancer for Chinese women.

    PubMed

    Wang, Yuan; Gao, Ying; Battsend, Munkhzul; Chen, Kexin; Lu, Wenli; Wang, Yaogang

    2014-11-01

    The optimal approach regarding breast cancer screening for Chinese women is unclear due to the relatively low incidence rate. A risk assessment tool may be useful for selecting high-risk subsets of the population for mammography screening in a low-incidence, resource-limited developing country. The odds ratios for six main risk factors of breast cancer were pooled using Review Manager after a systematic literature search. A health risk appraisal (HRA) model was developed to predict an individual's risk of developing breast cancer in the next 5 years from current age. The performance of this HRA model was assessed based on a first-round screening database. Estimated risk of breast cancer increased with age. Increases in the 5-year risk of developing breast cancer were found with the presence of any of the included risk factors. When individuals who had risk above the median risk (3.3‰) were selected from the validation database, the sensitivity was 60.0% and the specificity was 47.8%. The unweighted area under the curve (AUC) was 0.64 (95% CI = 0.50-0.78). The risk-prediction model reported in this article is based on a combination of risk factors and shows good overall predictive power, but it is still weak at predicting which particular women will develop the disease. The model would be much improved if more population-based prospective follow-up studies were available for validation.
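
    The general shape of such an HRA calculation can be sketched as follows: convert a baseline 5-year risk to odds, scale by the product of odds ratios for the factors an individual carries (assuming independence), and convert back. This is a generic illustration, not the study's fitted model, and any OR values used with it are invented:

```python
from functools import reduce

def five_year_risk(baseline_risk, odds_ratios):
    """HRA-style sketch: scale baseline odds by the product of the odds
    ratios for an individual's risk factors, then return a probability.
    Assumes the factors act multiplicatively and independently on the odds."""
    combined = reduce(lambda a, b: a * b, odds_ratios, 1.0)
    odds = baseline_risk / (1.0 - baseline_risk) * combined
    return odds / (1.0 + odds)
```

    With no risk factors the baseline risk is returned unchanged; each OR above 1 pushes the individual's 5-year risk above the population median.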

  16. A Comparison of Rule-based Analysis with Regression Methods in Understanding the Risk Factors for Study Withdrawal in a Pediatric Study.

    PubMed

    Haghighi, Mona; Johnson, Suzanne Bennett; Qian, Xiaoning; Lynch, Kristian F; Vehik, Kendra; Huang, Shuai

    2016-08-26

    Regression models are extensively used in many epidemiological studies to understand the linkage between specific outcomes of interest and their risk factors. However, regression models in general examine the average effects of the risk factors and ignore subgroups with different risk profiles. As a result, interventions are often geared towards the average member of the population, without consideration of the special health needs of different subgroups within the population. This paper demonstrates the value of using rule-based analysis methods that can identify subgroups with heterogeneous risk profiles in a population without imposing assumptions on the subgroups or the method. The rules define the risk pattern of subsets of individuals by considering not only the interactions between the risk factors but also their ranges. We compared the rule-based analysis results with the results from a logistic regression model in The Environmental Determinants of Diabetes in the Young (TEDDY) study. Both methods detected a similar suite of risk factors, but the rule-based analysis was superior at detecting multiple interactions between the risk factors that characterize the subgroups. A further investigation of the particular characteristics of each subgroup may detect the special health needs of the subgroup and lead to tailored interventions.

  17. Dynamic drought risk assessment using crop model and remote sensing techniques

    NASA Astrophysics Data System (ADS)

    Sun, H.; Su, Z.; Lv, J.; Li, L.; Wang, Y.

    2017-02-01

    Drought risk assessment is of great significance for reducing agricultural drought losses and ensuring food security. The conventional drought risk assessment method evaluates a region's exposure to the hazard and its vulnerability to extended periods of water shortage, which is a static evaluation. Dynamic Drought Risk Assessment (DDRA) estimates drought risk according to crop growth and water stress conditions in real time. In this study, a DDRA method using a crop model and remote sensing techniques was proposed. The crop model we employed is the DeNitrification and DeComposition (DNDC) model. Drought risk was quantified by the yield losses predicted by the crop model in a scenario-based method. The crop model was re-calibrated to improve its performance using the Leaf Area Index (LAI) retrieved from MODerate Resolution Imaging Spectroradiometer (MODIS) data. The in-situ, station-based crop model was then extended to assess regional drought risk by integrating crop planting maps. The crop planted area was extracted from MODIS data with the extended CPPI method. The approach was implemented and validated for maize in Liaoning Province, China.

  18. Comparison of robustness to outliers between robust poisson models and log-binomial models when estimating relative risks for common binary outcomes: a simulation study.

    PubMed

    Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P

    2014-06-26

    To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson and the log-binomial regression. Of the two methods, it is believed that the log-binomial regression yields more efficient estimators because it is maximum likelihood based, while the robust Poisson model may be less affected by outliers. Evidence to support the robustness of robust Poisson models in comparison with log-binomial models is very limited. In this study a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g. log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher order term, the robust Poisson models consistently outperformed the log-binomial models even when the level of contamination is low. The robust Poisson models are more robust (or less sensitive) to outliers compared to the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of the limitations when choosing appropriate models to estimate relative risks or risk ratios.
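
    The motivation for both models is that the odds ratio from ordinary logistic regression overstates the risk ratio when the outcome is common, which is why RR-targeting estimators (log-binomial, robust Poisson) are preferred here. A tiny 2x2-table illustration (all counts invented):

```python
def risk_ratio_and_odds_ratio(a, b, c, d):
    """2x2 table: a cases / b non-cases among the exposed,
    c cases / d non-cases among the unexposed. Both the log-binomial and
    the robust Poisson regressions estimate the risk ratio (on the log
    scale) once covariates are added; the odds ratio is what logistic
    regression would report."""
    rr = (a / (a + b)) / (c / (c + d))
    or_ = (a * d) / (b * c)
    return rr, or_
```

    For a 40% vs 20% risk contrast the RR is 2.0 while the OR is about 2.67, and the gap widens as the outcome becomes more common.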

  19. External validation of risk prediction models for incident colorectal cancer using UK Biobank

    PubMed Central

    Usher-Smith, J A; Harshfield, A; Saunders, C L; Sharp, S J; Emery, J; Walter, F M; Muir, K; Griffin, S J

    2018-01-01

    Background: This study aimed to compare and externally validate risk scores developed to predict incident colorectal cancer (CRC) that include variables routinely available or easily obtainable via self-completed questionnaire. Methods: External validation of fourteen risk models from a previous systematic review in 373 112 men and women within the UK Biobank cohort with 5-year follow-up, no prior history of CRC and data for incidence of CRC through linkage to national cancer registries. Results: There were 1719 (0.46%) cases of incident CRC. The performance of the risk models varied substantially. In men, the QCancer10 model and models by Tao, Driver and Ma all had an area under the receiver operating characteristic curve (AUC) between 0.67 and 0.70. Discrimination was lower in women: the QCancer10, Wells, Tao, Guesmi and Ma models were the best performing with AUCs between 0.63 and 0.66. Assessment of calibration was possible for six models in men and women. All would require country-specific recalibration if estimates of absolute risks were to be given to individuals. Conclusions: Several risk models based on easily obtainable data have relatively good discrimination in a UK population. Modelling studies are now required to estimate the potential health benefits and cost-effectiveness of implementing stratified risk-based CRC screening. PMID:29381683
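
    The discrimination statistic reported above (AUC) can be computed without any fitting via its rank-sum identity: the probability that a randomly chosen case scores higher than a randomly chosen non-case. A minimal implementation, not the study's code, for illustration:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney identity:
    the fraction of (case, non-case) pairs in which the case's risk score
    is higher, counting ties as half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

    An AUC of 0.67-0.70, as for the best male models, means roughly a two-in-three chance that a future case is ranked above a non-case.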

  20. Intelligent judgements over health risks in a spatial agent-based model.

    PubMed

    Abdulkareem, Shaheen A; Augustijn, Ellen-Wien; Mustafa, Yaseen T; Filatova, Tatiana

    2018-03-20

    Millions of people worldwide are exposed to deadly infectious diseases on a regular basis. Breaking news of the Zika outbreak, for instance, made international headlines. Perceived disease risk motivates people to adapt their behavior toward a safer and more protective lifestyle. Computational science is instrumental in exploring patterns of disease spread emerging from many individual decisions and interactions among agents and their environment by means of agent-based models. Yet, current disease models rarely consider simulating dynamics in risk perception and its impact on adaptive protective behavior. Social sciences offer insights into individual risk perception and corresponding protective actions, while machine learning provides algorithms and methods to capture these learning processes. This article presents an innovative approach to extending agent-based disease models by capturing behavioral aspects of decision-making in a risky context using machine learning techniques. We illustrate it with a case of cholera in Kumasi, Ghana, accounting for spatial and social risk factors that affect intelligent behavior and corresponding disease incidents. The results of computational experiments comparing intelligent with zero-intelligent representations of agents in a spatial disease agent-based model are discussed. We present a spatial disease agent-based model (ABM) with agents' behavior grounded in Protection Motivation Theory. Spatial and temporal patterns of disease diffusion among zero-intelligent agents are compared to those produced by a population of intelligent agents. Two Bayesian Networks (BNs) were designed and coded in R and integrated with the NetLogo-based cholera ABM. The first is a one-tier network, BN1 (risk perception only); the second is a two-tier network, BN2 (risk perception and coping behavior).
We run three experiments (zero-intelligent agents, BN1 intelligence and BN2 intelligence) and report the results per experiment in terms of several macro metrics of interest: an epidemic curve, a risk perception curve, and a distribution of different types of coping strategies over time. Our results emphasize the importance of integrating behavioral aspects of decision making under risk into spatial disease ABMs using machine learning algorithms. This is especially relevant when studying cumulative impacts of behavioral changes and possible intervention strategies.
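
    The one-tier "BN1" idea, risk perception driving behavior, reduces in its simplest form to a two-link probability chain. The sketch below is purely illustrative: the actual BNs were richer, coded in R, and coupled to NetLogo, and every conditional probability here is made up:

```python
def protective_action_prob(p_cue, p_risk_given_cue, p_act_given_risk):
    """Minimal BN1-style chain: an agent observes an environmental cue
    (e.g. nearby cholera cases) with probability p_cue, forms risk
    perception conditioned on the cue, and acts protectively conditioned
    on perceived risk. Each conditional is a (given_true, given_false)
    pair of probabilities; marginalization is done by hand."""
    p_risk = p_risk_given_cue[0] * p_cue + p_risk_given_cue[1] * (1 - p_cue)
    return p_act_given_risk[0] * p_risk + p_act_given_risk[1] * (1 - p_risk)
```

    Sweeping `p_cue` over the course of an outbreak yields exactly the kind of risk perception curve the experiments report at the population level.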

  1. [The application of two occupation health risk assessment models in a wooden furniture manufacturing industry].

    PubMed

    Wang, A H; Leng, P B; Bian, G L; Li, X H; Mao, G C; Zhang, M B

    2016-10-20

    Objective: To explore the applicability of two different occupational health risk assessment models in the wooden furniture manufacturing industry. Methods: The American EPA inhalation risk model and the ICMM occupational health risk assessment model were each applied to assess occupational health risk in a small wooden furniture enterprise. Results: Occupational disease protective measures and equipment in the plant were poor. The concentration of wood dust in the air of two workshops exceeded the occupational exposure limit (OEL), with C-TWA values of 8.9 mg/m³ and 3.6 mg/m³, respectively. According to the EPA model, workers exposed to benzene in this plant had a high risk (9.7×10⁻⁶ to 34.3×10⁻⁶) of leukemia, and those exposed to formaldehyde had a high risk (11.4×10⁻⁶) of squamous cell carcinoma. The two ICMM tools, the standard-based matrix and the calculated risk rating, gave inconsistent evaluation results: for workers exposed to wood dust, the risk of rhinocarcinoma was very high under the calculated risk rating but high under the standard-based matrix, and for workers exposed to noise, the risk of noise-induced deafness was rated unacceptable and medium by the two tools, respectively. Conclusion: Both the EPA model and the ICMM model can appropriately predict and assess occupational health risk in wooden furniture manufacturing; the ICMM model, with its relatively simple operation, easily obtained evaluation parameters, and comprehensive assessment of occupational-disease-inducing factors, is more suitable for wooden furniture production enterprises.
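
    The EPA inhalation model used above follows the standard form: a lifetime-averaged exposure concentration multiplied by the pollutant's inhalation unit risk (IUR). A sketch with the usual averaging terms; the occupational default durations below are illustrative assumptions, not the study's inputs:

```python
def epa_inhalation_cancer_risk(conc_ug_m3, iur_per_ug_m3,
                               hours_day=8, days_year=250, years=25,
                               lifetime_years=70):
    """EPA-style excess lifetime cancer risk for inhalation:
    EC = concentration * (exposure time / averaging time), averaged over a
    70-year lifetime; risk = EC * IUR. Defaults sketch an occupational
    exposure pattern and are placeholders."""
    ec = conc_ug_m3 * (hours_day / 24) * (days_year / 365) * (years / lifetime_years)
    return ec * iur_per_ug_m3
```

    Risks above about 1×10⁻⁶, like the benzene and formaldehyde values reported above, are conventionally flagged as exceeding the negligible-risk benchmark.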

  2. Ensemble Flow Forecasts for Risk Based Reservoir Operations of Lake Mendocino in Mendocino County, California

    NASA Astrophysics Data System (ADS)

    Delaney, C.; Hartman, R. K.; Mendoza, J.; Evans, K. M.; Evett, S.

    2016-12-01

    Forecast informed reservoir operations (FIRO) is a methodology that incorporates short- to mid-range precipitation or flow forecasts to inform the flood operations of reservoirs. Previous research and modeling for flood control reservoirs have shown that FIRO can reduce flood risk and increase water supply for many reservoirs. The risk-based method of FIRO presents a unique approach that incorporates flow forecasts made by NOAA's California-Nevada River Forecast Center (CNRFC) to model and assess the risk of meeting or exceeding identified management targets or thresholds. Forecasted risk is evaluated against set risk tolerances to set reservoir flood releases. A water management model was developed for Lake Mendocino, a 116,500 acre-foot reservoir located near Ukiah, California. Lake Mendocino is a dual-use reservoir, owned and operated for flood control by the United States Army Corps of Engineers and operated for water supply by the Sonoma County Water Agency. Due to recent changes in the operations of an upstream hydroelectric facility, the reservoir has been plagued with water supply reliability issues since 2007. FIRO is applied to Lake Mendocino by simulating daily hydrologic conditions from 1985 to 2010 in the Upper Russian River from Lake Mendocino to the City of Healdsburg, approximately 50 miles downstream. The risk-based method is simulated using a 15-day, 61-member streamflow hindcast from the CNRFC. Model simulation results of risk-based flood operations demonstrate a 23% increase in average end-of-water-year (September 30) storage levels over current operations. Model results show no increase in the occurrence of flood damages at points downstream of Lake Mendocino. This investigation demonstrates that FIRO may be a viable flood control operations approach for Lake Mendocino and warrants further investigation through additional modeling and analysis.
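
    The core decision step, comparing forecasted risk from an ensemble against a risk tolerance, can be sketched very simply. This is a toy reduction of the approach: the actual water management model routes flows and evaluates multiple downstream thresholds, and the numbers below are invented:

```python
def forecast_exceedance_risk(ensemble_storage, threshold):
    """Forecasted risk as the fraction of ensemble members (e.g. the
    61-member CNRFC hindcast) that put storage at or above a management
    threshold."""
    return sum(s >= threshold for s in ensemble_storage) / len(ensemble_storage)

def should_release(ensemble_storage, threshold, risk_tolerance):
    """Risk-based release trigger: release water only when forecasted
    exceedance risk is above the set tolerance."""
    return forecast_exceedance_risk(ensemble_storage, threshold) > risk_tolerance
```

    Holding water whenever the forecasted risk stays under tolerance is what lets the scheme carry more storage into the end of the water year without increasing flood occurrence downstream.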

  3. Effect of risk-based payment model on caries inequalities in preschool children assessed by geo-mapping.

    PubMed

    Holmén, Anders; Strömberg, Ulf; Håkansson, Gunnel; Twetman, Svante

    2018-01-05

    To describe, with the aid of geo-mapping, the effects of a risk-based capitation model linked to caries-preventive guidelines on the polarization of caries in preschool children living in the Halland region of Sweden. The new capitation model was implemented in 2013: more money was allocated to Public Dental Clinics surrounded by administrative parishes inhabited by children with increased caries risk, while a reduced capitation was allocated to clinics with a low burden of high-risk children. Regional geo-maps of caries risk based on caries prevalence, level of education and family purchasing power were produced for 3-6-year-old children in 2010 (n = 10,583) and 2016 (n = 7574). Children newly migrated to the region (n = 344 in 2010 and n = 522 in 2016) were analyzed separately. A regional caries polarization index was calculated as the ratio between the maximum and minimum estimates of caries frequency at parish level, based on a Bayesian hierarchical mapping model. Overall, the total caries prevalence (dmfs > 0) remained unchanged from 2010 (10.6%) to 2016 (10.5%). However, the polarization index decreased from 7.0 in 2010 to 5.6 in 2016. Newly arrived children born outside Sweden had around four times higher caries prevalence than their Swedish-born peers. A risk-based capitation model could reduce the socio-economic inequalities in dental caries among preschool children living in Sweden. Although updated evidence-based caries-preventive guidelines were released, the total prevalence of caries at the dentin surface level was unaffected 4 years after the implementation.
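
    The polarization index used in this record is a simple max/min ratio of parish-level caries frequency. A sketch, with hypothetical parish frequencies (the paper smooths the parish estimates with a Bayesian hierarchical model; raw frequencies are used here for illustration):

```python
# Polarization index: ratio of the highest to the lowest parish-level
# caries frequency. Parish labels and frequencies are hypothetical.

def polarization_index(parish_caries_frequencies):
    """Max/min ratio of caries frequency across parishes."""
    return max(parish_caries_frequencies) / min(parish_caries_frequencies)

freq_2010 = {"Parish A": 0.28, "Parish B": 0.04, "Parish C": 0.11}
index = polarization_index(freq_2010.values())    # 0.28 / 0.04 = 7.0
```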

  4. Prototype Biology-Based Radiation Risk Module Project

    NASA Technical Reports Server (NTRS)

    Terrier, Douglas; Clayton, Ronald G.; Patel, Zarana; Hu, Shaowen; Huff, Janice

    2015-01-01

    Biological effects of space radiation and risk mitigation are strategic knowledge gaps for the Evolvable Mars Campaign. The current epidemiology-based NASA Space Cancer Risk (NSCR) model contains large uncertainties (HAT #6.5a) due to lack of information on the radiobiology of galactic cosmic rays (GCR) and lack of human data. The use of experimental models that most accurately replicate the response of human tissues is critical for precision in risk projections. Our proposed study will compare DNA damage, histological, and cell kinetic parameters after irradiation in normal 2D human cells versus 3D tissue models, and it will use a multi-scale computational model (CHASTE) to investigate various biological processes that may contribute to carcinogenesis, including radiation-induced cellular signaling pathways. This cross-disciplinary work, with biological validation of an evolvable mathematical computational model, will help reduce uncertainties within NSCR and aid risk mitigation for radiation-induced carcinogenesis.

  5. Model of areas for identifying risks influencing the compliance of technological processes and products

    NASA Astrophysics Data System (ADS)

    Misztal, A.; Belu, N.

    2016-08-01

    Operation of every company is associated with the risk of interference with the proper performance of its fundamental processes. This risk is associated with various internal areas of the company, as well as the environment in which it operates. From the point of view of ensuring compliance of the course of specific technological processes and, consequently, product conformity with requirements, it is important to identify these threats and eliminate or reduce the risk of their occurrence. The purpose of this article is to present a model of areas for identifying risks influencing the compliance of processes and products, which is based on multiregional targeted monitoring of typical places of interference and risk management methods. The model is based on the verification of risk analyses carried out in small and medium-sized manufacturing companies in various industries.

  6. Characterizing Uncertainty and Variability in PBPK Models: State of the Science and Needs for Research and Implementation

    EPA Science Inventory

    Mode-of-action based risk and safety assessments can rely upon tissue dosimetry estimates in animals and humans obtained from physiologically-based pharmacokinetic (PBPK) modeling. However, risk assessment also increasingly requires characterization of uncertainty and variability...

  7. Incorporating biologically based models into assessments of risk from chemical contaminants

    NASA Technical Reports Server (NTRS)

    Bull, R. J.; Conolly, R. B.; De Marini, D. M.; MacPhail, R. C.; Ohanian, E. V.; Swenberg, J. A.

    1993-01-01

    The general approach to assessment of risk from chemical contaminants in drinking water involves three steps: hazard identification, exposure assessment, and dose-response assessment. Traditionally, the risks to humans associated with different levels of a chemical have been derived from the toxic responses observed in animals. It is becoming increasingly clear, however, that further information is needed if risks to humans are to be assessed accurately. Biologically based models help clarify the dose-response relationship and reduce uncertainty.

  8. Quantitative analysis of factors that affect oil pipeline network accident based on Bayesian networks: A case study in China

    NASA Astrophysics Data System (ADS)

    Zhang, Chao; Qin, Ting Xin; Huang, Shuai; Wu, Jian Song; Meng, Xin Yan

    2018-06-01

    Some factors can affect the consequences of an oil pipeline accident, and their effects should be analyzed to improve emergency preparation and emergency response. Although there are qualitative models of risk factors' effects, quantitative models still need to be researched. In this study, we introduce a Bayesian network (BN) model for analyzing risk factors' effects in an oil pipeline accident case that happened in China. The incident evolution diagram is built to identify the risk factors, and the BN model is built based on the deployment rule for factor nodes in the BN and on expert knowledge combined by Dempster-Shafer evidence theory. The probabilities of incident consequences and risk factors' effects can then be calculated. The most likely consequences given by this model are consistent with the case. Meanwhile, the quantitative estimates of risk factors' effects may provide a theoretical basis for taking optimal risk treatment measures in oil pipeline management, which can be used in emergency preparation and emergency response.
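
    The probability calculation in such a BN can be illustrated by full enumeration over a toy network. The structure (Leak -> Fire -> Casualties) and all probabilities below are hypothetical, chosen only to show the mechanics, not taken from the paper's network or its Dempster-Shafer elicitation.

```python
# Toy Bayesian network: Leak -> Fire, with casualties depending on both.
# All structure and numbers are hypothetical illustrations.

p_leak = 0.3
p_fire_given_leak = {True: 0.4, False: 0.05}
p_cas_given = {(True, True): 0.6, (True, False): 0.1,
               (False, True): 0.3, (False, False): 0.01}

def p_casualties():
    """Marginal probability of casualties by full enumeration."""
    total = 0.0
    for leak in (True, False):
        pl = p_leak if leak else 1 - p_leak
        for fire in (True, False):
            pf = p_fire_given_leak[leak] if fire else 1 - p_fire_given_leak[leak]
            total += pl * pf * p_cas_given[(leak, fire)]
    return total
```

Enumerating the four joint states gives P(casualties) = 0.072 + 0.018 + 0.0105 + 0.00665 = 0.10715 for these toy numbers.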

  9. A global airport-based risk model for the spread of dengue infection via the air transport network.

    PubMed

    Gardner, Lauren; Sarkar, Sahotra

    2013-01-01

    The number of travel-acquired dengue infections has seen a consistent global rise over the past decade. An increased volume of international passenger air traffic originating from regions with endemic dengue has contributed to a rise in the number of dengue cases in both areas of endemicity and elsewhere. This paper reports results from a network-based risk assessment model which uses international passenger travel volumes, travel routes, travel distances, regional populations, and predictive species distribution models (for the two vector species, Aedes aegypti and Aedes albopictus) to quantify the relative risk posed by each airport in importing passengers with travel-acquired dengue infections. Two risk attributes are evaluated: (i) the risk posed by through traffic at each stopover airport and (ii) the risk posed by incoming travelers to each destination airport. The model results prioritize optimal locations (i.e., airports) for targeted dengue surveillance. The model is easily extendible to other vector-borne diseases.
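
    The destination-airport risk attribute described above can be sketched as a suitability-weighted sum over incoming passenger flows. The airports, volumes, and suitability weights below are hypothetical; the published model additionally uses routes, travel distances, regional populations, and the vector species distribution models.

```python
# Relative import-risk sketch: each airport's risk is taken as the sum of
# incoming passenger volumes weighted by the dengue suitability of the
# origin region (a stand-in for the species distribution model output).
# All data below are hypothetical.

def import_risk(incoming):
    """incoming: list of (passenger_volume, origin_suitability in [0, 1])."""
    return sum(volume * suitability for volume, suitability in incoming)

def rank_airports(airports):
    """Order airports by relative import risk, highest first."""
    scores = {name: import_risk(flows) for name, flows in airports.items()}
    return sorted(scores, key=scores.get, reverse=True)

airports = {
    "MIA": [(120_000, 0.8), (40_000, 0.2)],   # heavy endemic-region traffic
    "JFK": [(200_000, 0.3), (90_000, 0.1)],   # larger volume, lower suitability
}
priority = rank_airports(airports)
```

Here MIA outranks JFK despite lower total volume, illustrating how the weighting prioritizes surveillance locations.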

  10. A Global Airport-Based Risk Model for the Spread of Dengue Infection via the Air Transport Network

    PubMed Central

    Gardner, Lauren; Sarkar, Sahotra

    2013-01-01

    The number of travel-acquired dengue infections has seen a consistent global rise over the past decade. An increased volume of international passenger air traffic originating from regions with endemic dengue has contributed to a rise in the number of dengue cases in both areas of endemicity and elsewhere. This paper reports results from a network-based risk assessment model which uses international passenger travel volumes, travel routes, travel distances, regional populations, and predictive species distribution models (for the two vector species, Aedes aegypti and Aedes albopictus) to quantify the relative risk posed by each airport in importing passengers with travel-acquired dengue infections. Two risk attributes are evaluated: (i) the risk posed by through traffic at each stopover airport and (ii) the risk posed by incoming travelers to each destination airport. The model results prioritize optimal locations (i.e., airports) for targeted dengue surveillance. The model is easily extendible to other vector-borne diseases. PMID:24009672

  11. Software risk estimation and management techniques at JPL

    NASA Technical Reports Server (NTRS)

    Hihn, J.; Lum, K.

    2002-01-01

    In this talk we discuss how uncertainty has been incorporated into the JPL software model to produce probabilistic-based estimates, and how cost risk is currently being explored via a variety of approaches, from traditional risk lists to detailed WBS-based risk estimates and the Defect Detection and Prevention (DDP) tool.

  12. Is risk analysis scientific?

    PubMed

    Hansson, Sven Ove; Aven, Terje

    2014-07-01

    This article discusses to what extent risk analysis is scientific in view of a set of commonly used definitions and criteria. We consider scientific knowledge to be characterized by its subject matter, its success in developing the best available knowledge in its fields of study, and the epistemic norms and values that guide scientific investigations. We proceed to assess the field of risk analysis according to these criteria. For this purpose, we use a model for risk analysis in which science serves as a base for decision making on risks; the model covers five elements (evidence, knowledge base, broad risk evaluation, managerial review and judgment, and the decision) and relates these elements to the domains of experts and decision makers and to the fact-based and value-based domains. We conclude that risk analysis is a scientific field of study when understood as consisting primarily of (i) knowledge about risk-related phenomena, processes, events, etc., and (ii) concepts, theories, frameworks, approaches, principles, methods and models to understand, assess, characterize, communicate, and manage risk, in general and for specific applications (the instrumental part). © 2014 Society for Risk Analysis.

  13. In situ remediation-released zero-valent iron nanoparticles impair soil ecosystems health: A C. elegans biomarker-based risk assessment.

    PubMed

    Yang, Ying-Fei; Cheng, Yi-Hsien; Liao, Chung-Min

    2016-11-05

    There is considerable concern over the potential ecotoxicity to soil ecosystems posed by zero-valent iron nanoparticles (Fe(0) NPs) released from in situ environmental remediation. However, a lack of quantitative risk assessment has hampered the development of appropriate testing methods used in environmental applications. Here we present a novel, empirical approach to assessing Fe(0) NPs-associated soil ecosystem health risk using the nematode Caenorhabditis elegans as a model organism. A Hill-based dose-response model describing the concentration-fertility inhibition relationships was constructed. A Weibull model was used to estimate thresholds as a guideline to protect C. elegans from infertility when exposed to waterborne or foodborne Fe(0) NPs. Finally, the risk metrics exceedance risk (ER) and risk quotient (RQ) of Fe(0) NPs at various depths and distances from remediation sites can then be predicted. We showed that under 50% risk probability (ER=0.5), the upper soil layer had the highest infertility risk (95% confidence interval: 13.18-57.40%). The margins of safety and acceptable criteria for soil ecosystem health for using Fe(0) NPs in field-scale applications were also recommended. Results showed that RQs are larger than 1 in all soil layers when setting a stricter threshold of ∼1.02 mg L(-1) of Fe(0) NPs. This C. elegans biomarker-based risk model affords new insights into the links between widespread use of Fe(0) NPs and environmental risk assessment and offers potential environmental implications of metal-based NPs for in situ remediation. Copyright © 2016 Elsevier B.V. All rights reserved.
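
    The two risk metrics named in this record have standard forms: the risk quotient compares a predicted concentration to an effect threshold (RQ > 1 flags concern), and the exceedance risk is the probability that an effect level is exceeded. A sketch with hypothetical numbers (the paper derives its thresholds from Hill and Weibull fits):

```python
# Risk quotient (RQ) and exceedance risk (ER) sketches.
# Concentrations and effect samples below are hypothetical.

def risk_quotient(predicted_conc, threshold_conc):
    """RQ = predicted concentration / threshold; RQ > 1 flags risk."""
    return predicted_conc / threshold_conc

def exceedance_risk(effect_samples, effect_level):
    """Fraction of Monte Carlo effect samples exceeding the given level."""
    return sum(e > effect_level for e in effect_samples) / len(effect_samples)

# Hypothetical predicted Fe(0) NP concentration vs. the strict threshold:
rq = risk_quotient(1.5, 1.02)                 # ~1.47, above the line
er = exceedance_risk([0.1, 0.3, 0.6, 0.7], 0.5)   # 2 of 4 samples exceed
```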

  14. Development of good modelling practice for physiologically based pharmacokinetic models for use in risk assessment: The first steps

    EPA Science Inventory

    The increasing use of tissue dosimetry estimated using pharmacokinetic models in chemical risk assessments in multiple countries necessitates the development of internationally recognized good modelling practices. These practices would facilitate sharing of models and model evaluation...

  15. Projections of preventable risks for cardiovascular disease in Canada to 2021: a microsimulation modelling approach

    PubMed Central

    Manuel, Douglas G.; Tuna, Meltem; Hennessy, Deirdre; Okhmatovskaia, Anya; Finès, Philippe; Tanuseputro, Peter; Tu, Jack V.; Flanagan, William

    2014-01-01

    Background Reductions in preventable risks associated with cardiovascular disease have contributed to a steady decrease in its incidence over the past 50 years in most developed countries. However, it is unclear whether this trend will continue. Our objective was to examine future risk by projecting trends in preventable risk factors in Canada to 2021. Methods We created a population-based microsimulation model using national data on births, deaths and migration; socioeconomic data; cardiovascular disease risk factors; and algorithms for changes in these risk factors (based on sociodemographic characteristics and previous cardiovascular disease risk). An initial population of 22.5 million people, representing the Canadian adult population in 2001, had 13 characteristics including the risk factors used in clinical risk prediction. There were 6.1 million potential exposure profiles for each person each year. Outcome measures included annual prevalence of risk factors (smoking, obesity, diabetes, hypertension and lipid levels) and of co-occurring risks. Results From 2003 to 2009, the projected risks of cardiovascular disease based on the microsimulation model closely approximated those based on national surveys. Except for obesity and diabetes, all risk factors were projected to decrease through to 2021. The largest projected decreases were for the prevalence of smoking (from 25.7% in 2001 to 17.7% in 2021) and uncontrolled hypertension (from 16.1% to 10.8%). Between 2015 and 2017, obesity was projected to surpass smoking as the most prevalent risk factor. Interpretation Risks of cardiovascular disease are projected to decrease modestly in Canada, leading to a likely continuing decline in its incidence. PMID:25077135
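
    The microsimulation idea in this record — carry each simulated person forward year by year, updating risk factors probabilistically — can be given in miniature. The transition rates, cohort size, and single-factor scope below are toy values for illustration only, not the model's calibrated algorithms.

```python
# Toy microsimulation of one risk factor (smoking): each person quits or
# starts each year with fixed probabilities. Rates are hypothetical.

import random

def project_smoking(population, quit_prob, start_prob, years, seed=1):
    """population: list of bools (True = smoker); returns final prevalence."""
    rng = random.Random(seed)
    people = list(population)
    for _ in range(years):
        for i, smokes in enumerate(people):
            if smokes and rng.random() < quit_prob:
                people[i] = False          # smoker quits this year
            elif not smokes and rng.random() < start_prob:
                people[i] = True           # non-smoker starts this year
    return sum(people) / len(people)

cohort = [True] * 257 + [False] * 743       # 25.7% baseline prevalence
prevalence_2021 = project_smoking(cohort, quit_prob=0.04,
                                  start_prob=0.01, years=20)
```

With quitting outpacing initiation, prevalence drifts downward toward its equilibrium, mirroring the projected decline in smoking.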

  16. Integrating Household Risk Mitigation Behavior in Flood Risk Analysis: An Agent-Based Model Approach.

    PubMed

    Haer, Toon; Botzen, W J Wouter; de Moel, Hans; Aerts, Jeroen C J H

    2017-10-01

    Recent studies showed that climate change and socioeconomic trends are expected to increase flood risks in many regions. However, in these studies, human behavior is commonly assumed to be constant, which neglects interaction and feedback loops between human and environmental systems. This neglect of human adaptation leads to a misrepresentation of flood risk. This article presents an agent-based model that incorporates human decision making in flood risk analysis. In particular, household investments in loss-reducing measures are examined under three economic decision models: (1) expected utility theory, which is the traditional economic model of rational agents; (2) prospect theory, which takes account of bounded rationality; and (3) a prospect theory model that accounts for changing risk perceptions and social interactions through a process of Bayesian updating. We show that neglecting human behavior in flood risk assessment studies can result in a considerable misestimation of future flood risk; in our case study, an overestimation by a factor of two. Furthermore, we show how behavior models can support flood risk analysis under different behavioral assumptions, illustrating the need to include the dynamic adaptive human behavior of, for instance, households, insurers, and governments. The method presented here provides a solid basis for exploring human behavior and the resulting flood risk with respect to low-probability/high-impact risks. © 2016 The Authors Risk Analysis published by Wiley Periodicals, Inc. on behalf of Society for Risk Analysis.
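
    The household investment step under decision models (1) and (2) can be sketched as follows. The flood probability, loss, mitigation cost, and weighting parameter gamma are illustrative choices, not the article's calibration; the weighting function is the standard Tversky-Kahneman form often used in prospect theory models.

```python
# Invest in a loss-reducing measure when the (possibly probability-
# weighted) expected avoided loss exceeds its cost. Numbers are toy values.

def prospect_weight(p, gamma=0.69):
    """Tversky-Kahneman probability weighting: overweights small p."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def invests(p_flood, loss, damage_reduction, cost, weighting=None):
    """True when perceived expected avoided loss beats the measure's cost."""
    p = weighting(p_flood) if weighting else p_flood
    return p * loss * damage_reduction > cost

# A rare flood (1%/yr) with a large potential loss:
rational = invests(0.01, 100_000, 0.4, 600)                            # False
bounded = invests(0.01, 100_000, 0.4, 600, weighting=prospect_weight)  # True
```

The expected-utility agent declines (avoided loss 400 < cost 600), while overweighting of the small flood probability flips the bounded-rationality agent's decision, the kind of behavioral divergence the agent-based model propagates into aggregate risk.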

  17. Stochastic Watershed Models for Risk Based Decision Making

    NASA Astrophysics Data System (ADS)

    Vogel, R. M.

    2017-12-01

    Over half a century ago, the Harvard Water Program introduced the field of operational or synthetic hydrology, providing stochastic streamflow models (SSMs) that could generate ensembles of synthetic streamflow traces useful for hydrologic risk management. The application of SSMs, based on streamflow observations alone, revolutionized water resources planning activities, yet has fallen out of favor due, in part, to their inability to account for the now nearly ubiquitous anthropogenic influences on streamflow. This commentary advances the modern equivalent of SSMs, termed 'stochastic watershed models' (SWMs), useful as input to nearly all modern risk-based water resource decision-making approaches. SWMs are deterministic watershed models implemented using stochastic meteorological series, model parameters and model errors to generate ensembles of streamflow traces that represent the variability in possible future streamflows. SWMs combine deterministic watershed models, which are ideally suited to accounting for anthropogenic influences, with recent developments in uncertainty analysis and principles of stochastic simulation.
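
    The SWM recipe — a deterministic watershed model driven by stochastic meteorological forcing plus model error to yield a streamflow ensemble — can be sketched in miniature. The linear runoff "model" and noise scales below are placeholders for illustration, not a real watershed model.

```python
# Stochastic watershed model sketch: deterministic runoff model +
# stochastic precipitation forcing + additive model error -> ensemble of
# streamflow traces. All parameters are toy placeholders.

import random

def runoff(precip, coeff=0.6):
    """Deterministic watershed model stand-in: linear runoff response."""
    return coeff * precip

def swm_ensemble(mean_precip, sd_precip, error_sd, n_traces, n_days, seed=7):
    rng = random.Random(seed)
    ensemble = []
    for _ in range(n_traces):
        trace = []
        for _ in range(n_days):
            p = max(0.0, rng.gauss(mean_precip, sd_precip))  # stochastic forcing
            q = runoff(p) + rng.gauss(0.0, error_sd)         # + model error
            trace.append(max(0.0, q))
        ensemble.append(trace)
    return ensemble

traces = swm_ensemble(5.0, 2.0, 0.5, n_traces=100, n_days=30)
```

Each trace is one plausible streamflow future; risk-based planning then operates on the ensemble rather than a single deterministic run.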

  18. Predictive Modeling of Risk Factors and Complications of Cataract Surgery

    PubMed Central

    Gaskin, Gregory L; Pershing, Suzann; Cole, Tyler S; Shah, Nigam H

    2016-01-01

    Purpose To quantify the relationship between aggregated preoperative risk factors and cataract surgery complications, and to build a model predicting outcomes at the individual level, given a constellation of demographic, baseline, preoperative, and intraoperative patient characteristics. Setting Stanford Hospital and Clinics between 1994 and 2013. Design Retrospective cohort study. Methods Patients age 40 or older who received cataract surgery between 1994 and 2013. Risk factors, complications, and demographic information were extracted from the Electronic Health Record (EHR), based on International Classification of Diseases, 9th edition (ICD-9) codes, Current Procedural Terminology (CPT) codes, drug prescription information, and text data mining using natural language processing. We used a bootstrapped least absolute shrinkage and selection operator (LASSO) model to identify highly predictive variables. We built random forest classifiers for each complication to create predictive models. Results Our data corroborated existing literature on postoperative complications, including the association of intraoperative complications, complex cataract surgery, black race, and/or prior eye surgery with an increased risk of any postoperative complication. We also found a number of other, less well-described risk factors, including systemic diabetes mellitus, young age (<60 years old), and hyperopia as risk factors for complex cataract surgery and intra- and postoperative complications. Our predictive models based on aggregated risk factors outperformed existing published models. Conclusions The constellations of risk factors and complications described here can guide new avenues of research and provide specific, personalized risk assessment for a patient considering cataract surgery. The predictive capacity of our models can enable risk stratification of patients, which has utility as a teaching tool as well as informing quality/value-based reimbursements. PMID:26692059

  19. Software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1993-01-01

    Strategies and tools for the testing, risk assessment and risk control of dependable software-based systems were developed. Part of this project consists of studies to enable the transfer of technology to industry, for example the risk management techniques for safety-conscious systems. Theoretical investigations of the Boolean and Relational Operator (BRO) testing strategy were conducted for condition-based testing. The Basic Graph Generation and Analysis tool (BGG) was extended to fully incorporate several variants of the BRO metric. Single- and multi-phase risk, coverage and time-based models are being developed to provide additional theoretical and empirical basis for estimation of the reliability and availability of large, highly dependable software. A model for software process and risk management was developed. The use of cause-effect graphing for software specification and validation was investigated. Lastly, advanced software fault-tolerance models were studied to provide alternatives and improvements in situations where simple software fault-tolerance strategies break down.

  20. Value-based formulas for purchasing. PEHP's designated service provider program: value-based purchasing through global fees.

    PubMed

    Emery, D W

    1997-01-01

    In many circles, managed care and capitation have become synonymous; unfortunately, the assumptions informing capitation are based on a flawed unidimensional model of risk. PEHP of Utah has rejected the unidimensional model and has therefore embraced a multidimensional model of risk that suggests that global fees are the optimal purchasing modality. A globally priced episode of care forms a natural unit of analysis that enhances purchasing clarity, allows providers to more efficiently focus on the Marginal Rate of Technical Substitution, and conforms to the multidimensional reality of risk. Most importantly, global fees simultaneously maximize patient choice and provider cost consciousness.

  1. Rise and Shock: Optimal Defibrillator Placement in a High-rise Building.

    PubMed

    Chan, Timothy C Y

    2017-01-01

    Out-of-hospital cardiac arrests (OHCA) in high-rise buildings experience lower survival and longer delays until paramedic arrival. Use of publicly accessible automated external defibrillators (AED) can improve survival, but "vertical" placement has not been studied. We aim to determine whether elevator-based or lobby-based AED placement results in a shorter vertical distance travelled ("response distance") to OHCAs in a high-rise building. We developed a model of a single-elevator, n-floor high-rise building. We calculated and compared the average distance from AED to floor of arrest for the two AED locations. We modeled OHCA occurrences using floor-specific Poisson processes, with rate λ1 on the ground floor and rate λ on any above-ground floor. The elevator was modeled with an override function enabling direct travel to the target floor; the elevator location upon override was modeled as a discrete uniform random variable. Calculations used the laws of probability. Elevator-based AED placement had a shorter average response distance if the number of floors (n) in the building exceeded three quarters of the ratio of ground-floor OHCA risk to above-ground-floor risk (λ1/λ) plus one half (n ≥ 3λ1/(4λ) + 0.5). Otherwise, a lobby-based AED had a shorter average response distance. If OHCA risk on each floor was equal, an elevator-based AED had a shorter average response distance. Elevator-based AEDs travel less vertical distance to OHCAs in tall buildings or those with uniform vertical risk, while lobby-based AEDs travel less vertical distance in buildings with substantial lobby, underground, and nearby street-level traffic and OHCA risk.
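
    The abstract's decision rule is explicit enough to compute directly. The example rate ratio below is hypothetical:

```python
# Decision rule from the abstract: elevator placement is preferred when
# n >= 3*lam1/(4*lam) + 0.5, where n is the number of floors, lam1 the
# ground-floor OHCA rate, and lam the per-floor above-ground rate.

def elevator_preferred(n_floors, lam_ground, lam_above):
    """True when elevator placement gives shorter average response distance."""
    return n_floors >= 3 * lam_ground / (4 * lam_above) + 0.5

# If the ground floor is 10x riskier than each above-ground floor, the
# threshold is 3*10/4 + 0.5 = 8 floors: elevator wins at 8+, lobby below.
tall = elevator_preferred(8, lam_ground=10, lam_above=1)    # True
short = elevator_preferred(7, lam_ground=10, lam_above=1)   # False
```

With equal risk on every floor the threshold is only 1.25 floors, matching the abstract's statement that an elevator-based AED is then always preferred.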

  2. Risk evaluation of highway engineering project based on the fuzzy-AHP

    NASA Astrophysics Data System (ADS)

    Yang, Qian; Wei, Yajun

    2011-10-01

    Engineering projects are social activities that integrate technology, economy, management and organization. There are uncertainties in each aspect of an engineering project, and risk management urgently needs to be strengthened. Based on an analysis of the characteristics of highway engineering and a study of the basic theory of risk evaluation, the paper builds an index system for highway project risk evaluation. In addition, drawing on fuzzy mathematics, the analytic hierarchy process was applied, and as a result a comprehensive fuzzy-AHP appraisal model was set up for the risk evaluation of expressway concession projects. The validity and practicability of the risk evaluation were verified by applying the model to an actual project.
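
    The AHP half of such a fuzzy-AHP model derives criterion weights from a pairwise comparison matrix; the geometric-mean method shown below is one common way to do this. The 3x3 matrix (e.g. construction vs. financial vs. policy risk) is hypothetical, and a full fuzzy-AHP would replace the crisp entries with fuzzy numbers.

```python
# AHP criterion weights by the geometric-mean (row geometric mean) method.
# The pairwise comparison matrix is a hypothetical example.

from math import prod

def ahp_weights(matrix):
    """Normalized geometric-mean weights of a pairwise comparison matrix."""
    n = len(matrix)
    geo_means = [prod(row) ** (1.0 / n) for row in matrix]
    total = sum(geo_means)
    return [g / total for g in geo_means]

comparison = [
    [1,     3,     5],    # construction risk vs financial, policy risk
    [1 / 3, 1,     3],
    [1 / 5, 1 / 3, 1],
]
weights = ahp_weights(comparison)    # ~[0.64, 0.26, 0.10]
```

The resulting weights then feed the fuzzy comprehensive evaluation step that scores the project's overall risk.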

  3. Risk-based decision making for terrorism applications.

    PubMed

    Dillon, Robin L; Liebe, Robert M; Bestafka, Thomas

    2009-03-01

    This article describes the anti-terrorism risk-based decision aid (ARDA), a risk-based decision-making approach for prioritizing anti-terrorism measures. The ARDA model was developed as part of a larger effort to assess investments for protecting U.S. Navy assets at risk and determine whether the most effective anti-terrorism alternatives are being used to reduce the risk to the facilities and war-fighting assets. With ARDA and some support from subject matter experts, we examine thousands of scenarios composed of 15 attack modes against 160 facility types on two installations and hundreds of portfolios of 22 mitigation alternatives. ARDA uses multiattribute utility theory to solve some of the commonly identified challenges in security risk analysis. This article describes the process and documents lessons learned from applying the ARDA model for this application.

  4. Health-based risk adjustment: improving the pharmacy-based cost group model by adding diagnostic cost groups.

    PubMed

    Prinsze, Femmeke J; van Vliet, René C J A

    Since 1991, risk-adjusted premium subsidies have existed in the Dutch social health insurance sector, which covered about two-thirds of the population until 2006. In 2002, pharmacy-based cost groups (PCGs) were included in the demographic risk adjustment model, which improved the goodness-of-fit, as measured by the R2, to 11.5%. The model's R2 reached 22.8% in 2004, when inpatient diagnostic information was added in the form of diagnostic cost groups (DCGs). PCGs and DCGs appear to be complementary in their ability to predict future costs. PCGs particularly improve the R2 for outpatient expenses, whereas DCGs improve the R2 for inpatient expenses. In 2006, this system of risk-adjusted premium subsidies was extended to cover the entire population.

  5. Capturing ecology in modeling approaches applied to environmental risk assessment of endocrine active chemicals in fish.

    PubMed

    Mintram, Kate S; Brown, A Ross; Maynard, Samuel K; Thorbek, Pernille; Tyler, Charles R

    2018-02-01

    Endocrine active chemicals (EACs) are widespread in freshwater environments, and both laboratory and field-based studies have shown reproductive effects in fish at environmentally relevant exposures. Environmental risk assessment (ERA) seeks to protect wildlife populations, and prospective assessments rely on extrapolation from individual-level effects established for laboratory fish species to populations of wild fish using arbitrary safety factors. Population susceptibility to chemical effects, however, depends on exposure risk, physiological susceptibility, and population resilience, each of which can differ widely between fish species. Population models have significant potential to address these shortfalls and to include individual variability relating to life-history traits, demographic and density-dependent vital rates, and behaviors which arise from inter-organism and organism-environment interactions. Confidence in population models has recently resulted in the EU Commission stating that results derived from reliable models may be considered when assessing the relevance of adverse effects of EACs at the population level. This review critically assesses the potential risks posed by EACs for fish populations, considers the ecological factors influencing these risks and explores the benefits and challenges of applying population modeling (including individual-based modeling) in ERA for EACs in fish. We conclude that population modeling offers a way forward for incorporating greater environmental relevance in assessing the risks of EACs for fishes and for identifying key risk factors through sensitivity analysis. Individual-based models (IBMs) allow for the incorporation of physiological and behavioral endpoints relevant to EAC exposure effects, thus capturing both direct and indirect population-level effects.

  6. Integrating Professional and Folk Models of HIV Risk: YMSM's Perceptions of High-Risk Sex

    ERIC Educational Resources Information Center

    Kubicek, Katrina; Carpineto, Julie; McDavitt, Bryce; Weiss, George; Iverson, Ellen F.; Au, Chi-Wai; Kerrone, Dustin; Martinez, Miguel; Kipke, Michele D.

    2008-01-01

    Risks associated with HIV are well documented in research literature. Although a great deal has been written about high-risk sex, little research has been conducted to examine how young men who have sex with men (YMSM) perceive and define high-risk sexual behavior. In this study, we compare the "professional" and "folk" models of HIV risk based on…

  7. Functional correlation approach to operational risk in banking organizations

    NASA Astrophysics Data System (ADS)

    Kühn, Reimer; Neu, Peter

    2003-05-01

    A Value-at-Risk-based model is proposed to compute the adequate equity capital necessary to cover potential losses due to operational risks, such as human and system process failures, in banking organizations. Exploring the analogy to a lattice gas model from physics, correlations between sequential failures are modeled as functionally defined, heterogeneous couplings between mutually supportive processes. In contrast to traditional risk models for market and credit risk, where correlations are described as equal-time correlations by a covariance matrix, the dynamics of the model shows collective phenomena such as bursts and avalanches of process failures.
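
    The qualitative mechanism — a process that has lost support from failed neighbors becomes likelier to fail next period, so failures can cluster into avalanches — can be shown with a toy simulation. The mean-field coupling, rates, and process count below are illustrative stand-ins, not the paper's calibrated lattice-gas model.

```python
# Toy avalanche dynamics: each process fails with a base rate that is
# raised by the current fraction of failed (supporting) processes.
# All parameters are illustrative.

import random

def simulate_failures(n_processes, base_p, coupling, n_steps, seed=3):
    rng = random.Random(seed)
    failed = [False] * n_processes
    daily_counts = []
    for _ in range(n_steps):
        support_lost = sum(failed) / n_processes          # mean-field coupling
        p_fail = min(1.0, base_p + coupling * support_lost)
        failed = [rng.random() < p_fail for _ in range(n_processes)]
        daily_counts.append(sum(failed))
    return daily_counts

counts = simulate_failures(50, base_p=0.02, coupling=0.5, n_steps=200)
```

Unlike an independent-failure model, the coupled dynamics produces occasional bursts where one day's failures seed a larger cluster the next.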

  8. Assessment of credit risk based on fuzzy relations

    NASA Astrophysics Data System (ADS)

    Tsabadze, Teimuraz

    2017-06-01

    The purpose of this paper is to develop a new approach for assessing the credit risk of corporate borrowers. There are different models for borrowers' risk assessment, divided into two groups: statistical and theoretical. When assessing the credit risk of corporate borrowers, a statistical model is unacceptable due to the lack of a sufficiently large history of defaults. At the same time, some theoretical models cannot be used due to the lack of a stock exchange listing. In those cases, when studying a particular borrower for which no statistical base exists, the decision-making process is always of an expert nature. The paper describes a new approach that may be used in group decision-making. An example of the application of the proposed approach is given.

  9. An integrated simulation and optimization approach for managing human health risks of atmospheric pollutants by coal-fired power plants.

    PubMed

    Dai, C; Cai, X H; Cai, Y P; Guo, H C; Sun, W; Tan, Q; Huang, G H

    2014-06-01

This research developed a simulation-aided nonlinear programming model (SNPM). This model incorporated pollutant dispersion modeling, the management of coal blending, and the related human health risks within a general modeling framework. In SNPM, the simulation effort (i.e., California puff [CALPUFF]) was used to forecast the fate of air pollutants for quantifying the health risk under various conditions, while the optimization component was used to identify the optimal coal blending strategies from a number of alternatives. To solve the model, a surrogate-based indirect search approach was proposed, where support vector regression (SVR) was used to create a set of easy-to-use and rapid-response surrogates for identifying the functional relationships between coal-blending operating conditions and health risks. By replacing CALPUFF and the corresponding hazard quotient equation with the surrogates, computation efficiency could be improved. The developed SNPM was applied to minimize the human health risk associated with air pollutants discharged from the Gaojing and Shijingshan power plants in the west of Beijing. Solution results indicated that it could be used for reducing the health risk of the public in the vicinity of the two power plants, identifying desired coal blending strategies for decision makers, and considering a proper balance between coal purchase cost and human health risk. A simulation-aided nonlinear programming model (SNPM) is developed. It integrates the advantages of CALPUFF and a nonlinear programming model. To solve the model, a surrogate-based indirect search approach based on the combination of support vector regression and a genetic algorithm is proposed. SNPM is applied to reduce the health risk caused by air pollutants discharged from the Gaojing and Shijingshan power plants in the west of Beijing.
Solution results indicate that it is useful for generating coal blending schemes, reducing the health risk of the public, and reflecting the trade-off between coal purchase cost and health risk.

  10. An Agent-Based Model of Evolving Community Flood Risk.

    PubMed

    Tonn, Gina L; Guikema, Seth D

    2018-06-01

    Although individual behavior plays a major role in community flood risk, traditional flood risk models generally do not capture information on how community policies and individual decisions impact the evolution of flood risk over time. The purpose of this study is to improve the understanding of the temporal aspects of flood risk through a combined analysis of the behavioral, engineering, and physical hazard aspects of flood risk. Additionally, the study aims to develop a new modeling approach for integrating behavior, policy, flood hazards, and engineering interventions. An agent-based model (ABM) is used to analyze the influence of flood protection measures, individual behavior, and the occurrence of floods and near-miss flood events on community flood risk. The ABM focuses on the following decisions and behaviors: dissemination of flood management information, installation of community flood protection, elevation of household mechanical equipment, and elevation of homes. The approach is place based, with a case study area in Fargo, North Dakota, but is focused on generalizable insights. Generally, community mitigation results in reduced future damage, and individual action, including mitigation and movement into and out of high-risk areas, can have a significant influence on community flood risk. The results of this study provide useful insights into the interplay between individual and community actions and how it affects the evolution of flood risk. This study lends insight into priorities for future work, including the development of more in-depth behavioral and decision rules at the individual and community level. © 2017 Society for Risk Analysis.
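The feedback loop this abstract describes, in which flood events trigger individual mitigation that in turn reduces future community damage, can be sketched as a minimal agent-based loop. All parameters and the elevation rule below are hypothetical placeholders, not the Fargo model's behavioral rules.

```python
import random

def run_flood_abm(n_households=100, years=50, flood_prob=0.04,
                  mitigate_prob=0.3, seed=1):
    """Minimal ABM loop: a flood damages every non-elevated household by
    one unit; after experiencing a flood, each non-elevated household
    elevates its home with probability mitigate_prob."""
    rng = random.Random(seed)
    elevated = [False] * n_households
    yearly_damage = []
    for _ in range(years):
        if rng.random() < flood_prob:
            yearly_damage.append(sum(1 for e in elevated if not e))
            elevated = [e or rng.random() < mitigate_prob for e in elevated]
        else:
            yearly_damage.append(0)
    return yearly_damage, sum(elevated)

damage, n_elevated = run_flood_abm()
```

Because mitigation only accumulates, damage per flood declines over the simulation, the simplest version of the "evolving community flood risk" the study models with far richer decision rules.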

  11. Prediction and Informative Risk Factor Selection of Bone Diseases.

    PubMed

    Li, Hui; Li, Xiaoyi; Ramanathan, Murali; Zhang, Aidong

    2015-01-01

With the booming of the healthcare industry and the overwhelming amount of electronic health records (EHRs) shared by healthcare institutions and practitioners, we take advantage of EHR data to develop an effective disease risk management model that not only models the progression of the disease, but also predicts the risk of the disease for early disease control or prevention. Existing models for answering these questions usually fall into two categories: expert-knowledge-based models or handcrafted-feature-set-based models. To fully utilize the whole EHR data, we build a framework to construct an integrated representation of features from all available risk factors in the EHR data and use these integrated features to effectively predict osteoporosis and bone fractures. We also develop a framework for informative risk factor selection of bone diseases. A pair of models for two contrasting cohorts (e.g., diseased patients versus non-diseased patients) is established to discriminate their characteristics and find the most informative risk factors. Several empirical results on a real bone disease data set show that the proposed framework can successfully predict bone diseases and select informative risk factors that are beneficial and useful to guide clinical decisions.

  12. Development and External Validation of a Melanoma Risk Prediction Model Based on Self-assessed Risk Factors.

    PubMed

    Vuong, Kylie; Armstrong, Bruce K; Weiderpass, Elisabete; Lund, Eiliv; Adami, Hans-Olov; Veierod, Marit B; Barrett, Jennifer H; Davies, John R; Bishop, D Timothy; Whiteman, David C; Olsen, Catherine M; Hopper, John L; Mann, Graham J; Cust, Anne E; McGeechan, Kevin

    2016-08-01

Identifying individuals at high risk of melanoma can optimize primary and secondary prevention strategies. To develop and externally validate a risk prediction model for incident first-primary cutaneous melanoma using self-assessed risk factors. We used unconditional logistic regression to develop a multivariable risk prediction model. Relative risk estimates from the model were combined with Australian melanoma incidence and competing mortality rates to obtain absolute risk estimates. A risk prediction model was developed using the Australian Melanoma Family Study (629 cases and 535 controls) and externally validated using 4 independent population-based studies: the Western Australia Melanoma Study (511 case-control pairs), Leeds Melanoma Case-Control Study (960 cases and 513 controls), Epigene-QSkin Study (44 544 participants, of whom 766 had melanoma), and Swedish Women's Lifestyle and Health Cohort Study (49 259 women, of whom 273 had melanoma). We validated model performance internally and externally by assessing discrimination using the area under the receiver operating characteristic curve (AUC). Additionally, using the Swedish Women's Lifestyle and Health Cohort Study, we assessed model calibration and clinical usefulness. The risk prediction model included hair color, nevus density, first-degree family history of melanoma, previous nonmelanoma skin cancer, and lifetime sunbed use. On internal validation, the AUC was 0.70 (95% CI, 0.67-0.73). On external validation, the AUC was 0.66 (95% CI, 0.63-0.69) in the Western Australia Melanoma Study, 0.67 (95% CI, 0.65-0.70) in the Leeds Melanoma Case-Control Study, 0.64 (95% CI, 0.62-0.66) in the Epigene-QSkin Study, and 0.63 (95% CI, 0.60-0.67) in the Swedish Women's Lifestyle and Health Cohort Study. Model calibration showed close agreement between predicted and observed numbers of incident melanomas across all deciles of predicted risk.
In the external validation setting, there was higher net benefit when using the risk prediction model to classify individuals as high risk compared with classifying all individuals as high risk. The melanoma risk prediction model performs well and may be useful in prevention interventions reliant on a risk assessment using self-assessed risk factors.
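The step this abstract mentions of combining relative risk estimates with incidence and competing mortality rates to obtain absolute risks can be sketched under the simplest assumption of constant hazards. The numbers below are illustrative placeholders, not values from the study.

```python
import math

def absolute_risk(rel_risk, baseline_incidence, other_cause_mortality, years):
    """Cumulative probability of developing the disease within `years`,
    treating death from other causes as a competing risk and assuming
    constant hazards throughout the horizon."""
    h_d = rel_risk * baseline_incidence      # disease hazard for this person
    h = h_d + other_cause_mortality          # total hazard of exiting at-risk state
    # probability that the disease event occurs first within the horizon
    return (h_d / h) * (1.0 - math.exp(-h * years))

# illustrative inputs: relative risk 2.5, baseline incidence 5e-4/yr,
# competing mortality 1e-2/yr, 10-year horizon
risk_10y = absolute_risk(2.5, 5e-4, 1e-2, 10)
```

The competing-mortality term matters most at older ages, where deaths from other causes absorb a larger share of the cohort before a melanoma can occur.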

  13. INCORPORATING NONCHEMICAL STRESSORS INTO CUMMULATIVE RISK ASSESSMENTS

    EPA Science Inventory

    The risk assessment paradigm has begun to shift from assessing single chemicals using "reasonable worst case" assumptions for individuals to considering multiple chemicals and community-based models. Inherent in community-based risk assessment is examination of all stressors a...

  14. Approaches for the Application of Physiologically Based Pharmacokinetic (PBPK) Models and Supporting Data in Risk Assessment (Final Report)

    EPA Science Inventory

    EPA released the final report, Approaches for the Application of Physiologically Based Pharmacokinetic (PBPK) Models and Supporting Data in Risk Assessment as announced in a September 22 2006 Federal Register Notice. This final report addresses the application and evaluati...

  15. The robust corrective action priority-an improved approach for selecting competing corrective actions in FMEA based on principle of robust design

    NASA Astrophysics Data System (ADS)

    Sutrisno, Agung; Gunawan, Indra; Vanany, Iwan

    2017-11-01

In spite of being an integral part of risk-based quality improvement efforts, studies improving the quality of corrective action selection using the FMEA technique are still limited in the literature, and none considers robustness and risk in selecting competing improvement initiatives. This study proposes a theoretical model for selecting among competing risk-based corrective actions by considering their robustness and risk. We incorporated the principle of robust design in computing the preference score among corrective action candidates, alongside the cost and benefit of the competing corrective actions. An example is provided to demonstrate the applicability of the proposed model.
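A minimal numeric sketch of the idea: the classic FMEA risk priority number flags the failure mode, and a preference score then ranks competing corrective actions. The scoring formula and weights below are invented for illustration and are not the paper's model.

```python
def rpn(severity, occurrence, detection):
    """Classic FMEA risk priority number (each factor typically scored 1-10)."""
    return severity * occurrence * detection

def preference_score(benefit, cost, risk, robustness):
    """Toy robustness-adjusted preference for a corrective action: reward
    benefit and robustness, penalize cost and residual risk. The functional
    form is a hypothetical stand-in for the paper's robust-design scoring."""
    return (benefit * robustness) / (cost * risk)

failure_mode_rpn = rpn(severity=7, occurrence=4, detection=5)

# two hypothetical competing corrective actions for the same failure mode
actions = {
    "redesign fixture": preference_score(benefit=8, cost=4, risk=2, robustness=0.9),
    "add inspection":   preference_score(benefit=5, cost=2, risk=3, robustness=0.6),
}
best_action = max(actions, key=actions.get)
```

The point of folding robustness into the score is that a cheaper action can lose to a costlier one whose benefit is less sensitive to noise in the process.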

  16. A decision model to estimate a risk threshold for venous thromboembolism prophylaxis in hospitalized medical patients.

    PubMed

    Le, P; Martinez, K A; Pappas, M A; Rothberg, M B

    2017-06-01

Essentials Low-risk patients do not require venous thromboembolism (VTE) prophylaxis, but "low risk" has not been quantified. We used a Markov model to estimate the risk threshold for VTE prophylaxis in medical inpatients. Prophylaxis was cost-effective for an average medical patient with a VTE risk of ≥ 1.0%. VTE prophylaxis can be personalized based on patient risk and age/life expectancy. Background Venous thromboembolism (VTE) is a common preventable condition in medical inpatients. Thromboprophylaxis is recommended for inpatients who are not at low risk of VTE, but no specific risk threshold for prophylaxis has been defined. Objective To determine a threshold for prophylaxis based on risk of VTE. Patients/Methods We constructed a decision model with a decision tree following patients for 3 months after hospitalization, and a lifetime Markov model with 3-month cycles. The model tracked symptomatic deep vein thromboses and pulmonary emboli, bleeding events and heparin-induced thrombocytopenia. Long-term complications included recurrent VTE, post-thrombotic syndrome and pulmonary hypertension. For the base case, we considered medical inpatients aged 66 years, with a life expectancy of 13.5 years, a VTE risk of 1.4% and a bleeding risk of 2.7%. Patients received enoxaparin 40 mg day⁻¹ for prophylaxis. Results Assuming a willingness-to-pay (WTP) threshold of $100 000/quality-adjusted life year (QALY), prophylaxis was indicated for an average medical inpatient with a VTE risk of ≥ 1.0% up to 3 months after hospitalization. For the average patient, prophylaxis was not indicated when the bleeding risk was > 8.1%, the patient's age was > 73.4 years or the cost of enoxaparin exceeded $60/dose. If VTE risk was < 0.26% or bleeding risk was > 19%, the risks of prophylaxis outweighed the benefits. The prophylaxis threshold was relatively insensitive to low-molecular-weight heparin cost and bleeding risk, but very sensitive to patient age and life expectancy.
Conclusions The decision to offer prophylaxis should be personalized based on patient VTE risk, age and life expectancy. At a WTP of $100 000/QALY, prophylaxis is not warranted for most patients with a 3-month VTE risk below 1.0%. © 2017 International Society on Thrombosis and Haemostasis.
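The threshold logic in this abstract, where prophylaxis becomes cost-effective once baseline VTE risk is high enough, can be sketched as a one-cycle incremental cost-effectiveness calculation. Every input value below is an illustrative placeholder, not a parameter from the published Markov model.

```python
def icer_prophylaxis(vte_risk, rrr=0.5, extra_bleed_risk=0.005,
                     qaly_loss_vte=0.25, qaly_loss_bleed=0.10,
                     drug_cost=200.0, vte_cost=10_000.0, bleed_cost=5_000.0):
    """Incremental cost per QALY gained by prophylaxis versus none.
    rrr is the assumed relative risk reduction of VTE from prophylaxis;
    extra_bleed_risk is the added probability of a major bleed."""
    # QALYs gained: VTEs averted minus harm from extra bleeds
    d_qaly = vte_risk * rrr * qaly_loss_vte - extra_bleed_risk * qaly_loss_bleed
    # net cost: drug cost minus VTE treatment averted plus bleed treatment added
    d_cost = drug_cost - vte_risk * rrr * vte_cost + extra_bleed_risk * bleed_cost
    return d_cost / d_qaly if d_qaly > 0 else float("inf")

# the ratio falls as baseline VTE risk rises, producing a risk threshold
# below which prophylaxis is not cost-effective at a given WTP
low_risk_icer = icer_prophylaxis(0.005)
high_risk_icer = icer_prophylaxis(0.03)
```

Sweeping `vte_risk` against a WTP line is the simplest analogue of how a decision model locates a personalized prophylaxis threshold.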

  17. Epidemiology and Long-term Clinical and Biologic Risk Factors for Pneumonia in Community-Dwelling Older Americans

    PubMed Central

    Alvarez, Karina; Loehr, Laura; Folsom, Aaron R.; Newman, Anne B.; Weissfeld, Lisa A.; Wunderink, Richard G.; Kritchevsky, Stephen B.; Mukamal, Kenneth J.; London, Stephanie J.; Harris, Tamara B.; Bauer, Doug C.; Angus, Derek C.

    2013-01-01

    Background: Preventing pneumonia requires better understanding of incidence, mortality, and long-term clinical and biologic risk factors, particularly in younger individuals. Methods: This was a cohort study in three population-based cohorts of community-dwelling individuals. A derivation cohort (n = 16,260) was used to determine incidence and survival and develop a risk prediction model. The prediction model was validated in two cohorts (n = 8,495). The primary outcome was 10-year risk of pneumonia hospitalization. Results: The crude and age-adjusted incidences of pneumonia were 6.71 and 9.43 cases/1,000 person-years (10-year risk was 6.15%). The 30-day and 1-year mortality were 16.5% and 31.5%. Although age was the most important risk factor (range of crude incidence rates, 1.69-39.13 cases/1,000 person-years for each 5-year increment from 45-85 years), 38% of pneumonia cases occurred in adults < 65 years of age. The 30-day and 1-year mortality were 12.5% and 25.7% in those < 65 years of age. Although most comorbidities were associated with higher risk of pneumonia, reduced lung function was the most important risk factor (relative risk = 6.61 for severe reduction based on FEV1 by spirometry). A clinical risk prediction model based on age, smoking, and lung function predicted 10-year risk (area under curve [AUC] = 0.77 and Hosmer-Lemeshow [HL] C statistic = 0.12). Model discrimination and calibration were similar in the internal validation cohort (AUC = 0.77; HL C statistic, 0.65) but lower in the external validation cohort (AUC = 0.62; HL C statistic, 0.45). The model also calibrated well in blacks and younger adults. C-reactive protein and IL-6 were associated with higher pneumonia risk but did not improve model performance. Conclusions: Pneumonia hospitalization is common and associated with high mortality, even in younger healthy adults. Long-term risk of pneumonia can be predicted in community-dwelling adults with a simple clinical risk prediction model. 
PMID:23744106
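A clinical risk score of the kind this abstract validates, built on age, smoking, and lung function, is typically a logistic model. The sketch below uses that standard form with invented coefficients for illustration; they are not the fitted values from the study.

```python
import math

def pneumonia_risk(age, smoker, fev1_pct, coef=(-6.0, 0.05, 0.7, -0.02)):
    """Logistic risk score in the spirit of an age/smoking/lung-function
    model. coef = (intercept, per-year age, smoker indicator, per-point
    FEV1 % predicted); these coefficients are hypothetical, not fitted."""
    b0, b_age, b_smoke, b_fev1 = coef
    z = b0 + b_age * age + b_smoke * (1 if smoker else 0) + b_fev1 * fev1_pct
    return 1.0 / (1.0 + math.exp(-z))   # logistic link

r_young = pneumonia_risk(45, smoker=False, fev1_pct=100)
r_older = pneumonia_risk(75, smoker=True, fev1_pct=60)
```

Discrimination (the AUC the abstract reports) then measures how often such a score ranks an eventual case above a non-case.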

  18. Research Review: Two Pathways toward impulsive action: an integrative risk model for bulimic behavior in youth

    PubMed Central

    Pearson, Carolyn M.; Riley, Elizabeth N.; Davis, Heather A.; Smith, Gregory T.

    2014-01-01

Background This paper provides an integrative review of existing risk factors and models for bulimia nervosa (BN) in young girls. We offer a new model for BN that describes two pathways of risk that may lead to the initial impulsive act of binge eating and purging in children and adolescents. Scope We conducted a selective literature review, focusing on existing and new risk processes for BN in this select population. Findings We identify two ways in which girls increase their risk of beginning to engage in the impulsive behavior of binge eating and purging. The first is state-based: the experience of negative mood, in girls attempting to restrain eating, leads to the depletion of self-control and thus increased risk for loss-of-control eating. The second is personality-based: elevations on the trait of negative urgency, or the tendency to act rashly when distressed, increase risk, particularly in conjunction with high-risk psychosocial learning. We then briefly discuss how these behaviors are reinforced, putting girls at further risk for developing BN. Conclusions We highlight several areas in which further inquiry is necessary, and we discuss the clinical implications of the new risk model we described. PMID:24673546

  19. Malaria Disease Mapping in Malaysia based on Besag-York-Mollie (BYM) Model

    NASA Astrophysics Data System (ADS)

    Azah Samat, Nor; Mey, Liew Wan

    2017-09-01

Disease mapping is the visual representation of the geographical distribution of disease incidence within a population, derived from spatial epidemiological data. The resulting maps help in monitoring and planning resource needs at all levels of health care and in designing appropriate interventions, tailored towards areas that deserve closer scrutiny or communities that warrant further investigation to identify important risk factors. The choice of statistical model used for relative risk estimation is therefore important, because the production of a disease risk map relies on the model used. This paper proposes the Besag-York-Mollie (BYM) model to estimate the relative risk of Malaria in Malaysia. The analysis used the number of Malaria cases obtained from the Ministry of Health Malaysia. The outcomes of the analysis are displayed through graphs and maps, including a Malaria disease risk map constructed from the estimated relative risks. The high- and low-risk areas of Malaria occurrence for all states in Malaysia can be identified in the risk map.
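The raw input to a disease-mapping model like BYM is the standardized morbidity ratio per area, which the model then smooths with spatially structured and unstructured random effects. A sketch of the raw ratio, using hypothetical counts rather than the Malaysian data:

```python
def smr(observed, population, overall_rate):
    """Standardized morbidity ratio: observed over expected cases.
    These raw ratios are what a BYM model subsequently smooths with
    spatial and unstructured random effects."""
    return observed / (population * overall_rate)

# hypothetical state-level counts, not actual Ministry of Health data
cases = {"StateA": 30, "StateB": 4}
pop = {"StateA": 1_000_000, "StateB": 400_000}
rate = sum(cases.values()) / sum(pop.values())      # country-wide rate
rel_risk = {s: smr(cases[s], pop[s], rate) for s in cases}
```

An SMR above 1 marks an area with more cases than its population size predicts; smoothing matters most for small-population areas, where raw SMRs are noisy.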

  20. A Probabilistic Model for Hydrokinetic Turbine Collision Risks: Exploring Impacts on Fish

    PubMed Central

    Hammar, Linus; Eggertsen, Linda; Andersson, Sandra; Ehnberg, Jimmy; Arvidsson, Rickard; Gullström, Martin; Molander, Sverker

    2015-01-01

A variety of hydrokinetic turbines are currently under development for power generation in rivers, tidal straits and ocean currents. Because some of these turbines are large, with rapidly moving rotor blades, the risk of collision with aquatic animals has been brought to attention. The behavior and fate of animals that approach such large hydrokinetic turbines have not yet been monitored in any detail. In this paper, we conduct a synthesis of the current knowledge and understanding of hydrokinetic turbine collision risks. The outcome is a generic fault-tree-based probabilistic model suitable for estimating population-level ecological risks. New video-based data on fish behavior in strong currents are provided and models describing fish avoidance behaviors are presented. The findings indicate low risk for small-sized fish. However, at large turbines (≥5 m), bigger fish seem to have a high probability of collision, mostly because rotor detection and avoidance is difficult in low visibility. Risks can therefore be substantial for vulnerable populations of large-sized fish, which thrive in strong currents. The suggested collision risk model can be applied to different turbine designs and at a variety of locations as a basis for case-specific risk assessments. The structure of the model facilitates successive model validation, refinement and application to other organism groups such as marine mammals. PMID:25730314

  1. A probabilistic model for hydrokinetic turbine collision risks: exploring impacts on fish.

    PubMed

    Hammar, Linus; Eggertsen, Linda; Andersson, Sandra; Ehnberg, Jimmy; Arvidsson, Rickard; Gullström, Martin; Molander, Sverker

    2015-01-01

A variety of hydrokinetic turbines are currently under development for power generation in rivers, tidal straits and ocean currents. Because some of these turbines are large, with rapidly moving rotor blades, the risk of collision with aquatic animals has been brought to attention. The behavior and fate of animals that approach such large hydrokinetic turbines have not yet been monitored in any detail. In this paper, we conduct a synthesis of the current knowledge and understanding of hydrokinetic turbine collision risks. The outcome is a generic fault-tree-based probabilistic model suitable for estimating population-level ecological risks. New video-based data on fish behavior in strong currents are provided and models describing fish avoidance behaviors are presented. The findings indicate low risk for small-sized fish. However, at large turbines (≥5 m), bigger fish seem to have a high probability of collision, mostly because rotor detection and avoidance is difficult in low visibility. Risks can therefore be substantial for vulnerable populations of large-sized fish, which thrive in strong currents. The suggested collision risk model can be applied to different turbine designs and at a variety of locations as a basis for case-specific risk assessments. The structure of the model facilitates successive model validation, refinement and application to other organism groups such as marine mammals.

  2. Stackelberg Game of Buyback Policy in Supply Chain with a Risk-Averse Retailer and a Risk-Averse Supplier Based on CVaR

    PubMed Central

    Zhou, Yanju; Chen, Qian; Chen, Xiaohong; Wang, Zongrun

    2014-01-01

This paper considers a decentralized supply chain in which a single supplier sells a perishable product to a single retailer facing uncertain demand. We assume that the supplier and the retailer are both risk averse and use Conditional Value at Risk (CVaR), a risk measure popularized in financial risk management, to model their risk attitudes. We establish a buyback policy model based on Stackelberg game theory that accounts for supply chain members' risk preferences, and derive expressions for the supplier's optimal repurchase price and the retailer's optimal order quantity, which are compared with those of the risk-neutral case. Finally, a numerical example is used to simulate the model and verify the related conclusions. PMID:25247605

  3. Stackelberg game of buyback policy in supply chain with a risk-averse retailer and a risk-averse supplier based on CVaR.

    PubMed

    Zhou, Yanju; Chen, Qian; Chen, Xiaohong; Wang, Zongrun

    2014-01-01

This paper considers a decentralized supply chain in which a single supplier sells a perishable product to a single retailer facing uncertain demand. We assume that the supplier and the retailer are both risk averse and use Conditional Value at Risk (CVaR), a risk measure popularized in financial risk management, to model their risk attitudes. We establish a buyback policy model based on Stackelberg game theory that accounts for supply chain members' risk preferences, and derive expressions for the supplier's optimal repurchase price and the retailer's optimal order quantity, which are compared with those of the risk-neutral case. Finally, a numerical example is used to simulate the model and verify the related conclusions.
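The CVaR measure underlying both Stackelberg records can be sketched on an empirical loss sample: it is the average of the worst (1 − α) fraction of outcomes, which is why a risk-averse retailer orders less than a risk-neutral one. This is a generic discrete approximation, not the paper's analytic formulation.

```python
def cvar(losses, alpha=0.95):
    """Conditional Value at Risk: the average of the worst (1 - alpha)
    fraction of losses in an empirical sample (discrete approximation)."""
    xs = sorted(losses)
    k = max(1, round((1 - alpha) * len(xs)))   # size of the loss tail
    tail = xs[-k:]                             # the k largest losses
    return sum(tail) / len(tail)

sample_losses = [1, 2, 3, 4, 5, 6, 7, 8, 9, 100]
tail_risk = cvar(sample_losses, alpha=0.80)
```

Unlike plain Value at Risk, which only reports the tail's cutoff point, CVaR is sensitive to how bad the outcomes beyond the cutoff actually are.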

  4. A GIS-based approach for comparative analysis of potential fire risk assessment

    NASA Astrophysics Data System (ADS)

    Sun, Ying; Hu, Lieqiu; Liu, Huiping

    2007-06-01

Urban fires are one of the most important sources of property loss and human casualty, and it is therefore necessary to assess potential fire risk with consideration of urban community safety. Two evaluation models are proposed, both integrated with GIS. One is a single-factor model concerning the accessibility of fire passages; the other is a grey clustering approach based on a multifactor system. In the latter model, fourteen factors are introduced and divided into four categories covering security management, evacuation facilities, construction resistance and firefighting capability. A case study on the campus of Beijing Normal University is presented to illustrate the potential risk assessment models in detail. A comparative analysis of the two models is carried out to validate their accuracy; the results are approximately consistent with each other. Moreover, modeling with GIS improves the efficiency of potential fire risk assessment.

  5. Train integrity detection risk analysis based on PRISM

    NASA Astrophysics Data System (ADS)

    Wen, Yuan

    2018-04-01

A GNSS-based Train Integrity Monitoring System (TIMS) is an effective and low-cost detection scheme for train integrity detection. However, as an external auxiliary system of CTCS, GNSS may be influenced by external environments, such as uncertainty of wireless communication channels, which may lead to failures of communication and positioning. In order to guarantee the reliability and safety of train operation, a risk analysis method for train integrity detection based on PRISM is proposed in this article. First, we analyze the risk factors (in the GNSS communication process and the on-board communication process) and model them. Then, we evaluate the performance of the model in PRISM based on field data. Finally, we discuss how these risk factors influence the train integrity detection process.

  6. Mechanistic modeling of insecticide risks to breeding birds in North American agroecosystems

    PubMed Central

    Garber, Kristina; Odenkirchen, Edward

    2017-01-01

Insecticide usage in the United States is ubiquitous in urban, suburban, and rural environments. There is accumulating evidence that insecticides adversely affect non-target wildlife species, including birds, causing mortality, reproductive impairment, and indirect effects through loss of prey base, and the type and magnitude of such effects differ by chemical class, or mode of action. In evaluating data for an insecticide registration application and for registration review, scientists at the United States Environmental Protection Agency (USEPA) assess the fate of the insecticide and the risk the insecticide poses to the environment and non-target wildlife. Current USEPA risk assessments for pesticides generally rely on endpoints from laboratory-based toxicity studies focused on groups of individuals and do not directly assess population-level endpoints. In this paper, we present a mechanistic model, which allows risk assessors to estimate the effects of insecticide exposure on the survival and seasonal productivity of birds known to forage in agricultural fields during their breeding season. This model relies on individual-based toxicity data and translates effects into endpoints meaningful at the population level (i.e., magnitude of mortality and reproductive impairment). The model was created from two existing USEPA avian risk assessment models, the Terrestrial Investigation Model (TIM v.3.0) and the Markov Chain Nest Productivity model (MCnest). The integrated TIM/MCnest model was used to assess the relative risk of 12 insecticides applied via aerial spray to control corn pests on a suite of 31 avian species known to forage in cornfields in agroecosystems of the Midwest, USA. We found extensive differences in risk to birds among insecticides, with chlorpyrifos and malathion (organophosphates) generally posing the greatest risk, and bifenthrin and λ-cyhalothrin (pyrethroids) posing the least risk.
Comparative sensitivity analysis across the 31 species showed that ecological trait parameters related to the timing of breeding and reproductive output per nest attempt offered the greatest explanatory power for predicting the magnitude of risk. An important advantage of TIM/MCnest is that it allows risk assessors to rationally combine both acute (lethal) and chronic (reproductive) effects into a single unified measure of risk. PMID:28467479

  7. Mechanistic modeling of insecticide risks to breeding birds in North American agroecosystems.

    PubMed

    Etterson, Matthew; Garber, Kristina; Odenkirchen, Edward

    2017-01-01

Insecticide usage in the United States is ubiquitous in urban, suburban, and rural environments. There is accumulating evidence that insecticides adversely affect non-target wildlife species, including birds, causing mortality, reproductive impairment, and indirect effects through loss of prey base, and the type and magnitude of such effects differ by chemical class, or mode of action. In evaluating data for an insecticide registration application and for registration review, scientists at the United States Environmental Protection Agency (USEPA) assess the fate of the insecticide and the risk the insecticide poses to the environment and non-target wildlife. Current USEPA risk assessments for pesticides generally rely on endpoints from laboratory-based toxicity studies focused on groups of individuals and do not directly assess population-level endpoints. In this paper, we present a mechanistic model, which allows risk assessors to estimate the effects of insecticide exposure on the survival and seasonal productivity of birds known to forage in agricultural fields during their breeding season. This model relies on individual-based toxicity data and translates effects into endpoints meaningful at the population level (i.e., magnitude of mortality and reproductive impairment). The model was created from two existing USEPA avian risk assessment models, the Terrestrial Investigation Model (TIM v.3.0) and the Markov Chain Nest Productivity model (MCnest). The integrated TIM/MCnest model was used to assess the relative risk of 12 insecticides applied via aerial spray to control corn pests on a suite of 31 avian species known to forage in cornfields in agroecosystems of the Midwest, USA. We found extensive differences in risk to birds among insecticides, with chlorpyrifos and malathion (organophosphates) generally posing the greatest risk, and bifenthrin and λ-cyhalothrin (pyrethroids) posing the least risk.
Comparative sensitivity analysis across the 31 species showed that ecological trait parameters related to the timing of breeding and reproductive output per nest attempt offered the greatest explanatory power for predicting the magnitude of risk. An important advantage of TIM/MCnest is that it allows risk assessors to rationally combine both acute (lethal) and chronic (reproductive) effects into a single unified measure of risk.

  8. Disease mapping based on stochastic SIR-SI model for Dengue and Chikungunya in Malaysia

    NASA Astrophysics Data System (ADS)

    Samat, N. A.; Ma'arof, S. H. Mohd Imam

    2014-12-01

This paper describes and demonstrates a method for relative risk estimation based on the stochastic SIR-SI vector-borne infectious disease transmission model, specifically for Dengue and Chikungunya in Malaysia. Firstly, the common compartmental model for vector-borne infectious disease transmission, called the SIR-SI model (susceptible-infective-recovered for human populations; susceptible-infective for vector populations), is presented. This is followed by an explanation of the stochastic SIR-SI model, which involves a Bayesian description. This stochastic model is then used in the relative risk formulation in order to obtain the posterior relative risk estimates. This relative risk estimation model is demonstrated using Dengue and Chikungunya data from Malaysia. The viruses of these diseases are transmitted by the same types of female vector mosquito, Aedes aegypti and Aedes albopictus. Finally, the findings of the analysis of relative risk estimation for both Dengue and Chikungunya are presented, compared and displayed in graphs and maps. The risk maps show the high- and low-risk areas of Dengue and Chikungunya occurrence, and can be used as a tool for prevention and control strategies for both diseases.

  9. Disease mapping based on stochastic SIR-SI model for Dengue and Chikungunya in Malaysia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samat, N. A.; Ma'arof, S. H. Mohd Imam

    This paper describes and demonstrates a method for relative risk estimation based on the stochastic SIR-SI vector-borne infectious disease transmission model, applied specifically to Dengue and Chikungunya in Malaysia. Firstly, the common compartmental model for vector-borne infectious disease transmission, the SIR-SI model (susceptible-infective-recovered for the human population; susceptible-infective for the vector population), is presented. This is followed by an explanation of the stochastic SIR-SI model, which involves a Bayesian formulation. This stochastic model is then used in the relative risk formulation to obtain posterior relative risk estimates. The estimation method is then demonstrated using Dengue and Chikungunya data for Malaysia. The viruses of these diseases are transmitted by the same female vector mosquitoes, Aedes aegypti and Aedes albopictus. Finally, the relative risk estimates for both Dengue and Chikungunya are presented, compared and displayed in graphs and maps. The risk maps show the high- and low-risk areas for Dengue and Chikungunya occurrence and can be used as a tool for prevention and control strategies for both diseases.

  10. A coupled physical and economic model of the response of coastal real estate to climate risk

    NASA Astrophysics Data System (ADS)

    McNamara, Dylan E.; Keeler, Andrew

    2013-06-01

    Barring an unprecedented large-scale effort to raise island elevation, barrier-island communities common along the US East Coast are likely to eventually face inundation of the existing built environment on a timescale that depends on uncertain climatic forcing. Between the present and when a combination of sea-level rise and erosion renders these areas uninhabitable, communities must choose levels of defensive expenditures to reduce risks and individual residents must assess whether and when risk levels are unacceptably high to justify investment in housing. We model the dynamics of coastal adaptation as the interplay of underlying climatic risks, collective actions to mitigate those risks, and individual risk assessments based on beliefs in model predictions and processing of past climate events. Efforts linking physical and behavioural models to explore shoreline dynamics have not yet brought together this set of essential factors. We couple a barrier-island model with an agent-based model of real-estate markets to show that, relative to people with low belief in model predictions about climate change, informed property owners invest heavily in defensive expenditures in the near term and then abandon coastal real estate at some critical risk threshold that presages a period of significant price volatility.
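
A highly simplified sketch of the behavioural rule the abstract describes: an informed owner keeps funding defensive measures while perceived risk stays below some critical threshold, and abandons the property once it is crossed. The threshold value is invented for illustration.

```python
def owner_action(perceived_risk, threshold=0.6):
    """Stylized agent rule: defend below the critical risk, abandon above it."""
    return "abandon" if perceived_risk >= threshold else "defend"
```

The paper's model couples many such agents to a physical barrier-island model, so the perceived risk itself evolves with sea-level rise, erosion, and belief in model predictions.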

  11. A New Perspective on Modeling Groundwater-Driven Health Risk With Subjective Information

    NASA Astrophysics Data System (ADS)

    Ozbek, M. M.

    2003-12-01

    Fuzzy rule-based systems provide an efficient environment for the modeling of expert information in the context of risk management for groundwater contamination problems. In general, their use, in the form of conditional pieces of knowledge, has been either as a tool for synthesizing control laws from data (i.e., conjunction-based models), or in a knowledge representation and reasoning perspective in Artificial Intelligence (i.e., implication-based models), where only the latter may lead to coherence problems (e.g., input data that leads to logical inconsistency when added to the knowledge base). We implement a two-fold extension to an implication-based groundwater risk model (Ozbek and Pinder, 2002) including: 1) the implementation of sufficient conditions for a coherent knowledge base, and 2) the interpolation of expert statements to supplement gaps in knowledge. The original model assumes statements of public health professionals for the characterization of the exposed individual and the relation of dose and pattern of exposure to its carcinogenic effects. We demonstrate the utility of the extended model in that it: 1) identifies inconsistent statements and establishes coherence in the knowledge base, and 2) minimizes the burden of knowledge elicitation from the experts by utilizing existing knowledge in an optimal fashion.
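
A toy sketch of the coherence issue mentioned above: implication-based rules act as constraints on the output, and a knowledge base is incoherent for a given input when the applicable constraints have an empty intersection. The interval representation here is a deliberate simplification of fuzzy implication semantics.

```python
def coherent(constraints):
    """True if the interval constraints [lo, hi] share a common point."""
    lo = max(c[0] for c in constraints)
    hi = min(c[1] for c in constraints)
    return lo <= hi
```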

  12. Utility of genetic and non-genetic risk factors in predicting coronary heart disease in Singaporean Chinese.

    PubMed

    Chang, Xuling; Salim, Agus; Dorajoo, Rajkumar; Han, Yi; Khor, Chiea-Chuen; van Dam, Rob M; Yuan, Jian-Min; Koh, Woon-Puay; Liu, Jianjun; Goh, Daniel Yt; Wang, Xu; Teo, Yik-Ying; Friedlander, Yechiel; Heng, Chew-Kiat

    2017-01-01

    Background Although numerous phenotype-based equations for predicting risk of 'hard' coronary heart disease are available, data on the utility of genetic information for such risk prediction is lacking in Chinese populations. Design Case-control study nested within the Singapore Chinese Health Study. Methods A total of 1306 subjects comprising 836 men (267 incident cases and 569 controls) and 470 women (128 incident cases and 342 controls) were included. A Genetic Risk Score comprising 156 single nucleotide polymorphisms that have been robustly associated with coronary heart disease or its risk factors (p < 5 × 10⁻⁸) in at least two independent cohorts of genome-wide association studies was built. For each gender, three base models were used: recalibrated Adult Treatment Panel III (ATPIII) Model (M1); ATPIII model fitted using Singapore Chinese Health Study data (M2); and M3: M2 + C-reactive protein + creatinine. Results The Genetic Risk Score was significantly associated with incident 'hard' coronary heart disease (p for men: 1.70 × 10⁻¹⁰ to 1.73 × 10⁻⁹; p for women: 0.001). The inclusion of the Genetic Risk Score in the prediction models improved discrimination in both genders (c-statistics: 0.706-0.722 vs. 0.663-0.695 from base models for men; 0.788-0.790 vs. 0.765-0.773 for women). In addition, the inclusion of the Genetic Risk Score also improved risk classification with a net gain of cases being reclassified to higher risk categories (men: 12.4%-16.5%; women: 10.2% (M3)), while not significantly reducing the classification accuracy in controls. Conclusions The Genetic Risk Score is an independent predictor for incident 'hard' coronary heart disease in our ethnic Chinese population. Inclusion of genetic factors into coronary heart disease prediction models could significantly improve risk prediction performance.
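
A hedged sketch of a weighted Genetic Risk Score of the kind described above: a sum of risk-allele counts (0/1/2 per SNP) weighted by effect sizes. The SNP identifiers and weights below are invented for illustration, not the study's 156 variants.

```python
def genetic_risk_score(allele_counts, weights):
    """Weighted GRS: sum over SNPs of (risk-allele count x effect size)."""
    return sum(weights[snp] * count for snp, count in allele_counts.items())

weights = {"rs0001": 0.12, "rs0002": 0.08, "rs0003": 0.20}   # hypothetical betas
person = {"rs0001": 2, "rs0002": 1, "rs0003": 0}             # genotype dosages
score = genetic_risk_score(person, weights)
```

In practice such a score is added as one extra covariate to each base model (M1-M3 in the study) and its incremental value is judged by the change in the c-statistic and in reclassification.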

  13. A simulation framework for mapping risks in clinical processes: the case of in-patient transfers.

    PubMed

    Dunn, Adam G; Ong, Mei-Sing; Westbrook, Johanna I; Magrabi, Farah; Coiera, Enrico; Wobcke, Wayne

    2011-05-01

    To model how individual violations in routine clinical processes cumulatively contribute to the risk of adverse events in hospital using an agent-based simulation framework. An agent-based simulation was designed to model the cascade of common violations that contribute to the risk of adverse events in routine clinical processes. Clinicians and the information systems that support them were represented as a group of interacting agents using data from direct observations. The model was calibrated using data from 101 patient transfers observed in a hospital and results were validated for one of two scenarios (a misidentification scenario and an infection control scenario). Repeated simulations using the calibrated model were undertaken to create a distribution of possible process outcomes. The likelihood of end-of-chain risk is the main outcome measure, reported for each of the two scenarios. The simulations demonstrate end-of-chain risks of 8% and 24% for the misidentification and infection control scenarios, respectively. Over 95% of the simulations in both scenarios are unique, indicating that the in-patient transfer process diverges from prescribed work practices in a variety of ways. The simulation allowed us to model the risk of adverse events in a clinical process, by generating the variety of possible work subject to violations, a novel prospective risk analysis method. The in-patient transfer process has a high proportion of unique trajectories, implying that risk mitigation may benefit from focusing on reducing complexity rather than augmenting the process with further rule-based protocols.
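
An illustrative Monte Carlo sketch of the cascade idea: each step of a clinical process may be violated with some probability, and the end-of-chain risk is estimated as the fraction of simulated transfers in which the whole violation chain completes. The probabilities and the "all steps violated" criterion are invented simplifications, not the paper's calibrated model.

```python
import random

def simulate_transfer(step_violation_probs, rng):
    """True if every step in the chain is violated (risk reaches the patient)."""
    return all(rng.random() < p for p in step_violation_probs)

def end_of_chain_risk(step_violation_probs, n_runs=10_000, seed=1):
    """Estimate end-of-chain risk by repeated simulation of the process."""
    rng = random.Random(seed)
    hits = sum(simulate_transfer(step_violation_probs, rng) for _ in range(n_runs))
    return hits / n_runs
```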

  14. Novel risk score of contrast-induced nephropathy after percutaneous coronary intervention.

    PubMed

    Ji, Ling; Su, XiaoFeng; Qin, Wei; Mi, XuHua; Liu, Fei; Tang, XiaoHong; Li, Zi; Yang, LiChuan

    2015-08-01

    Contrast-induced nephropathy (CIN) post-percutaneous coronary intervention (PCI) is a major cause of acute kidney injury. In this study, we established a comprehensive risk score model to assess risk of CIN after the PCI procedure, which could be easily used in a clinical environment. A total of 805 PCI patients, divided into an analysis cohort (70%) and a validation cohort (30%), were enrolled retrospectively in this study. Risk factors for CIN were identified using univariate analysis and multivariate logistic regression in the analysis cohort. A risk score model was developed based on the multiple regression coefficients. Sensitivity and specificity of the new risk score system were validated in the validation cohort, and comparisons between the new risk score model and previously reported models were applied. The incidence of post-PCI CIN in the analysis cohort (n = 565) was 12%. Considerably higher CIN incidence (50%) was observed in patients with chronic kidney disease (CKD). Age >75, body mass index (BMI) >25, myoglobin level, cardiac function level, hypoalbuminaemia, history of CKD, intra-aortic balloon pump (IABP) and peripheral vascular disease (PVD) were identified as independent risk factors of post-PCI CIN. A novel risk score model was established using the multivariate regression coefficients, which showed the highest sensitivity and specificity (0.917, 95%CI 0.877-0.957) compared with previous models. A new post-PCI CIN risk score model was developed based on a retrospective study of 805 patients. Application of this model might be helpful to predict CIN in patients undergoing the PCI procedure. © 2015 Asian Pacific Society of Nephrology.
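
A sketch of how multivariate logistic-regression coefficients, as used in the CIN study, translate into a predicted probability: each binary risk factor contributes its coefficient to the linear predictor, which is passed through the logistic function. The coefficients and intercept below are hypothetical, not the fitted values.

```python
import math

def cin_probability(factors, coefs, intercept):
    """Predicted probability from a logistic model: 1 / (1 + exp(-linear))."""
    linear = intercept + sum(coefs[k] * v for k, v in factors.items())
    return 1.0 / (1.0 + math.exp(-linear))

coefs = {"age_gt_75": 0.9, "bmi_gt_25": 0.5, "ckd_history": 1.6}  # invented
patient = {"age_gt_75": 1, "bmi_gt_25": 1, "ckd_history": 1}
p = cin_probability(patient, coefs, intercept=-3.0)
```

A bedside "points" score is typically obtained by rounding each coefficient to an integer weight, trading a little accuracy for ease of use.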

  15. Developing a Risk-scoring Model for Ankylosing Spondylitis Based on a Combination of HLA-B27, Single-nucleotide Polymorphism, and Copy Number Variant Markers.

    PubMed

    Jung, Seung-Hyun; Cho, Sung-Min; Yim, Seon-Hee; Kim, So-Hee; Park, Hyeon-Chun; Cho, Mi-La; Shim, Seung-Cheol; Kim, Tae-Hwan; Park, Sung-Hwan; Chung, Yeun-Jun

    2016-12-01

    To develop a genotype-based ankylosing spondylitis (AS) risk prediction model that is more sensitive and specific than HLA-B27 typing. To develop the AS genetic risk scoring (AS-GRS) model, 648 individuals (285 cases and 363 controls) were examined for 5 copy number variants (CNV), 7 single-nucleotide polymorphisms (SNP), and an HLA-B27 marker by TaqMan assays. The AS-GRS model was developed using logistic regression and validated with a larger independent set (576 cases and 680 controls). Through logistic regression, we built the AS-GRS model consisting of 5 genetic components: HLA-B27, 3 CNV (1q32.2, 13q13.1, and 16p13.3), and 1 SNP (rs10865331). All significant associations of genetic factors in the model were replicated in the independent validation set. The discriminative ability of the AS-GRS model measured by the area under the curve was excellent: 0.976 (95% CI 0.96-0.99) in the model construction set and 0.951 (95% CI 0.94-0.96) in the validation set. The AS-GRS model showed higher specificity and accuracy than the HLA-B27-only model when the sensitivity was set to over 94%. When we categorized the individuals into quartiles based on the AS-GRS scores, OR of the 4 groups (low, intermediate-1, intermediate-2, and high risk) showed an increasing trend with the AS-GRS scores (r² = 0.950) and the highest risk group showed a 494× higher risk of AS than the lowest risk group (95% CI 237.3-1029.1). Our AS-GRS could be used to identify individuals at high risk for AS before major symptoms appear, which may improve the prognosis for them through early treatment.

  16. An RES-Based Model for Risk Assessment and Prediction of Backbreak in Bench Blasting

    NASA Astrophysics Data System (ADS)

    Faramarzi, F.; Ebrahimi Farsangi, M. A.; Mansouri, H.

    2013-07-01

    Most blasting operations are associated with various forms of energy loss that emerge as environmental side effects of rock blasting, such as flyrock, vibration, airblast, and backbreak. Backbreak is an adverse phenomenon in rock blasting operations, which imposes risk and increases operating expenses because of reduced safety due to the instability of walls, poor fragmentation, and uneven burden in subsequent blasts. In this paper, based on the basic concepts of a rock engineering systems (RES) approach, a new model for the prediction of backbreak and of the risk associated with a blast is presented. The newly suggested model involves 16 parameters affecting backbreak due to blasting, while retaining simplicity. The data for 30 blasts, carried out at Sungun copper mine, western Iran, were used to predict backbreak and the level of risk corresponding to each blast by the RES-based model. The results obtained were compared with the backbreak measured for each blast, showing that the computed risk levels are consistent with the measured backbreak. The maximum level of risk [vulnerability index (VI) = 60] was associated with blast No. 2, for which the corresponding average backbreak was the highest achieved (9.25 m). Also, for blasts with levels of risk under 40, the minimum average backbreaks (<4 m) were observed. Furthermore, to evaluate the model performance for backbreak prediction, the coefficient of correlation (R²) and root mean square error (RMSE) of the model were calculated (R² = 0.8; RMSE = 1.07), indicating the good performance of the model.
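
A rough sketch of a vulnerability index in the RES spirit: each parameter receives a weight derived from the interaction matrix and a site rating, and the weighted, normalized ratings are scaled to a 0-100 index. The formula is one common RES form, and the weights, ratings, and rating scale below are placeholders rather than the paper's 16 parameters.

```python
def vulnerability_index(weights, ratings, max_rating=4):
    """VI = 100 * sum(a_i * Q_i / Q_max) / sum(a_i), a common RES form."""
    total_w = sum(weights)
    weighted = sum(a * q / max_rating for a, q in zip(weights, ratings))
    return 100.0 * weighted / total_w
```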

  17. The role of building models in the evaluation of heat-related risks

    NASA Astrophysics Data System (ADS)

    Buchin, Oliver; Jänicke, Britta; Meier, Fred; Scherer, Dieter; Ziegler, Felix

    2016-04-01

    Hazard-risk relationships in epidemiological studies are generally based on the outdoor climate, despite the fact that most of humans' lifetime is spent indoors. By coupling indoor and outdoor climates with a building model, the risk concept developed can still be based on the outdoor conditions but also includes exposure to the indoor climate. The influence of non-linear building physics and the impact of air conditioning on heat-related risks can be assessed in a plausible manner using this risk concept. For proof of concept, the proposed risk concept is compared to a traditional risk analysis. As an example, daily and city-wide mortality data of the age group 65 and older in Berlin, Germany, for the years 2001-2010 are used. Four building models with differing complexity are applied in a time-series regression analysis. This study shows that indoor hazard better explains the variability in the risk data compared to outdoor hazard, depending on the kind of building model. Simplified parameter models include the main non-linear effects and are proposed for the time-series analysis. The concept shows that the definitions of heat events, lag days, and acclimatization in a traditional hazard-risk relationship are influenced by the characteristics of the prevailing building stock.
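
A hedged sketch of the simplest kind of "building model" in the spirit of the study: a first-order lag that maps an outdoor temperature series to an indoor one, so that indoor heat exposure (rather than outdoor hazard alone) can feed the time-series regression. The time constant and initial temperature are illustrative.

```python
def indoor_series(outdoor, tau=24.0, t0=20.0):
    """First-order response: dT_in/dt = (T_out - T_in) / tau, unit time steps."""
    t_in, series = t0, []
    for t_out in outdoor:
        t_in += (t_out - t_in) / tau   # indoor temperature lags outdoor
        series.append(t_in)
    return series
```

The lag both damps and delays outdoor heat peaks, which is why heat-event definitions and lag-day choices in a traditional outdoor-only analysis depend on the building stock.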

  18. Globally-Applicable Predictive Wildfire Model: A Temporal-Spatial GIS-Based Risk Analysis Using Data-Driven Fuzzy Logic Functions

    NASA Astrophysics Data System (ADS)

    van den Dool, G.

    2017-11-01

    This study (van den Dool, 2017) is a proof of concept for a global predictive wildfire model, in which the temporal-spatial characteristics of wildfires are placed in a Geographical Information System (GIS), and the risk analysis is based on data-driven fuzzy logic functions. The data sources used in this model are available as global datasets, but subdivided into three pilot areas: North America (California/Nevada), Europe (Spain), and Asia (Mongolia), and are downscaled to the highest resolution (3 arc seconds). The GIS is constructed around three themes: topography, fuel availability and climate. From the topographical data, six derived sub-themes are created and converted to a fuzzy membership based on the catchment area statistics. The fuel availability score is a composite of four data layers: land cover, wood loads, biomass, and biovolumes. As input for the climatological sub-model, reanalysed daily-averaged weather data are used, accumulated to a global weekly time window (to account for the uncertainty within the climatological model); this forms the temporal component of the model. The final product is a weekly wildfire risk score (from 0 to 1) representing the average wildfire risk in an area. To compute the potential wildfire risk, the sub-models are combined using a Multi-Criteria Approach, and the model results are validated against the area under the Receiver Operating Characteristic curve.
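
A sketch of the two ingredients described above: a data-driven fuzzy membership function (here a simple linear ramp) and a weighted multi-criteria overlay that combines the topography, fuel, and climate sub-model scores into a single 0-1 risk score. The breakpoints and weights are illustrative only.

```python
def linear_membership(x, low, high):
    """Ramp membership: 0 below `low`, 1 above `high`, linear in between."""
    if x <= low:
        return 0.0
    if x >= high:
        return 1.0
    return (x - low) / (high - low)

def wildfire_risk(scores, weights):
    """Weighted multi-criteria combination of sub-model scores (each in 0-1)."""
    return sum(weights[k] * scores[k] for k in scores) / sum(weights.values())
```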

  19. Study on quantitative risk assessment model of the third party damage for natural gas pipelines based on fuzzy comprehensive assessment

    NASA Astrophysics Data System (ADS)

    Qiu, Zeyang; Liang, Wei; Wang, Xue; Lin, Yang; Zhang, Meng

    2017-05-01

    As an important part of the national energy supply system, transmission pipelines for natural gas can cause serious environmental pollution and loss of life and property in case of accident. Third party damage is one of the most significant causes of natural gas pipeline system accidents, and it is very important to establish an effective quantitative risk assessment model of third party damage to reduce the number of gas pipeline operation accidents. Because third party damage accidents have characteristics such as diversity, complexity and uncertainty, this paper establishes a quantitative risk assessment model of third party damage based on the Analytic Hierarchy Process (AHP) and Fuzzy Comprehensive Evaluation (FCE). Firstly, the risk sources of third party damage are identified exactly; then the weights of the factors are determined via an improved AHP; finally, the importance of each factor is calculated with the fuzzy comprehensive evaluation model. The results show that the quantitative risk assessment model is suitable for third party damage to natural gas pipelines, and improvement measures can be put forward to avoid accidents based on the importance of each factor.
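
A minimal sketch of the AHP-plus-FCE pipeline described: factor weights are derived from a pairwise comparison matrix (here via the common row geometric-mean approximation of the principal eigenvector), then each factor's fuzzy membership vector over evaluation grades is aggregated with those weights. The 3×3 matrix and membership values are toy numbers, not the paper's third-party-damage factors.

```python
import math

def ahp_weights(matrix):
    """Approximate AHP priority vector via normalized row geometric means."""
    gms = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(gms)
    return [g / total for g in gms]

def fuzzy_evaluate(weights, memberships):
    """Weighted aggregation of each factor's membership vector (the FCE step)."""
    n_grades = len(memberships[0])
    return [sum(w * m[j] for w, m in zip(weights, memberships))
            for j in range(n_grades)]
```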

  20. A risk-model for hospital mortality among patients with severe sepsis or septic shock based on German national administrative claims data

    PubMed Central

    Fleischmann-Struzek, Carolin; Rüddel, Hendrik; Reinhart, Konrad; Thomas-Rüddel, Daniel O.

    2018-01-01

    Background Sepsis is a major cause of preventable deaths in hospitals. Feasible and valid methods for comparing quality of sepsis care between hospitals are needed. The aim of this study was to develop a risk-adjustment model suitable for comparing sepsis-related mortality between German hospitals. Methods We developed a risk-model using national German claims data. Since these data are available with a time-lag of 1.5 years only, the stability of the model across time was investigated. The model was derived from inpatient cases with severe sepsis or septic shock treated in 2013 using logistic regression with backward selection and generalized estimating equations to correct for clustering. It was validated among cases treated in 2015. Finally, the model development was repeated in 2015. To investigate secular changes, the risk-adjusted trajectory of mortality across the years 2010–2015 was analyzed. Results The 2013 derivation sample consisted of 113,750 cases; the 2015 validation sample consisted of 134,851 cases. The model developed in 2013 showed good validity regarding discrimination (AUC = 0.74), calibration (observed mortality in 1st and 10th risk-decile: 11%-78%), and fit (R² = 0.16). Validity remained stable when the model was applied to 2015 (AUC = 0.74, 1st and 10th risk-decile: 10%-77%, R² = 0.17). There was no indication of overfitting of the model. The final model developed in year 2015 contained 40 risk-factors. Between 2010 and 2015 hospital mortality in sepsis decreased from 48% to 42%. Adjusted for risk-factors the trajectory of decrease was still significant. Conclusions The risk-model shows good predictive validity and stability across time. The model is suitable to be used as an external algorithm for comparing risk-adjusted sepsis mortality among German hospitals or regions based on administrative claims data, but secular changes need to be taken into account when interpreting risk-adjusted mortality. PMID:29558486

  1. A risk-model for hospital mortality among patients with severe sepsis or septic shock based on German national administrative claims data.

    PubMed

    Schwarzkopf, Daniel; Fleischmann-Struzek, Carolin; Rüddel, Hendrik; Reinhart, Konrad; Thomas-Rüddel, Daniel O

    2018-01-01

    Sepsis is a major cause of preventable deaths in hospitals. Feasible and valid methods for comparing quality of sepsis care between hospitals are needed. The aim of this study was to develop a risk-adjustment model suitable for comparing sepsis-related mortality between German hospitals. We developed a risk-model using national German claims data. Since these data are available with a time-lag of 1.5 years only, the stability of the model across time was investigated. The model was derived from inpatient cases with severe sepsis or septic shock treated in 2013 using logistic regression with backward selection and generalized estimating equations to correct for clustering. It was validated among cases treated in 2015. Finally, the model development was repeated in 2015. To investigate secular changes, the risk-adjusted trajectory of mortality across the years 2010-2015 was analyzed. The 2013 derivation sample consisted of 113,750 cases; the 2015 validation sample consisted of 134,851 cases. The model developed in 2013 showed good validity regarding discrimination (AUC = 0.74), calibration (observed mortality in 1st and 10th risk-decile: 11%-78%), and fit (R² = 0.16). Validity remained stable when the model was applied to 2015 (AUC = 0.74, 1st and 10th risk-decile: 10%-77%, R² = 0.17). There was no indication of overfitting of the model. The final model developed in year 2015 contained 40 risk-factors. Between 2010 and 2015 hospital mortality in sepsis decreased from 48% to 42%. Adjusted for risk-factors the trajectory of decrease was still significant. The risk-model shows good predictive validity and stability across time. The model is suitable to be used as an external algorithm for comparing risk-adjusted sepsis mortality among German hospitals or regions based on administrative claims data, but secular changes need to be taken into account when interpreting risk-adjusted mortality.
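
A sketch of the decile calibration check the study reports: cases are sorted by predicted risk, cut into ten equal groups, and the observed mortality in the lowest and highest decile is compared with the predictions. The data used below are synthetic.

```python
def decile_observed_rates(preds, outcomes):
    """Observed event rate per predicted-risk decile, from low to high risk."""
    paired = sorted(zip(preds, outcomes))   # order cases by predicted risk
    n = len(paired)
    rates = []
    for d in range(10):
        chunk = paired[d * n // 10:(d + 1) * n // 10]
        rates.append(sum(y for _, y in chunk) / len(chunk))
    return rates
```

A well-calibrated model shows observed rates rising steadily across deciles and spanning a wide range, as in the 11%-78% spread reported above.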

  2. Urothelial cancer of the upper urinary tract: emerging biomarkers and integrative models for risk stratification.

    PubMed

    Mathieu, Romain; Vartolomei, Mihai D; Mbeutcha, Aurélie; Karakiewicz, Pierre I; Briganti, Alberto; Roupret, Morgan; Shariat, Shahrokh F

    2016-08-01

    The aim of this review was to provide an overview of current biomarkers and risk stratification models in urothelial cancer of the upper urinary tract (UTUC). A non-systematic Medline/PubMed literature search was performed using the terms "biomarkers", "preoperative models", "postoperative models", "risk stratification", together with "upper tract urothelial carcinoma". Original articles published between January 2003 and August 2015 were included based on their clinical relevance. Additional references were collected by cross-referencing the bibliography of the selected articles. Various promising predictive and prognostic biomarkers have been identified in UTUC thanks to increasing knowledge of the different biological pathways involved in UTUC tumorigenesis. These biomarkers may help identify tumors with aggressive biology and worse outcomes. Current tools aim at predicting muscle-invasive or non-organ-confined disease, renal failure after radical nephroureterectomy, and survival outcomes. These models are still mainly based on imaging and clinicopathological features, and none has integrated biomarkers. Risk stratification in UTUC is still suboptimal, especially in the preoperative setting, due to current limitations in staging and grading. Identification of novel biomarkers and external validation of current prognostic models may help improve risk stratification to allow evidence-based counselling for kidney-sparing approaches, perioperative chemotherapy and/or risk-based surveillance. Despite growing understanding of the biology underlying UTUC, management of this disease remains difficult due to the lack of validated biomarkers and the limitations of current predictive and prognostic tools. Further efforts and collaborations are necessary to allow their integration in daily practice.

  3. The Role of Inertia in Modeling Decisions from Experience with Instance-Based Learning

    PubMed Central

    Dutt, Varun; Gonzalez, Cleotilde

    2012-01-01

    One form of inertia is the tendency to repeat the last decision irrespective of the obtained outcomes while making decisions from experience (DFE). A number of computational models based upon the Instance-Based Learning Theory, a theory of DFE, have included different inertia implementations and have been shown to simultaneously account for both risk-taking and alternations between alternatives. The role that inertia plays in these models, however, is unclear, as the same model without inertia is also able to account for observed risk-taking quite well. This paper demonstrates the predictive benefits of incorporating one particular implementation of inertia in an existing IBL model. We use two large datasets, estimation and competition, from the Technion Prediction Tournament involving a repeated binary-choice task to show that incorporating an inertia mechanism in an IBL model enables it to account for the observed average risk-taking and alternations. Including inertia, however, does not help the model to account for the trends in risk-taking and alternations over trials compared to the IBL model without the inertia mechanism. We generalize the two IBL models, with and without inertia, to the competition set by using the parameters determined in the estimation set. The generalization process demonstrates both the advantages and disadvantages of including inertia in an IBL model. PMID:22685443

  4. The role of inertia in modeling decisions from experience with instance-based learning.

    PubMed

    Dutt, Varun; Gonzalez, Cleotilde

    2012-01-01

    One form of inertia is the tendency to repeat the last decision irrespective of the obtained outcomes while making decisions from experience (DFE). A number of computational models based upon the Instance-Based Learning Theory, a theory of DFE, have included different inertia implementations and have been shown to simultaneously account for both risk-taking and alternations between alternatives. The role that inertia plays in these models, however, is unclear, as the same model without inertia is also able to account for observed risk-taking quite well. This paper demonstrates the predictive benefits of incorporating one particular implementation of inertia in an existing IBL model. We use two large datasets, estimation and competition, from the Technion Prediction Tournament involving a repeated binary-choice task to show that incorporating an inertia mechanism in an IBL model enables it to account for the observed average risk-taking and alternations. Including inertia, however, does not help the model to account for the trends in risk-taking and alternations over trials compared to the IBL model without the inertia mechanism. We generalize the two IBL models, with and without inertia, to the competition set by using the parameters determined in the estimation set. The generalization process demonstrates both the advantages and disadvantages of including inertia in an IBL model.
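
A toy sketch of the inertia mechanism discussed in these two records: with some probability the agent simply repeats its last choice regardless of outcomes; otherwise it picks the option with the higher value estimate. In a full IBL model the values would come from blended memory of past instances; the probability here is purely illustrative.

```python
import random

def choose(values, last_choice, p_inertia, rng):
    """Return an option index; repeat `last_choice` with probability p_inertia."""
    if last_choice is not None and rng.random() < p_inertia:
        return last_choice            # inertia: repeat regardless of outcomes
    return max(range(len(values)), key=lambda i: values[i])
```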

  5. Evaluation of portfolio credit risk based on survival analysis for progressive censored data

    NASA Astrophysics Data System (ADS)

    Jaber, Jamil J.; Ismail, Noriszura; Ramli, Siti Norafidah Mohd

    2017-04-01

    In credit risk management, the Basel committee provides a choice of three approaches for financial institutions to calculate the required capital: the standardized approach, the Internal Ratings-Based (IRB) approach, and the Advanced IRB approach. The IRB approach is usually preferred over the standardized approach due to its higher accuracy and lower capital charges. This paper uses several parametric models (exponential, log-normal, gamma, Weibull, log-logistic, Gompertz) to evaluate the credit risk of the corporate portfolio in Jordanian banks, based on a monthly sample collected from January 2010 to December 2015. The best model is selected using several goodness-of-fit criteria (MSE, AIC, BIC). The results indicate that the Gompertz distribution is the best-fitting parametric model for the data.
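
A sketch of the model-selection step described: fit a candidate lifetime distribution and compare candidates by an information criterion such as AIC = 2k − 2·logL. The exponential MLE is closed-form, so it makes a compact illustration; the times below are synthetic, not the Jordanian portfolio data.

```python
import math

def exponential_loglik(times, rate):
    """Log-likelihood of i.i.d. exponential data with the given rate."""
    return sum(math.log(rate) - rate * t for t in times)

def fit_exponential(times):
    """Closed-form MLE for the exponential rate: n / sum(t)."""
    rate = len(times) / sum(times)
    return rate, exponential_loglik(times, rate)

def aic(loglik, k):
    """Akaike information criterion: lower is better."""
    return 2 * k - 2 * loglik
```

The other candidates (Weibull, Gompertz, etc.) need numerical MLE, but the comparison step is identical: fit each, then rank by AIC/BIC.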

  6. Risk Unbound: Threat, Catastrophe, and the End of Homeland Security

    DTIC Science & Technology

    2015-09-01

    Defense (DOD) models ) is now the prevalent model for developing plans.63 Capabilities- based within the national preparedness system is defined as...capabilities- based planning is the accounting for scenarios through organizational capability development , and the search for commonality and structure...of providing perfect security, and demonstrate the limitations of risk- based security practices. This thesis presents an argument in three parts

  7. Application of Bayesian networks in a hierarchical structure for environmental risk assessment: a case study of the Gabric Dam, Iran.

    PubMed

    Malekmohammadi, Bahram; Tayebzadeh Moghadam, Negar

    2018-04-13

    Environmental risk assessment (ERA) is a commonly used, effective tool applied to reduce the adverse effects of environmental risk factors. In this study, ERA was investigated using the Bayesian network (BN) model based on a hierarchical structure of variables in an influence diagram (ID). The ID facilitated ranking of the different alternatives under uncertainty, which were then used to compare the different risk factors. BN was used to present a new model for ERA applicable to complicated development projects such as dam construction. The methodology was applied to the Gabric Dam, in southern Iran. The main environmental risk factors in the region posed by the Gabric Dam were identified based on the Delphi technique and the specific features of the study area. These included the following: flood, water pollution, earthquake, changes in land use, erosion and sedimentation, effects on the population, and ecosensitivity. These risk factors were then categorized based on results from the output decision node of the BN, including expected utility values for the risk factors in the decision node. ERA was also performed for the Gabric Dam using the analytical hierarchy process (AHP) method to compare the results of BN modeling with those of conventional methods. The results showed that the BN-based hierarchical approach to ERA produces an acceptable and reasonable prioritization of risks, proposes suitable solutions to reduce environmental risks, and can be used as a powerful decision support system for evaluating environmental risks.
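
A minimal sketch of ranking by expected utility in an influence diagram, as the decision node above does: for each alternative, sum utility over outcome states weighted by their probabilities, then sort. The factor names, probabilities, and utilities are illustrative, not the Gabric Dam assessment.

```python
def expected_utility(outcome_probs, utilities):
    """EU of one alternative: sum of probability-weighted utilities."""
    return sum(outcome_probs[s] * utilities[s] for s in outcome_probs)

def rank_alternatives(alternatives):
    """Sort {name: (probs, utils)} by expected utility, highest first."""
    scored = {name: expected_utility(p, u) for name, (p, u) in alternatives.items()}
    return sorted(scored, key=scored.get, reverse=True)
```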

  8. Quantitative Method for Analyzing the Allocation of Risks in Transportation Construction

    DOT National Transportation Integrated Search

    1979-04-01

    The report presents a conceptual model of risk that was developed to analyze the impact on owner's cost of alternate allocations of risk among owner and contractor in mass transit construction. A model and analysis procedure are developed, based on d...

  9. Diagnosis-based Cost Groups in the Dutch Risk-equalization Model: Effects of Clustering Diagnoses and of Allowing Patients to be Classified into Multiple Risk-classes.

    PubMed

    Eijkenaar, Frank; van Vliet, René C J A; van Kleef, Richard C

    2018-01-01

    The risk-equalization (RE) model in the Dutch health insurance market has evolved into a sophisticated model containing direct proxies for health. However, it still has important imperfections, leaving incentives for risk selection. This paper focuses on refining an important health-based risk-adjuster in this model: the diagnosis-based cost groups (DCGs). The current (2017) DCGs are calibrated on "old" data of 2011/2012, are mutually exclusive, and are essentially clusters of about 200 diagnosis-groups ("dxgroups"). Hospital claims data (2013), administrative data (2014) on costs and risk-characteristics for the entire Dutch population (N≈16.9 million), and health survey data (2012, N≈387,000) are used. The survey data are used to identify subgroups of individuals in poor or in good health. The claims and administrative data are used to develop alternative DCG modalities to examine the impact on individual-level and group-level fit of recalibrating the DCGs based on new data, of allowing patients to be classified in multiple DCGs, and of refraining from clustering. Recalibrating the DCGs and allowing enrolees to be classified into multiple DCGs lead to nontrivial improvements in individual-level and group-level fit (especially for cancer patients and people with comorbid conditions). The improvement resulting from refraining from clustering does not seem to justify the increase in model complexity this would entail. The performance of the sophisticated Dutch RE-model can be improved by allowing classification in multiple (clustered) DCGs and using new data. Irrespective of the modality used, however, various subgroups remain significantly undercompensated. Further improvement of the RE-model merits high priority.

  10. Bridging the etiologic and prognostic outlooks in individualized assessment of absolute risk of an illness: application in lung cancer.

    PubMed

    Karp, Igor; Sylvestre, Marie-Pierre; Abrahamowicz, Michal; Leffondré, Karen; Siemiatycki, Jack

    2016-11-01

    Assessment of individual risk of illness is an important activity in preventive medicine. Development of risk-assessment models has heretofore relied predominantly on studies involving follow-up of cohort-type populations, while case-control studies have generally been considered unfit for this purpose. To present a method for individualized assessment of absolute risk of an illness (as illustrated by lung cancer) based on data from a 'non-nested' case-control study. We used data from a case-control study conducted in Montreal, Canada in 1996-2001. Individuals diagnosed with lung cancer (n = 920) and age- and sex-matched lung-cancer-free subjects (n = 1288) completed questionnaires documenting life-time cigarette-smoking history and occupational, medical, and family history. Unweighted and weighted logistic models were fitted. Model overfitting was assessed using bootstrap-based cross-validation and 'shrinkage.' The discriminating ability was assessed by the c-statistic, and the risk-stratifying performance was assessed by examination of the variability in risk estimates over hypothetical risk-profiles. In the logistic models, the logarithm of incidence-density of lung cancer was expressed as a function of age, sex, cigarette-smoking history, history of respiratory conditions and exposure to occupational carcinogens, and family history of lung cancer. The models entailed a minimal degree of overfitting ('shrinkage' factor: 0.97 for both unweighted and weighted models) and moderately high discriminating ability (c-statistic: 0.82 for the unweighted model and 0.66 for the weighted model). The method's risk-stratifying performance was quite high. The presented method allows for individualized assessment of risk of lung cancer and can be used for development of risk-assessment models for other illnesses.
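
    The c-statistic reported above is the probability that a randomly chosen case receives a higher predicted risk than a randomly chosen control. A minimal sketch with synthetic scores (not the study's fitted risks):

```python
# The c-statistic: the probability that a randomly chosen case receives a
# higher predicted risk than a randomly chosen control (ties count 1/2).
# The scores below are synthetic, not the study's fitted risks.

def c_statistic(case_scores, control_scores):
    concordant = 0.0
    for case in case_scores:
        for control in control_scores:
            if case > control:
                concordant += 1.0
            elif case == control:
                concordant += 0.5
    return concordant / (len(case_scores) * len(control_scores))

cases = [0.9, 0.8, 0.6, 0.55]    # predicted risks for cases
controls = [0.7, 0.4, 0.3, 0.2]  # predicted risks for controls
print(c_statistic(cases, controls))  # → 0.875
```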

  11. A Bayesian network model for predicting type 2 diabetes risk based on electronic health records

    NASA Astrophysics Data System (ADS)

    Xie, Jiang; Liu, Yan; Zeng, Xu; Zhang, Wu; Mei, Zhen

    2017-07-01

    An extensive, in-depth study of diabetes risk factors (DBRF) is of crucial importance to prevent (or reduce) the chance of suffering from type 2 diabetes (T2D). The accumulation of electronic health records (EHRs) makes it possible to model nonlinear relationships between risk factors and diabetes. However, current DBRF research mainly focuses on qualitative analyses, and inconsistency across physical examination items means that risk factors are easily missed, which motivates the study of novel machine learning approaches for risk model development. In this paper, we use Bayesian networks (BNs) to analyze the relationship between physical examination information and T2D, and to quantify the link between risk factors and T2D. Furthermore, with the quantitative analyses of DBRF, we adopt EHRs and propose a machine learning approach based on BNs to predict the risk of T2D. The experiments demonstrate that our approach can lead to better predictive performance than the classical risk model.

  12. Understanding the effects of different HIV transmission models in individual-based microsimulation of HIV epidemic dynamics in people who inject drugs

    PubMed Central

    MONTEIRO, J.F.G.; ESCUDERO, D.J.; WEINREB, C.; FLANIGAN, T.; GALEA, S.; FRIEDMAN, S.R.; MARSHALL, B.D.L.

    2017-01-01

    SUMMARY We investigated how different models of HIV transmission, and assumptions regarding the distribution of unprotected sex and syringe-sharing events (‘risk acts’), affect quantitative understanding of the HIV transmission process in people who inject drugs (PWID). The individual-based model simulated HIV transmission in a dynamic sexual and injecting network representing New York City. We constructed four HIV transmission models: model 1, constant probabilities; model 2, random numbers of sexual and parenteral acts; model 3, individually assigned viral load; and model 4, two groups of partnerships (low and high risk). Overall, models with less heterogeneity were more sensitive to changes in the numbers of risk acts, producing HIV incidence up to four times higher than that empirically observed. Although all models overestimated HIV incidence, microsimulations with greater heterogeneity in the HIV transmission modelling process produced more robust results and better reproduced empirical epidemic dynamics. PMID:26753627
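
    To illustrate why the amount of heterogeneity matters, note that under a constant per-act probability (as in model 1) the cumulative transmission risk over n independent risk acts is 1 − (1 − p)^n, which rises quickly with n. The per-act probabilities below are illustrative values, not the study's calibrated parameters.

```python
# Cumulative HIV transmission risk over repeated risk acts, assuming each
# act is independent with a fixed per-act probability (as in model 1).
# All per-act probabilities below are illustrative, not calibrated values.

def cumulative_risk(per_act_p, n_acts):
    return 1.0 - (1.0 - per_act_p) ** n_acts

# Constant-probability model: one probability for every act.
constant = cumulative_risk(0.002, 100)

# Two partnership groups (model 4 style): mostly low-risk acts, a few high-risk.
mix = 0.9 * cumulative_risk(0.0005, 100) + 0.1 * cumulative_risk(0.015, 100)

print(constant, mix)
```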

  13. People's Risk Recognition Preceding Evacuation and Its Role in Demand Modeling and Planning.

    PubMed

    Urata, Junji; Pel, Adam J

    2018-05-01

    Evacuation planning and management involves estimating the travel demand in the event that such action is required. This is usually done as a function of people's decision to evacuate, which we show is strongly linked to their risk awareness. We use an empirical data set on tsunami evacuation behavior to demonstrate that risk recognition is not synonymous with objective risk, but is instead determined by a combination of factors including risk education, information, and sociodemographics, and that it changes dynamically over time. Based on these findings, we formulate an ordered logit model to describe risk recognition combined with a latent class model to describe evacuation choices. Our proposed evacuation choice model, along with a risk recognition class, can quantitatively evaluate the influence of disaster mitigation measures, risk education, and risk information. The results obtained from the risk recognition model show that risk information has the greater impact in leading people to recognize that they are at high risk. The results of the evacuation choice model show that people who are unaware of their risk take longer to evacuate. © 2017 Society for Risk Analysis.

  14. APPROACHES FOR INCORPORATING NON-CHEMICAL STRESSORS INTO CUMULATIVE RISK ASSESSMENTS

    EPA Science Inventory

    Over the past twenty years, the risk assessment paradigm has gradually shifted from an individual chemical approach to a community-based model. Inherent in community-based risk assessment is consideration of the totality of stressors affecting a defined population including both ...

  15. Conceptual Framework for Trait-Based Ecological Risk Assessment for Wildlife Populations Exposed to Pesticides

    EPA Science Inventory

    Between screening level risk assessments and complex ecological models, a need exists for practical identification of risk based on general information about species, chemicals, and exposure scenarios. Several studies have identified demographic, biological, and toxicological fa...

  16. Wildfire risk and housing prices: a case study from Colorado Springs.

    Treesearch

    G.H. Donovan; P.A. Champ; D.T. Butry

    2007-01-01

    Unlike other natural hazards such as floods, hurricanes, and earthquakes, wildfire risk has not previously been examined using a hedonic property value model. In this article, we estimate a hedonic model based on parcel-level wildfire risk ratings from Colorado Springs. We found that providing homeowners with specific information about the wildfire risk rating of their...

  17. Environmental risk assessment of selected organic chemicals based on TOC test and QSAR estimation models.

    PubMed

    Chi, Yulang; Zhang, Huanteng; Huang, Qiansheng; Lin, Yi; Ye, Guozhu; Zhu, Huimin; Dong, Sijun

    2018-02-01

    Environmental risks of organic chemicals are largely determined by their persistence, bioaccumulation, and toxicity (PBT) and by their physicochemical properties. Major regulations in different countries and regions identify chemicals according to their bioconcentration factor (BCF) and octanol-water partition coefficient (Kow), which frequently displays a substantial correlation with the sediment sorption coefficient (Koc). Half-life, or degradability, is crucial for evaluating the persistence of chemicals. Quantitative structure-activity relationship (QSAR) estimation models are indispensable for predicting environmental fate and health effects in the absence of field- or laboratory-based data. In this study, 39 chemicals of high concern were chosen for half-life testing based on total organic carbon (TOC) degradation, and two widely accepted and highly used QSAR estimation models (i.e., EPI Suite and PBT Profiler) were adopted for environmental risk evaluation. The experimental results and estimated data, as well as the two model-based results, were compared with respect to water solubility, Kow, Koc, BCF and half-life. Environmental risk assessment of the selected compounds was achieved by combining experimental data and estimation models. It was concluded that both EPI Suite and PBT Profiler were fairly accurate in estimating the physicochemical properties and degradation half-lives for water, soil, and sediment. However, the experimental and estimated half-lives were still not fully consistent. This suggests deficiencies in the prediction models and the necessity of combining experimental data and predicted results when evaluating the environmental fate and risks of pollutants. Copyright © 2016. Published by Elsevier B.V.
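
    For the persistence component, a half-life can be estimated from two TOC measurements by assuming first-order decay, C_t = C_0·e^(−kt). A minimal sketch with invented concentrations:

```python
import math

# Estimating a degradation half-life from two TOC measurements, assuming
# first-order decay: C_t = C_0 * exp(-k * t). Concentrations are invented.

def half_life(c0, ct, days):
    k = math.log(c0 / ct) / days  # first-order rate constant (1/day)
    return math.log(2) / k        # t_1/2 = ln(2) / k

t_half = half_life(c0=100.0, ct=25.0, days=28)
print(round(t_half, 1))  # two half-lives elapsed in 28 days → 14.0
```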

  18. [Case finding in early prevention networks - a heuristic for ambulatory care settings].

    PubMed

    Barth, Michael; Belzer, Florian

    2016-06-01

    One goal of early prevention is to support families with small children up to three years of age who are exposed to psychosocial risks. The identification of these cases is often complex and not well directed, especially in the ambulatory care setting. The aim is to develop a feasible, empirically based strategy for case finding in ambulatory care. Based on the risk factors of postpartal depression, lack of maternal responsiveness, parental stress with regulation disorders, and poverty, a lexicographic and non-compensatory heuristic model with simple decision rules is constructed and empirically tested. To this end, the original data set from an evaluation of the pediatric documentation form on psychosocial issues of families with small children in well-child visits is reanalyzed. The first diagnostic step in the non-compensatory, hierarchical classification process is the assessment of postpartal depression, followed by maternal responsiveness, parental stress, and poverty. The classification model identifies 89.0 % of the cases from the original study. Compared to the original study, the decision process becomes clearer and more concise. The evidence-based, data-driven model exemplifies a strategy for the assessment of psychosocial risk factors in ambulatory care settings. It is based on four evidence-based risk factors and offers a quick and reliable classification. A further advantage of this model is that once a risk factor is identified, the diagnostic procedure stops and the counselling process can commence. For further validation of the model, studies in well-suited early prevention networks are needed.
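
    The heuristic described above can be sketched as a fixed-order, stop-at-first-hit rule. Factor names mirror the abstract; the input flags for a given family are hypothetical.

```python
# Lexicographic, non-compensatory case finding: check risk factors in a
# fixed order and stop at the first positive. Factor names mirror the
# abstract; the dictionary flags for a given family are hypothetical.

SCREENING_ORDER = [
    "postpartal_depression",
    "lack_of_maternal_responsiveness",
    "parental_stress_with_regulation_disorder",
    "poverty",
]

def classify(family):
    """Return the first positive risk factor, or None if screening passes."""
    for factor in SCREENING_ORDER:
        if family.get(factor, False):
            return factor  # stop here: counselling can commence
    return None

family = {"postpartal_depression": False,
          "parental_stress_with_regulation_disorder": True}
print(classify(family))  # → parental_stress_with_regulation_disorder
```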

  19. Validation of a new mortality risk prediction model for people 65 years and older in northwest Russia: The Crystal risk score.

    PubMed

    Turusheva, Anna; Frolova, Elena; Bert, Vaes; Hegendoerfer, Eralda; Degryse, Jean-Marie

    2017-07-01

    Prediction models help to make decisions about further management in clinical practice. This study aims to develop a mortality risk score based on previously identified risk predictors and to perform internal and external validations. In a population-based prospective cohort study of 611 community-dwelling individuals aged 65+ in St. Petersburg (Russia), all-cause mortality risks over 2.5 years of follow-up were determined based on the results obtained from anthropometry, medical history, physical performance tests, spirometry and laboratory tests. C-statistic, risk reclassification analysis, integrated discrimination improvement analysis, decision curve analysis, internal validation and external validation were performed. Older adults were at higher risk for mortality [HR (95%CI)=4.54 (3.73-5.52)] when two or more of the following components were present: poor physical performance, low muscle mass, poor lung function, and anemia. When high C-reactive protein (CRP) and high B-type natriuretic peptide (BNP) were added to the anemia component, the HR (95%CI) was slightly higher [5.81 (4.73-7.14)], even after adjusting for age, sex and comorbidities. Our models were validated in an external population of adults 80+. The extended model had a better predictive capacity for cardiovascular mortality [HR (95%CI)=5.05 (2.23-11.44)] compared to the baseline model [HR (95%CI)=2.17 (1.18-4.00)] in the external population. We developed and validated a new risk prediction score that may be used to identify older adults at higher risk for mortality in Russia. Additional studies are needed to determine which targeted interventions improve the outcomes of these at-risk individuals. Copyright © 2017 Elsevier B.V. All rights reserved.
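
    The baseline score above reduces to counting deficits and flagging two or more. A minimal sketch (the input flags are hypothetical; the cut-offs defining each deficit are omitted):

```python
# Component count behind the Crystal score: flag a person when two or more
# of the four deficits named in the abstract are present. The input flags
# are hypothetical; the per-component cut-offs are omitted here.

COMPONENTS = ["poor_physical_performance", "low_muscle_mass",
              "poor_lung_function", "anemia"]

def crystal_high_risk(person):
    n_deficits = sum(1 for c in COMPONENTS if person.get(c, False))
    return n_deficits >= 2  # abstract: HR ≈ 4.5 at two or more components

print(crystal_high_risk({"anemia": True, "low_muscle_mass": True}))  # True
print(crystal_high_risk({"anemia": True}))                           # False
```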

  20. Development of a claims-based risk score to identify obese individuals.

    PubMed

    Clark, Jeanne M; Chang, Hsien-Yen; Bolen, Shari D; Shore, Andrew D; Goodwin, Suzanne M; Weiner, Jonathan P

    2010-08-01

    Obesity is underdiagnosed, hampering system-based health promotion and research. Our objective was to develop and validate a claims-based risk model to identify obese persons using medical diagnosis and prescription records. We conducted a cross-sectional analysis of de-identified claims data from enrollees of 3 Blue Cross Blue Shield plans who completed a health risk assessment capturing height and weight. The final sample of 71,057 enrollees was randomly split into 2 subsamples for development and validation of the obesity risk model. Using the Johns Hopkins Adjusted Clinical Groups case-mix/predictive risk methodology, we categorized study members' diagnosis (ICD) codes. Logistic regression was used to determine which claims-based risk markers were associated with a body mass index (BMI) ≥35 kg/m². The sensitivities of scores ≥90th percentile for detecting obesity were 26% to 33%, while the specificities were >90%. The areas under the receiver operating characteristic curve ranged from 0.67 to 0.73. In contrast, a diagnosis of obesity or an obesity medication alone had very poor sensitivity (10% and 1%, respectively); the obesity risk model identified an additional 22% of obese members. Varying the percentile cut-point from the 70th to the 99th percentile resulted in positive predictive values ranging from 15.5 to 59.2. An obesity risk score was highly specific for detecting a BMI ≥35 kg/m² and substantially increased the detection of obese members beyond a provider-coded obesity diagnosis or medication claim. This model could be used for obesity care management and health promotion or for obesity-related research.
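
    Dichotomizing a risk score at a percentile cut-point, as done above, trades sensitivity against specificity. A toy sketch (the scores and labels are invented, not the study's data):

```python
# Sensitivity and specificity of a risk score dichotomized at a percentile
# cut-point, as done for the obesity score above. Scores and labels are toy
# values, not the study's data.

def cutoff_performance(scores, labels, percentile):
    ranked = sorted(scores)
    cut = ranked[int(len(ranked) * percentile / 100)]  # score at the cut-point
    tp = sum(1 for s, y in zip(scores, labels) if s >= cut and y)
    fn = sum(1 for s, y in zip(scores, labels) if s < cut and y)
    tn = sum(1 for s, y in zip(scores, labels) if s < cut and not y)
    fp = sum(1 for s, y in zip(scores, labels) if s >= cut and not y)
    return tp / (tp + fn), tn / (tn + fp)  # (sensitivity, specificity)

scores = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95]
labels = [0, 0, 0, 0, 0, 0, 1, 0, 1, 1]  # 1 = BMI >= 35 kg/m^2
sensitivity, specificity = cutoff_performance(scores, labels, 70)
print(sensitivity, specificity)
```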

  1. Predictor characteristics necessary for building a clinically useful risk prediction model: a simulation study.

    PubMed

    Schummers, Laura; Himes, Katherine P; Bodnar, Lisa M; Hutcheon, Jennifer A

    2016-09-21

    Compelled by the intuitive appeal of predicting each individual patient's risk of an outcome, there is growing interest in risk prediction models. While the statistical methods used to build prediction models are increasingly well understood, the literature offers little insight to researchers seeking to gauge a priori whether a prediction model is likely to perform well for their particular research question. The objective of this study was to inform the development of new risk prediction models by evaluating model performance under a wide range of predictor characteristics. Data from all births to overweight or obese women in British Columbia, Canada from 2004 to 2012 (n = 75,225) were used to build a risk prediction model for preeclampsia. The data were then augmented with simulated predictors of the outcome with pre-set prevalence values and univariable odds ratios. We built 120 risk prediction models that included known demographic and clinical predictors, and one, three, or five of the simulated variables. Finally, we evaluated standard model performance criteria (discrimination, risk stratification capacity, calibration, and Nagelkerke's r²) for each model. Findings from our models built with simulated predictors demonstrated the predictor characteristics required for a risk prediction model to adequately discriminate cases from non-cases and to adequately classify patients into clinically distinct risk groups. Several predictor characteristics can yield well-performing risk prediction models; however, these characteristics are not typical of predictor-outcome relationships in many population-based or clinical data sets. Novel predictors must be both strongly associated with the outcome and prevalent in the population to be useful for clinical prediction modeling (e.g., one predictor with prevalence ≥20% and odds ratio ≥8, or 3 predictors with prevalence ≥10% and odds ratios ≥4). Area under the receiver operating characteristic curve values of >0.8 were necessary to achieve reasonable risk stratification capacity. Our findings provide a guide for researchers to estimate the expected performance of a prediction model, based on the characteristics of available predictors, before the model has been built.
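
    The interplay of prevalence and odds ratio can be shown in closed form for a single binary predictor: its AUC equals (1 + Se − FPR)/2, where sensitivity and false-positive rate follow from Bayes' rule. This is a sketch under an assumed 5% baseline risk, not the study's simulation settings.

```python
# AUC achievable with a single binary predictor, given its prevalence and
# odds ratio. The baseline risk (5%) and example values are assumptions,
# not the study's simulation settings; the closed form uses Bayes' rule and
# the fact that a binary test's AUC equals (1 + Se - FPR) / 2.

def binary_predictor_auc(prevalence, odds_ratio, baseline_risk=0.05):
    r0 = baseline_risk                      # P(outcome | predictor absent)
    odds1 = odds_ratio * r0 / (1 - r0)
    r1 = odds1 / (1 + odds1)                # P(outcome | predictor present)
    q = prevalence
    se = q * r1 / (q * r1 + (1 - q) * r0)   # P(predictor | case)
    fpr = q * (1 - r1) / (q * (1 - r1) + (1 - q) * (1 - r0))
    return 0.5 * (1 + se - fpr)

print(binary_predictor_auc(prevalence=0.20, odds_ratio=8))  # strong predictor
print(binary_predictor_auc(prevalence=0.01, odds_ratio=8))  # rare: near 0.5
```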

  2. A contemporary risk model for predicting 30-day mortality following percutaneous coronary intervention in England and Wales.

    PubMed

    McAllister, Katherine S L; Ludman, Peter F; Hulme, William; de Belder, Mark A; Stables, Rodney; Chowdhary, Saqib; Mamas, Mamas A; Sperrin, Matthew; Buchan, Iain E

    2016-05-01

    The current risk model for percutaneous coronary intervention (PCI) in the UK is based on outcomes of patients treated in a different era of interventional cardiology. This study aimed to create a new model, based on a contemporary cohort of PCI-treated patients, which would: predict 30-day mortality; provide good discrimination; and be well calibrated across a broad risk spectrum. The model was derived from a training dataset of 336,433 PCI cases carried out between 2007 and 2011 in England and Wales, with 30-day mortality provided by record linkage. Candidate variables were selected on the basis of clinical consensus and data quality. Procedures in 2012 were used to perform temporal validation of the model. The strongest predictors of 30-day mortality were: cardiogenic shock; dialysis; and the indication for PCI and the degree of urgency with which it was performed. The model had an area under the receiver operator characteristic curve of 0.85 on the training data and 0.86 on validation. Calibration plots indicated a good model fit on development, which was maintained on validation. We have created a contemporary model for PCI that encompasses a range of clinical risk, from stable elective PCI to emergency primary PCI and cardiogenic shock. The model is easy to apply and based on data reported in national registries. It has a high degree of discrimination and is well calibrated across the risk spectrum. The examination of key outcomes in PCI audit can be improved with this risk-adjusted model. Copyright © 2016 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  3. Bayesian algorithm implementation in a real time exposure assessment model on benzene with calculation of associated cancer risks.

    PubMed

    Sarigiannis, Dimosthenis A; Karakitsios, Spyros P; Gotti, Alberto; Papaloukas, Costas L; Kassomenos, Pavlos A; Pilidis, Georgios A

    2009-01-01

    The objective of the current study was the development of a reliable modeling platform to calculate in real time the personal exposure and the associated health risk for filling station employees, evaluating current environmental parameters (traffic, meteorological conditions and amount of fuel traded) determined by an appropriate sensor network. A set of Artificial Neural Networks (ANNs) was developed to predict the benzene exposure pattern for the filling station employees. Furthermore, a Physiology-Based Pharmaco-Kinetic (PBPK) risk assessment model was developed in order to calculate the lifetime probability distribution of leukemia for the employees, fed by data obtained from the ANN model. A Bayesian algorithm was employed at crucial points of both model sub-compartments. The application was evaluated in two filling stations (one urban and one rural). Among several algorithms available for the development of the ANN exposure model, Bayesian regularization provided the best results and seemed to be a promising technique for prediction of the exposure pattern of that occupational population group. In assessing the estimated leukemia risk with the aim of providing a distribution curve based on the exposure levels and the different susceptibility of the population, the Bayesian algorithm was a prerequisite of the Monte Carlo approach, which is integrated in the PBPK-based risk model. In conclusion, the modeling system described herein is capable of exploiting the information collected by the environmental sensors in order to estimate in real time the personal exposure and the resulting health risk for employees of gasoline filling stations.

  5. Modeling intelligent adversaries for terrorism risk assessment: some necessary conditions for adversary models.

    PubMed

    Guikema, Seth

    2012-07-01

    Intelligent adversary modeling has become increasingly important for risk analysis, and a number of different approaches have been proposed for incorporating intelligent adversaries in risk analysis models. However, these approaches are based on a range of often-implicit assumptions about the desirable properties of intelligent adversary models. This "Perspective" paper aims to further risk analysis for situations involving intelligent adversaries by fostering a discussion of the desirable properties for these models. A set of four basic necessary conditions for intelligent adversary models is proposed and discussed. These are: (1) behavioral accuracy to the degree possible, (2) computational tractability to support decision making, (3) explicit consideration of uncertainty, and (4) ability to gain confidence in the model. It is hoped that these suggested necessary conditions foster discussion about the goals and assumptions underlying intelligent adversary modeling in risk analysis. © 2011 Society for Risk Analysis.

  6. Latent variable model for suicide risk in relation to social capital and socio-economic status.

    PubMed

    Congdon, Peter

    2012-08-01

    There is little evidence on the association between suicide outcomes (ideation, attempts, self-harm) and social capital. This paper investigates such associations using a structural equation model based on health survey data, and allowing for both individual and contextual risk factors. Social capital and other major risk factors for suicide, namely socioeconomic status and social isolation, are modelled as latent variables that are proxied (or measured) by observed indicators or question responses for survey subjects. These latent scales predict suicide risk in the structural component of the model. Also relevant to explaining suicide risk are contextual variables, such as area deprivation and region of residence, as well as the subject's demographic status. The analysis is based on the 2007 Adult Psychiatric Morbidity Survey and includes 7,403 English subjects. A Bayesian modelling strategy is used. Models with and without social capital as a predictor of suicide risk are applied. A benefit to statistical fit is demonstrated when social capital is added as a predictor. Social capital varies significantly by geographic context variables (neighbourhood deprivation, region), and this impacts on the direct effects of these contextual variables on suicide risk. In particular, area deprivation is not confirmed as a distinct significant influence. The model develops a suicidality risk score incorporating social capital, and the success of this risk score in predicting actual suicide events is demonstrated. Social capital as reflected in neighbourhood perceptions is a significant factor affecting risks of different types of self-harm and may mediate the effects of other contextual variables such as area deprivation.

  7. Construction risk assessment of deep foundation pit in metro station based on G-COWA method

    NASA Astrophysics Data System (ADS)

    You, Weibao; Wang, Jianbo; Zhang, Wei; Liu, Fangmeng; Yang, Diying

    2018-05-01

    In order to gain an accurate understanding of the construction safety of deep foundation pits in metro stations, and to reduce the probability and loss of risk occurrence, a risk assessment method based on G-COWA is proposed. Firstly, drawing on specific engineering examples and the construction characteristics of deep foundation pits, an evaluation index system based on the five factors of “human, management, technology, material and environment” is established. Secondly, the C-OWA operator is introduced to weight the evaluation indices and weaken the negative influence of subjective expert preference. Grey cluster analysis and the fuzzy comprehensive evaluation method are combined to construct a construction risk assessment model for deep foundation pits that can effectively handle the uncertainties involved. Finally, the model is applied to the actual deep foundation pit project of Qingdao Metro North Station: its construction risk rating is determined to be “medium”, and the model is shown to be feasible and reasonable. Corresponding control measures are then put forward, providing a useful reference.
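
    A commonly cited form of the C-OWA operator in this literature sorts expert scores in descending order and aggregates them with binomial-coefficient weights, which damps extreme judgments. This sketch assumes that standard form; the expert scores are invented.

```python
from math import comb

# One common form of the C-OWA ("combination number" OWA) operator: expert
# scores are sorted in descending order and aggregated with binomial
# weights w_j = C(n-1, j-1) / 2**(n-1), damping extreme judgments. The
# expert scores below are invented; this sketch assumes that standard form.

def c_owa(scores):
    n = len(scores)
    weights = [comb(n - 1, j) / 2 ** (n - 1) for j in range(n)]  # sums to 1
    ordered = sorted(scores, reverse=True)
    return sum(w * s for w, s in zip(weights, ordered))

expert_scores = [7, 9, 6, 8, 7]  # five experts rate one index (0-10 scale)
print(c_owa(expert_scores))      # → 7.3125
```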

  8. Including operational data in QMRA model: development and impact of model inputs.

    PubMed

    Jaidi, Kenza; Barbeau, Benoit; Carrière, Annie; Desjardins, Raymond; Prévost, Michèle

    2009-03-01

    A Monte Carlo model, based on the Quantitative Microbial Risk Analysis approach (QMRA), has been developed to assess the relative risks of infection associated with the presence of Cryptosporidium and Giardia in drinking water. The impact of various approaches for modelling the initial parameters of the model on the final risk assessments is evaluated. The Monte Carlo simulations that we performed showed that the occurrence of parasites in raw water was best described by a mixed distribution: log-Normal for concentrations > detection limit (DL), and a uniform distribution for concentrations < DL. The selection of process performance distributions for modelling the performance of treatment (filtration and ozonation) influences the estimated risks significantly. The mean annual risks for conventional treatment are: 1.97E-03 (removal credit adjusted by log parasite = log spores), 1.58E-05 (log parasite = 1.7 x log spores) or 9.33E-03 (regulatory credits based on the turbidity measurement in filtered water). Using full scale validated SCADA data, the simplified calculation of CT performed at the plant was shown to largely underestimate the risk relative to a more detailed CT calculation, which takes into consideration the downtime and system failure events identified at the plant (1.46E-03 vs. 3.93E-02 for the mean risk).
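
    The model structure described above (mixed source-water distribution, treatment log-removal, dose-response, daily-to-annual compounding) can be sketched in miniature. All parameter values below are illustrative assumptions, not the plant's data.

```python
import math
import random

# Toy Monte Carlo with the same structure as the QMRA above: a mixed raw-water
# distribution (log-normal above the detection limit, uniform below it), a
# fixed treatment log-removal, an exponential dose-response model, and an
# annual risk compounded from daily risks. All parameter values are
# illustrative assumptions, not the plant's data.

random.seed(1)

DL = 0.1           # detection limit (oocysts/L)
P_BELOW_DL = 0.6   # fraction of raw-water samples below the detection limit
LOG_REMOVAL = 3.0  # treatment credit (log10 units)
VOLUME_L = 1.0     # daily consumption of unboiled tap water (L)
R_DOSE = 0.004     # exponential dose-response parameter (assumed)

def daily_risk():
    if random.random() < P_BELOW_DL:
        conc = random.uniform(0.0, DL)                    # below DL: uniform
    else:
        conc = random.lognormvariate(math.log(0.5), 1.0)  # above DL: log-normal
    dose = conc * VOLUME_L / 10 ** LOG_REMOVAL
    return 1.0 - math.exp(-R_DOSE * dose)

annual = [1.0 - math.prod(1.0 - daily_risk() for _ in range(365))
          for _ in range(200)]
mean_annual_risk = sum(annual) / len(annual)
print(mean_annual_risk)
```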

  9. Determining Risk of Barrett's Esophagus and Esophageal Adenocarcinoma Based on Epidemiologic Factors and Genetic Variants.

    PubMed

    Dong, Jing; Buas, Matthew F; Gharahkhani, Puya; Kendall, Bradley J; Onstad, Lynn; Zhao, Shanshan; Anderson, Lesley A; Wu, Anna H; Ye, Weimin; Bird, Nigel C; Bernstein, Leslie; Chow, Wong-Ho; Gammon, Marilie D; Liu, Geoffrey; Caldas, Carlos; Pharoah, Paul D; Risch, Harvey A; Iyer, Prasad G; Reid, Brian J; Hardie, Laura J; Lagergren, Jesper; Shaheen, Nicholas J; Corley, Douglas A; Fitzgerald, Rebecca C; Whiteman, David C; Vaughan, Thomas L; Thrift, Aaron P

    2018-04-01

    We developed comprehensive models to determine risk of Barrett's esophagus (BE) or esophageal adenocarcinoma (EAC) based on genetic and non-genetic factors. We used pooled data from 3288 patients with BE, 2511 patients with EAC, and 2177 individuals without either (controls) from participants in the international Barrett's and EAC consortium as well as the United Kingdom's BE gene study and stomach and esophageal cancer study. We collected data on 23 genetic variants associated with risk for BE or EAC, and constructed a polygenic risk score (PRS) for cases and controls by summing the risk allele counts for the variants weighted by their natural log-transformed effect estimates (odds ratios) extracted from genome-wide association studies. We also collected data on demographic and lifestyle factors (age, sex, smoking, body mass index, use of nonsteroidal anti-inflammatory drugs) and symptoms of gastroesophageal reflux disease (GERD). Risk models with various combinations of non-genetic factors and the PRS were compared for their accuracy in identifying patients with BE or EAC using area under the receiver operating characteristic curve (AUC) analysis. Individuals in the highest quartile of risk, based on genetic factors (PRS), had a 2-fold higher risk of BE (odds ratio, 2.22; 95% confidence interval, 1.89-2.60) or EAC (odds ratio, 2.46; 95% confidence interval, 2.07-2.92) than individuals in the lowest quartile of risk based on PRS. Risk models developed based on only demographic or lifestyle factors or GERD symptoms identified patients with BE or EAC with AUC values ranging from 0.637 to 0.667. Combining data on demographic or lifestyle factors with data on GERD symptoms identified patients with BE with an AUC of 0.793 and patients with EAC with an AUC of 0.745. Including PRSs with these data only minimally increased the AUC values for BE (to 0.799) and EAC (to 0.754). 
Including the PRSs in the model developed based on non-genetic factors resulted in a net reclassification improvement for BE of 3.0% and for EAC of 5.6%. We used data from 3 large databases of patients from studies of BE or EAC to develop a risk prediction model based on genetic, clinical, and demographic/lifestyle factors. We identified a PRS that increases discrimination and net reclassification of individuals with vs without BE and EAC. However, the absolute magnitude of improvement is not sufficient to justify its clinical use. Copyright © 2018 AGA Institute. Published by Elsevier Inc. All rights reserved.
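
    The PRS construction described above (risk-allele counts weighted by natural-log odds ratios) reduces to a one-line weighted sum. The variants and odds ratios below are hypothetical stand-ins, not the study's 23 GWAS variants:

    ```python
    import math

    # Hypothetical variants: (risk-allele count for one individual, published OR)
    genotype_and_or = [
        (2, 1.20),  # homozygous for the risk allele
        (1, 1.15),  # heterozygous
        (0, 1.30),  # no copies of the risk allele
        (1, 0.85),  # protective allele (OR < 1 lowers the score)
    ]

    def polygenic_risk_score(variants):
        """PRS = sum over variants of allele_count * ln(odds_ratio)."""
        return sum(count * math.log(odds) for count, odds in variants)

    prs = polygenic_risk_score(genotype_and_or)
    print(f"PRS = {prs:.3f}")
    ```

    Scores computed this way are then compared across quartiles of the case-control population, as in the abstract's highest-versus-lowest quartile odds ratios.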

  10. Electronic medical record-based multicondition models to predict the risk of 30 day readmission or death among adult medicine patients: validation and comparison to existing models.

    PubMed

    Amarasingham, Ruben; Velasco, Ferdinand; Xie, Bin; Clark, Christopher; Ma, Ying; Zhang, Song; Bhat, Deepa; Lucena, Brian; Huesch, Marco; Halm, Ethan A

    2015-05-20

    There is increasing interest in using prediction models to identify patients at risk of readmission or death after hospital discharge, but existing models have significant limitations. Electronic medical record (EMR)-based models that can be used to predict risk on multiple disease conditions among a wide range of patient demographics early in the hospitalization are needed. The objective of this study was to evaluate the degree to which EMR-based risk models for 30-day readmission or mortality accurately identify high risk patients and to compare these models with published claims-based models. Data were analyzed from all consecutive adult patients admitted to internal medicine services at 7 large hospitals belonging to 3 health systems in Dallas/Fort Worth between November 2009 and October 2010 and split randomly into derivation and validation cohorts. Performance of the model was evaluated against the Canadian LACE mortality or readmission model and the Centers for Medicare and Medicaid Services (CMS) Hospital Wide Readmission model. Among the 39,604 adults hospitalized for a broad range of medical reasons, 2.8% of patients died, 12.7% were readmitted, and 14.7% were readmitted or died within 30 days after discharge. The electronic multicondition models for the composite outcome of 30-day mortality or readmission had good discrimination using data available within 24 h of admission (C statistic 0.69; 95% CI, 0.68-0.70), or at discharge (0.71; 95% CI, 0.70-0.72), and were significantly better than the LACE model (0.65; 95% CI, 0.64-0.66; P = 0.02) with significant NRI (0.16) and IDI (0.039, 95% CI, 0.035-0.044). 
The electronic multicondition model for 30-day readmission alone had good discrimination using data available within 24 h of admission (C statistic 0.66; 95% CI, 0.65-0.67) or at discharge (0.68; 95% CI, 0.67-0.69), and performed significantly better than the CMS model (0.61; 95% CI, 0.59-0.62; P < 0.01) with significant NRI (0.20) and IDI (0.037, 95% CI, 0.033-0.041). A new electronic multicondition model based on information derived from the EMR predicted mortality and readmission at 30 days, and was superior to previously published claims-based models.

  11. RESIDUAL RISK ASSESSMENTS - FINAL RESIDUAL RISK ASSESSMENT FOR SECONDARY LEAD SMELTERS

    EPA Science Inventory

    This source category previously subjected to a technology-based standard will be examined to determine if health or ecological risks are significant enough to warrant further regulation for Secondary Lead Smelters. These assessments utilize existing models and databases to examin...

  12. A bootstrap based space-time surveillance model with an application to crime occurrences

    NASA Astrophysics Data System (ADS)

    Kim, Youngho; O'Kelly, Morton

    2008-06-01

    This study proposes a bootstrap-based space-time surveillance model. Designed to find emerging hotspots in near-real time, the bootstrap-based model is characterized by its use of past occurrence information and bootstrap permutations. Many existing space-time surveillance methods, using population at risk data to generate expected values, have resulting hotspots bounded by administrative area units and are of limited use for near-real time applications because of the population data needed. However, this study generates expected values for local hotspots from past occurrences rather than population at risk. Also, bootstrap permutations of previous occurrences are used for significance tests. Consequently, the bootstrap-based model, without the requirement of population at risk data, (1) is free from administrative area restriction, (2) enables more frequent surveillance for continuously updated registry databases, and (3) is readily applicable to criminology and epidemiology surveillance. The bootstrap-based model performs better for space-time surveillance than the space-time scan statistic. This is shown by means of simulations and an application to residential crime occurrences in Columbus, OH, year 2000.
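
    The core of such a test, comparing this period's count in a local area against bootstrap draws from that area's own past occurrences (no population-at-risk data required), can be sketched as a toy example; the counts and threshold logic below are illustrative, not the paper's full space-time procedure:

    ```python
    import random

    random.seed(7)

    # Hypothetical weekly occurrence counts for one local area; the null
    # distribution comes from past occurrences, not from population at risk.
    past_counts = [3, 5, 2, 4, 6, 3, 4, 5, 2, 3]
    current_count = 11  # this week's observed count

    def bootstrap_p_value(history, observed, n_boot=2000):
        """Fraction of bootstrap draws from the past record that reach the
        observed count; a small value flags an emerging hotspot."""
        exceed = sum(1 for _ in range(n_boot)
                     if random.choice(history) >= observed)
        return exceed / n_boot

    p = bootstrap_p_value(past_counts, current_count)
    print(f"bootstrap p-value: {p:.3f}")
    ```

    Because the reference distribution is the area's own occurrence history, the procedure updates naturally as a continuously maintained registry grows.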

  13. Sensors vs. experts - a performance comparison of sensor-based fall risk assessment vs. conventional assessment in a sample of geriatric patients.

    PubMed

    Marschollek, Michael; Rehwald, Anja; Wolf, Klaus-Hendrik; Gietzelt, Matthias; Nemitz, Gerhard; zu Schwabedissen, Hubertus Meyer; Schulze, Mareike

    2011-06-28

    Fall events contribute significantly to mortality, morbidity and costs in our ageing population. In order to identify persons at risk and to target preventive measures, many scores and assessment tools have been developed. These often require expertise and are costly to implement. Recent research investigates the use of wearable inertial sensors to provide objective data on motion features which can be used to assess individual fall risk automatically. So far it is unknown how well this new method performs in comparison with conventional fall risk assessment tools. The aim of our research is to compare the predictive performance of our new sensor-based method with conventional and established methods, based on prospective data. In a first study phase, 119 inpatients of a geriatric clinic took part in motion measurements using a wireless triaxial accelerometer during a Timed Up&Go (TUG) test and a 20 m walk. Furthermore, the St. Thomas Risk Assessment Tool in Falling Elderly Inpatients (STRATIFY) was performed, and the multidisciplinary geriatric care team estimated the patients' fall risk. In a second follow-up phase of the study, 46 of the participants were interviewed after one year, including a fall and activity assessment. The predictive performances of the TUG, the STRATIFY and team scores are compared. Furthermore, two automatically induced logistic regression models based on conventional clinical and assessment data (CONV) as well as sensor data (SENSOR) are matched. Among the risk assessment scores, the geriatric team score (sensitivity 56%, specificity 80%) outperforms STRATIFY and TUG. The induced logistic regression models CONV and SENSOR achieve similar performance values (sensitivity 68%/58%, specificity 74%/78%, AUC 0.74/0.72, +LR 2.64/2.61). Both models are able to identify more persons at risk than the simple scores. 
Sensor-based objective measurements of motion parameters in geriatric patients can be used to assess individual fall risk, and our prediction model's performance matches that of a model based on conventional clinical and assessment data. Sensor-based measurements using a small wearable device may contribute significant information to conventional methods and are feasible in an unsupervised setting. More prospective research is needed to assess the cost-benefit relation of our approach.
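
    The sensitivity and specificity figures quoted for the scores and the CONV/SENSOR models come from a standard confusion-matrix summary; a minimal sketch with made-up follow-up labels (not the study's data):

    ```python
    def sensitivity_specificity(y_true, y_pred):
        """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
        tp = sum(1 for t, q in zip(y_true, y_pred) if t and q)
        fn = sum(1 for t, q in zip(y_true, y_pred) if t and not q)
        tn = sum(1 for t, q in zip(y_true, y_pred) if not t and not q)
        fp = sum(1 for t, q in zip(y_true, y_pred) if not t and q)
        return tp / (tp + fn), tn / (tn + fp)

    # Toy follow-up data: 1 = fell during follow-up / flagged as at-risk
    fell      = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
    predicted = [1, 1, 1, 0, 0, 0, 0, 1, 0, 0]
    sens, spec = sensitivity_specificity(fell, predicted)
    print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
    ```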

  14. Sensors vs. experts - A performance comparison of sensor-based fall risk assessment vs. conventional assessment in a sample of geriatric patients

    PubMed Central

    2011-01-01

    Background Fall events contribute significantly to mortality, morbidity and costs in our ageing population. In order to identify persons at risk and to target preventive measures, many scores and assessment tools have been developed. These often require expertise and are costly to implement. Recent research investigates the use of wearable inertial sensors to provide objective data on motion features which can be used to assess individual fall risk automatically. So far it is unknown how well this new method performs in comparison with conventional fall risk assessment tools. The aim of our research is to compare the predictive performance of our new sensor-based method with conventional and established methods, based on prospective data. Methods In a first study phase, 119 inpatients of a geriatric clinic took part in motion measurements using a wireless triaxial accelerometer during a Timed Up&Go (TUG) test and a 20 m walk. Furthermore, the St. Thomas Risk Assessment Tool in Falling Elderly Inpatients (STRATIFY) was performed, and the multidisciplinary geriatric care team estimated the patients' fall risk. In a second follow-up phase of the study, 46 of the participants were interviewed after one year, including a fall and activity assessment. The predictive performances of the TUG, the STRATIFY and team scores are compared. Furthermore, two automatically induced logistic regression models based on conventional clinical and assessment data (CONV) as well as sensor data (SENSOR) are matched. Results Among the risk assessment scores, the geriatric team score (sensitivity 56%, specificity 80%) outperforms STRATIFY and TUG. The induced logistic regression models CONV and SENSOR achieve similar performance values (sensitivity 68%/58%, specificity 74%/78%, AUC 0.74/0.72, +LR 2.64/2.61). Both models are able to identify more persons at risk than the simple scores. 
Conclusions Sensor-based objective measurements of motion parameters in geriatric patients can be used to assess individual fall risk, and our prediction model's performance matches that of a model based on conventional clinical and assessment data. Sensor-based measurements using a small wearable device may contribute significant information to conventional methods and are feasible in an unsupervised setting. More prospective research is needed to assess the cost-benefit relation of our approach. PMID:21711504

  15. A biological approach to characterizing exposure to metalworking fluids and risk of prostate cancer (United States).

    PubMed

    Agalliu, Ilir; Eisen, Ellen A; Kriebel, David; Quinn, Margaret M; Wegman, David H

    2005-05-01

    Prostate cancer is hormone-related and chemicals that interfere with hormones may contribute to carcinogenesis. In a cohort of autoworkers we characterized exposure to metalworking fluids (MWF) into age windows with homogenous biological risk for prostate cancer, and examined exposure-response relationships using semi-parametric modeling. Incident cases (n=872) were identified via the Michigan cancer registry from 1985 through 2000. Controls were selected using incidence-density sampling, 5:1 ratio. Using a hormonal-based model, exposure was accumulated in three windows: (1) late puberty, (2) adulthood, and (3) middle age. We used penalized splines to model risk as a smooth function of exposure, and controlled for race and calendar year of diagnosis in a Cox model. Risk of prostate cancer increased linearly with exposure to straight MWF in the first window, with a relative risk of 2.4 per 10 mg/m³-years. Autoworkers exposed to MWF at a young age also had an increased risk associated with MWF exposure incurred later in life. For soluble MWF there was a slightly increased risk in the third window. Exposure characterization based on a hormonal model identified heightened risk with early age of exposure to straight MWF. Results also support a long latency period for exposure related prostate cancer.

  16. Vehicle crashworthiness ratings in Australia.

    PubMed

    Cameron, M; Mach, T; Neiger, D; Graham, A; Ramsay, R; Pappas, M; Haley, J

    1994-08-01

    The paper reviews the published vehicle safety ratings based on mass crash data from the United States, Sweden, and Great Britain. It then describes the development of vehicle crashworthiness ratings based on injury compensation claims and police accident reports from Victoria and New South Wales, the two most populous states in Australia. Crashworthiness was measured by a combination of injury severity (of injured drivers) and injury risk (of drivers involved in crashes). Injury severity was based on 22,600 drivers injured in crashes in the two states. Injury risk was based on 70,900 drivers in New South Wales involved in crashes after which a vehicle was towed away. Injury risk measured in this way was compared with the "relative injury risk" of particular model cars involved in two car crashes in Victoria (where essentially only casualty crashes are reported), which was based on the method developed by Folksam Insurance in Sweden from Evans' double-pair comparison method. The results include crashworthiness ratings for the makes and models crashing in Australia in sufficient numbers to measure their crash performance adequately. The ratings were normalised for the driver sex and speed limit at the crash location, the two factors found to be strongly related to injury risk and/or severity and to vary substantially across makes and models of Australian crash-involved cars. This allows differences in crashworthiness of individual models to be seen, uncontaminated by major crash exposure differences.

  17. An Empirical Agent-Based Model to Simulate the Adoption of Water Reuse Using the Social Amplification of Risk Framework.

    PubMed

    Kandiah, Venu; Binder, Andrew R; Berglund, Emily Z

    2017-10-01

    Water reuse can serve as a sustainable alternative water source for urban areas. However, the successful implementation of large-scale water reuse projects depends on community acceptance. Because of the negative perceptions that are traditionally associated with reclaimed water, water reuse is often not considered in the development of urban water management plans. This study develops a simulation model for understanding community opinion dynamics surrounding the issue of water reuse, and how individual perceptions evolve within that context, which can help in the planning and decision-making process. Based on the social amplification of risk framework, our agent-based model simulates consumer perceptions, discussion patterns, and their adoption or rejection of water reuse. The model is based on the "risk publics" model, an empirical approach that uses the concept of belief clusters to explain the adoption of new technology. Each household is represented as an agent, and parameters that define their behavior and attributes are defined from survey data. Community-level parameters, including social groups, relationships, and communication variables (also from survey data), are encoded to simulate the social processes that influence community opinion. The model demonstrates its capabilities to simulate opinion dynamics and consumer adoption of water reuse. In addition, based on empirical data, the model is applied to investigate water reuse behavior in different regions of the United States. Importantly, our results reveal that public opinion dynamics emerge differently based on membership in opinion clusters, frequency of discussion, and the structure of social networks. © 2017 Society for Risk Analysis.

  18. 68Ga-PSMA-617 PET/CT: a promising new technique for predicting risk stratification and metastatic risk of prostate cancer patients.

    PubMed

    Liu, Chen; Liu, Teli; Zhang, Ning; Liu, Yiqiang; Li, Nan; Du, Peng; Yang, Yong; Liu, Ming; Gong, Kan; Yang, Xing; Zhu, Hua; Yan, Kun; Yang, Zhi

    2018-05-02

    The purpose of this study was to investigate the performance of 68Ga-PSMA-617 PET/CT in predicting risk stratification and metastatic risk of prostate cancer. Fifty newly diagnosed patients with prostate cancer as confirmed by needle biopsy were consecutively included, 40 in a train set and ten in a test set. 68Ga-PSMA-617 PET/CT and clinical data of all patients were retrospectively analyzed. Semi-quantitative analysis of PET images provided maximum standardized uptake (SUVmax) of primary prostate cancer and volumetric parameters including intraprostatic PSMA-derived tumor volume (iPSMA-TV) and intraprostatic total lesion PSMA (iTL-PSMA). According to the prostate cancer risk stratification criteria of the NCCN Guideline, all patients were classified into a low-intermediate risk group or a high-risk group. The semi-quantitative parameters of 68Ga-PSMA-617 PET/CT were used to establish univariate logistic regression models for high-risk prostate cancer and its metastatic risk, and to evaluate the diagnostic efficacy of the predictive models. In the train set, 30/40 (75%) patients had high-risk prostate cancer and 10/40 (25%) patients had low-intermediate risk prostate cancer; in the test set, 8/10 (80%) patients had high-risk prostate cancer while 2/10 (20%) had low-intermediate risk prostate cancer. The univariate logistic regression models established with SUVmax, iPSMA-TV and iTL-PSMA could all effectively predict high-risk prostate cancer; the AUCs of the ROC curves were 0.843, 0.802 and 0.900, respectively. Based on the test set, the sensitivity and specificity of each model were 87.5% and 50% for SUVmax, 62.5% and 100% for iPSMA-TV, and 87.5% and 100% for iTL-PSMA, respectively. The iPSMA-TV- and iTL-PSMA-based predictive models could predict the metastatic risk of prostate cancer, with AUCs of 0.863 and 0.848, respectively, but the SUVmax-based prediction model could not predict metastatic risk. Semi-quantitative analysis indices of 68Ga-PSMA-617 PET/CT imaging can be used as "imaging biomarkers" to predict risk stratification and metastatic risk of prostate cancer.

  19. Analysis of dengue fever risk using geostatistics model in bone regency

    NASA Astrophysics Data System (ADS)

    Amran, Stang, Mallongi, Anwar

    2017-03-01

    This research aims to analyze dengue fever risk based on a Geostatistics model in Bone Regency. Risk levels of dengue fever are denoted by the parameter of a Binomial distribution. Effects of temperature, rainfall, elevation, and larvae abundance are investigated through the Geostatistics model. A Bayesian hierarchical method is used in the estimation process. Using dengue fever data from eleven locations, this research shows that temperature and rainfall have significant effects on dengue fever risk in Bone Regency.

  20. Common carotid intima-media thickness does not add to Framingham risk score in individuals with diabetes mellitus: the USE-IMT initiative.

    PubMed

    den Ruijter, H M; Peters, S A E; Groenewegen, K A; Anderson, T J; Britton, A R; Dekker, J M; Engström, G; Eijkemans, M J; Evans, G W; de Graaf, J; Grobbee, D E; Hedblad, B; Hofman, A; Holewijn, S; Ikeda, A; Kavousi, M; Kitagawa, K; Kitamura, A; Koffijberg, H; Ikram, M A; Lonn, E M; Lorenz, M W; Mathiesen, E B; Nijpels, G; Okazaki, S; O'Leary, D H; Polak, J F; Price, J F; Robertson, C; Rembold, C M; Rosvall, M; Rundek, T; Salonen, J T; Sitzer, M; Stehouwer, C D A; Witteman, J C; Moons, K G; Bots, M L

    2013-07-01

    The aim of this work was to investigate whether measurement of the mean common carotid intima-media thickness (CIMT) improves cardiovascular risk prediction in individuals with diabetes. We performed a subanalysis among 4,220 individuals with diabetes in a large ongoing individual participant data meta-analysis involving 56,194 subjects from 17 population-based cohorts worldwide. We first refitted the risk factors of the Framingham heart risk score on the individuals without previous cardiovascular disease (baseline model) and then expanded this model with the mean common CIMT (CIMT model). The absolute 10 year risk for developing a myocardial infarction or stroke was estimated from both models. In individuals with diabetes we compared discrimination and calibration of the two models. Reclassification of individuals with diabetes was based on allocation to another cardiovascular risk category when mean common CIMT was added. During a median follow-up of 8.7 years, 684 first-time cardiovascular events occurred among the population with diabetes. The C statistic was 0.67 for the Framingham model and 0.68 for the CIMT model. The absolute 10 year risk for developing a myocardial infarction or stroke was 16% in both models. There was no net reclassification improvement with the addition of mean common CIMT (1.7%; 95% CI -1.8, 3.8). There were no differences in the results between men and women. There is no improvement in risk prediction in individuals with diabetes when measurement of the mean common CIMT is added to the Framingham risk score. Therefore, this measurement is not recommended for improving individual cardiovascular risk stratification in individuals with diabetes.

  1. A Predictive Risk Model for A(H7N9) Human Infections Based on Spatial-Temporal Autocorrelation and Risk Factors: China, 2013–2014

    PubMed Central

    Dong, Wen; Yang, Kun; Xu, Quan-Li; Yang, Yu-Lian

    2015-01-01

    This study investigated the spatial distribution, spatial autocorrelation, temporal clusters, spatial-temporal autocorrelation and probable risk factors of H7N9 outbreaks in humans from March 2013 to December 2014 in China. The results showed that the epidemic spread with significant spatial-temporal autocorrelation. In order to describe the spatial-temporal autocorrelation of H7N9, an improved model was developed by introducing a spatial-temporal factor in this paper. Logistic regression analyses were utilized to investigate the risk factors associated with their distribution, and nine risk factors were significantly associated with the occurrence of A(H7N9) human infections: the spatial-temporal factor φ (OR = 2546669.382, p < 0.001), migration route (OR = 0.993, p < 0.01), river (OR = 0.861, p < 0.001), lake (OR = 0.992, p < 0.001), road (OR = 0.906, p < 0.001), railway (OR = 0.980, p < 0.001), temperature (OR = 1.170, p < 0.01), precipitation (OR = 0.615, p < 0.001) and relative humidity (OR = 1.337, p < 0.001). The improved model obtained a better prediction performance and a higher fitting accuracy than the traditional model: 90.1% (91/101) of the cases during February 2014 occurred in the high risk areas (predictive risk > 0.70) of the improved model's predictive risk map, whereas only 44.6% (45/101) fell within the high risk areas (predictive risk > 0.70) of the traditional model, and the fitting accuracy of the improved model was 91.6%, superior to the traditional model's 86.1%. The predictive risk map generated based on the improved model revealed that the east and southeast of China were the high risk areas of A(H7N9) human infections in February 2014. These results provided baseline data for the control and prevention of future human infections. PMID:26633446
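
    The logistic regression underlying these odds ratios maps covariate values to an occurrence risk through the inverse-logit; slopes are the natural logs of the reported ORs (beta = ln(OR)). The intercept and covariate values below are hypothetical, used only to show the mechanics:

    ```python
    import math

    def logistic_risk(intercept, coef_value_pairs):
        """Predicted occurrence risk: p = 1 / (1 + exp(-(b0 + sum(bi * xi))))."""
        z = intercept + sum(b * x for b, x in coef_value_pairs)
        return 1.0 / (1.0 + math.exp(-z))

    beta_temp = math.log(1.170)      # temperature (per unit), from OR = 1.170
    beta_humidity = math.log(1.337)  # relative humidity, from OR = 1.337
    p_cell = logistic_risk(-8.0, [(beta_temp, 15.0), (beta_humidity, 20.0)])
    print(f"predicted risk for this grid cell: {p_cell:.3f}")
    ```

    Mapping such predicted probabilities over a grid, with a cutoff like the abstract's 0.70, yields the kind of high-risk-area map described above.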

  2. Quantile uncertainty and value-at-risk model risk.

    PubMed

    Alexander, Carol; Sarabia, José María

    2012-08-01

    This article develops a methodology for quantifying model risk in quantile risk estimates. The application of quantile estimates to risk assessment has become common practice in many disciplines, including hydrology, climate change, statistical process control, insurance and actuarial science, and the uncertainty surrounding these estimates has long been recognized. Our work is particularly important in finance, where quantile estimates (called Value-at-Risk) have been the cornerstone of banking risk management since the mid 1980s. A recent amendment to the Basel II Accord recommends additional market risk capital to cover all sources of "model risk" in the estimation of these quantiles. We provide a novel and elegant framework whereby quantile estimates are adjusted for model risk, relative to a benchmark which represents the state of knowledge of the authority that is responsible for model risk. A simulation experiment in which the degree of model risk is controlled illustrates how to quantify Value-at-Risk model risk and compute the required regulatory capital add-on for banks. An empirical example based on real data shows how the methodology can be put into practice, using only two time series (daily Value-at-Risk and daily profit and loss) from a large bank. We conclude with a discussion of potential applications to nonfinancial risks. © 2012 Society for Risk Analysis.
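
    The quantile estimates in question can be illustrated with a plain historical-simulation VaR; the benchmark-based model-risk adjustment the article develops is not reproduced here, and the P&L series is simulated rather than real bank data:

    ```python
    import random

    random.seed(42)

    # Hypothetical daily P&L series for a trading book (simulated, not real data)
    pnl = [random.gauss(0.0, 1.0) for _ in range(1000)]

    def historical_var(pnl_series, alpha=0.01):
        """Historical-simulation VaR: the alpha-quantile of the empirical
        P&L distribution, reported as a positive loss figure."""
        ordered = sorted(pnl_series)
        k = max(0, int(alpha * len(ordered)) - 1)
        return -ordered[k]

    var_99 = historical_var(pnl, alpha=0.01)
    print(f"99% one-day VaR: {var_99:.2f}")
    ```

    The article's point is that this estimate itself is uncertain, so a capital add-on is computed by comparing it against a benchmark model's quantile.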

  3. Conditional Toxicity Value (CTV) Predictor: An In Silico Approach for Generating Quantitative Risk Estimates for Chemicals.

    PubMed

    Wignall, Jessica A; Muratov, Eugene; Sedykh, Alexander; Guyton, Kathryn Z; Tropsha, Alexander; Rusyn, Ivan; Chiu, Weihsueh A

    2018-05-01

    Human health assessments synthesize human, animal, and mechanistic data to produce toxicity values that are key inputs to risk-based decision making. Traditional assessments are data-, time-, and resource-intensive, and they cannot be developed for most environmental chemicals owing to a lack of appropriate data. As recommended by the National Research Council, we propose a solution for predicting toxicity values for data-poor chemicals through development of quantitative structure-activity relationship (QSAR) models. We used a comprehensive database of chemicals with existing regulatory toxicity values from U.S. federal and state agencies to develop QSAR models. We compared QSAR-based model predictions to those based on high-throughput screening (HTS) assays. QSAR models for noncancer threshold-based values and cancer slope factors had cross-validation-based Q² of 0.25-0.45, mean model errors of 0.70-1.11 log₁₀ units, and applicability domains covering >80% of environmental chemicals. Toxicity values predicted from QSAR models developed in this study were more accurate and precise than those based on HTS assays or mean-based predictions. A publicly accessible web interface to make predictions for any chemical of interest is available at http://toxvalue.org. An in silico tool that can predict toxicity values with an uncertainty of an order of magnitude or less can be used to quickly and quantitatively assess risks of environmental chemicals when traditional toxicity data or human health assessments are unavailable. This tool can fill a critical gap in the risk assessment and management of data-poor chemicals. https://doi.org/10.1289/EHP2998.
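
    The cross-validated Q² cited above is the external-prediction analogue of R²; a minimal sketch on made-up log10 toxicity values (not the study's data):

    ```python
    def q_squared(y_true, y_pred):
        """Q^2 = 1 - PRESS / TSS, with PRESS the sum of squared
        cross-validated prediction errors."""
        mean_y = sum(y_true) / len(y_true)
        press = sum((t - f) ** 2 for t, f in zip(y_true, y_pred))
        tss = sum((t - mean_y) ** 2 for t in y_true)
        return 1.0 - press / tss

    # Toy log10 toxicity values: observed vs. cross-validation predictions
    obs = [0.5, 1.2, 2.0, 2.8, 3.5]
    pred = [0.8, 1.0, 2.3, 2.5, 3.6]
    print(f"Q2 = {q_squared(obs, pred):.2f}")
    ```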

  4. Early warning model based on correlated networks in global crude oil markets

    NASA Astrophysics Data System (ADS)

    Yu, Jia-Wei; Xie, Wen-Jie; Jiang, Zhi-Qiang

    2018-01-01

    Applying network tools to predict and warn of systemic risks provides a novel avenue to manage risks in financial markets. Here, we construct a series of global crude oil correlated networks based on the historical prices of 57 oils covering a period from 1993 to 2012. Two systemic risk indicators are constructed based on the density and modularity of the correlated networks. The local maxima of the risk indicators are found to have the ability to predict the trends of oil prices. In our sample periods, the indicator based on the network density sends five signals and the indicator based on the modularity index sends four signals. The four signals sent by both indicators are able to warn of the drop of future oil prices, and the signal sent only by the network density is followed by a huge rise of oil prices. Our results deepen the application of network measures in building early warning models of systemic risks and can be applied to predict the trends of future prices in financial markets.
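
    A correlated network of this kind links two price series when their correlation exceeds a threshold, and the density indicator is the fraction of possible links present. The three toy series and the 0.7 threshold below are illustrative only, not the paper's 57-oil dataset or its exact construction:

    ```python
    import math

    def pearson(x, y):
        """Pearson correlation of two equal-length series."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        return cov / (sx * sy)

    def network_density(price_series, threshold=0.7):
        """Link two series if |rho| > threshold; density is the share of
        possible links that are present."""
        names = list(price_series)
        links, possible = 0, 0
        for i in range(len(names)):
            for j in range(i + 1, len(names)):
                possible += 1
                if abs(pearson(price_series[names[i]], price_series[names[j]])) > threshold:
                    links += 1
        return links / possible

    # Toy example with three hypothetical oil price series
    series = {
        "WTI":   [50, 52, 54, 53, 55, 57],
        "Brent": [51, 53, 55, 54, 56, 58],   # moves with WTI
        "Dubai": [60, 58, 61, 57, 62, 56],   # mostly noise
    }
    print(f"network density: {network_density(series):.2f}")
    ```

    Tracking this density over rolling windows, and flagging its local maxima, mirrors the warning-signal construction described in the abstract.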

  5. A model study of the Haihe river passenger ferry risk based on AHP

    NASA Astrophysics Data System (ADS)

    Du, Jinyin; Xu, Yanming; Du, Chunzhi; Jin, Zhenhua

    2017-05-01

    The core function of maritime administration is water safety supervision, in which ferries are both the emphasis and the difficulty. In combination with the practical situation of Haihe river passenger ferry operation and management, this paper analyzes Haihe river passenger ferry risk from four aspects, "human, machinery, environment and management", and establishes a ferry risk index system. Using AHP (Analytic Hierarchy Process), a ferry risk evaluation model is established. The model is then applied to evaluate the ferry Zhengyanfa7 on the Tianjin Haihe river crossing, whose safety level is found to lie between "relatively high risk" and "high risk".
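
    AHP turns pairwise comparisons of the four aspects into criterion weights via the principal eigenvector of the comparison matrix, commonly approximated by row geometric means. The matrix below is a hypothetical illustration, not the paper's elicited judgements:

    ```python
    import math

    # Hypothetical pairwise-comparison matrix over (human, machinery,
    # environment, management) on Saaty's 1-9 scale; A[i][j] = importance
    # of criterion i relative to criterion j, with A[j][i] = 1 / A[i][j].
    A = [
        [1.0,   3.0,   5.0, 3.0],
        [1 / 3, 1.0,   3.0, 1.0],
        [1 / 5, 1 / 3, 1.0, 1 / 3],
        [1 / 3, 1.0,   3.0, 1.0],
    ]

    def ahp_weights(matrix):
        """Approximate the principal eigenvector by row geometric means,
        then normalise so the weights sum to one."""
        n = len(matrix)
        gm = [math.prod(row) ** (1.0 / n) for row in matrix]
        total = sum(gm)
        return [g / total for g in gm]

    w = ahp_weights(A)
    print("criterion weights:", [round(x, 3) for x in w])
    ```

    The resulting weights are then combined with scored sub-indices of each aspect to produce the overall ferry risk rating.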

  6. Genotype-Based Association Mapping of Complex Diseases: Gene-Environment Interactions with Multiple Genetic Markers and Measurement Error in Environmental Exposures

    PubMed Central

    Lobach, Iryna; Fan, Ruzong; Carroll, Raymond J.

    2011-01-01

    With the advent of dense single nucleotide polymorphism genotyping, population-based association studies have become the major tools for identifying human disease genes and for fine gene mapping of complex traits. We develop a genotype-based approach for association analysis of case-control studies of gene-environment interactions in the case when environmental factors are measured with error and genotype data are available on multiple genetic markers. To directly use the observed genotype data, we propose two genotype-based models: genotype effect and additive effect models. Our approach offers several advantages. First, the proposed risk functions can directly incorporate the observed genotype data while modeling the linkage disequilibrium information in the regression coefficients, thus eliminating the need to infer haplotype phase. Compared with the haplotype-based approach, an estimating procedure based on the proposed methods can be much simpler and significantly faster. In addition, there is no potential risk due to haplotype phase estimation. Further, by fitting the proposed models, it is possible to analyze the risk alleles/variants of complex diseases, including their dominant or additive effects. To model measurement error, we adopt the pseudo-likelihood method of Lobach et al. [2008]. Performance of the proposed method is examined using simulation experiments. An application of our method is illustrated using a population-based case-control study of the association of calcium intake with the risk of colorectal adenoma development. PMID:21031455

  7. Ecological covariates based predictive model of malaria risk in the state of Chhattisgarh, India.

    PubMed

    Kumar, Rajesh; Dash, Chinmaya; Rani, Khushbu

    2017-09-01

    Because malaria is endemic in the state of Chhattisgarh and is an ecologically dependent mosquito-borne disease, this study aims to identify the ecological covariates of malaria risk in the districts of the state and to build a suitable predictive model based on those predictors, which could assist in developing a weather-based early warning system. This secondary-data analysis used one-month-lagged district-level malaria-positive cases as the response variable and ecological covariates as independent variables, tested with fixed-effect panelled negative binomial regression models. Interactions among the covariates were explored using two-way factorial interactions in the model. Although malaria risk in the state has perennial characteristics, higher parasitic incidence was observed during the rainy and winter seasons. The univariate analysis indicated that malaria incidence risk was statistically significantly associated with rainfall, maximum humidity, minimum temperature, wind speed, and forest cover (p < 0.05). The most efficient predictive model includes forest cover [IRR 1.033 (1.024-1.042)], maximum humidity [IRR 1.016 (1.013-1.018)], and a two-way factorial interaction between district-specific averaged monthly minimum temperature and monthly minimum temperature; monthly minimum temperature was statistically significant [IRR 1.44 (1.231-1.695)], whereas the interaction term had a protective effect [IRR 0.982 (0.974-0.990)] against malaria infection. Forest cover, maximum humidity, minimum temperature and wind speed emerged as potential covariates for predictive models of malaria risk in the state, which could be used efficiently for early warning systems.
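The incidence-rate ratios (IRRs) reported above come from a log-linear count model, in which a coefficient beta enters multiplicatively as IRR = exp(beta). A minimal sketch of how such IRRs combine (the IRR values are those quoted in the abstract; the covariate increments are invented for illustration):

```python
import math

# IRRs quoted in the abstract; in a log-linear count model IRR = exp(beta)
irr_forest = 1.033    # per unit increase in forest cover
irr_humidity = 1.016  # per unit increase in maximum humidity

beta_forest = math.log(irr_forest)
beta_humidity = math.log(irr_humidity)

# Expected multiplicative change in malaria incidence for a hypothetical
# district with +10 units of forest cover and +5 units of maximum humidity
rate_ratio = math.exp(10 * beta_forest + 5 * beta_humidity)
print(round(rate_ratio, 3))
```

The same algebra underlies the interaction term: its IRR below 1 multiplies against the main-effect IRRs, attenuating the temperature effect.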

  8. Probabilistic modeling of percutaneous absorption for risk-based exposure assessments and transdermal drug delivery.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ho, Clifford Kuofei

    Chemical transport through human skin can play a significant role in human exposure to toxic chemicals in the workplace, as well as to chemical/biological warfare agents in the battlefield. The viability of transdermal drug delivery also relies on chemical transport processes through the skin. Models of percutaneous absorption are needed for risk-based exposure assessments and drug-delivery analyses, but previous mechanistic models have been largely deterministic. A probabilistic, transient, three-phase model of percutaneous absorption of chemicals has been developed to assess the relative importance of uncertain parameters and processes that may be important to risk-based assessments. Penetration routes through the skin that were modeled include the following: (1) intercellular diffusion through the multiphase stratum corneum; (2) aqueous-phase diffusion through sweat ducts; and (3) oil-phase diffusion through hair follicles. Uncertainty distributions were developed for the model parameters, and a Monte Carlo analysis was performed to simulate probability distributions of mass fluxes through each of the routes. Sensitivity analyses using stepwise linear regression were also performed to identify model parameters that were most important to the simulated mass fluxes at different times. This probabilistic analysis of percutaneous absorption (PAPA) method has been developed to improve risk-based exposure assessments and transdermal drug-delivery analyses, where parameters and processes can be highly uncertain.
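The Monte Carlo idea described above can be sketched with a toy steady-state flux model J = Kp·C summed over the three penetration routes; all parameter distributions here are assumptions for illustration, not the study's calibrated inputs:

```python
import math, random

random.seed(0)

# Illustrative surface concentration of the chemical on the skin (mg/cm^3)
C = 1.0

def lognormal(median, gsd):
    """Sample a lognormal given its median and geometric standard deviation."""
    return median * math.exp(random.gauss(0.0, math.log(gsd)))

routes = {  # route: (median Kp in cm/h, geometric std dev) -- assumptions
    "stratum_corneum": (1e-3, 3.0),
    "sweat_ducts":     (1e-4, 2.0),
    "hair_follicles":  (5e-4, 2.5),
}

# Propagate parameter uncertainty: total flux = sum of Kp * C over routes
total_flux = []
for _ in range(10_000):
    total_flux.append(sum(lognormal(m, g) * C for m, g in routes.values()))

total_flux.sort()
p50 = total_flux[len(total_flux) // 2]
p95 = total_flux[int(0.95 * len(total_flux))]
print(f"median flux {p50:.2e}, 95th percentile {p95:.2e} mg/cm^2/h")
```

The resulting percentiles are the kind of output a risk-based assessment would carry forward instead of a single deterministic flux.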

  9. Meteorological risks are drivers of environmental innovation in agro-ecosystem management

    NASA Astrophysics Data System (ADS)

    Gobin, Anne; Van de Vijver, Hans; Vanwindekens, Frédéric; de Frutos Cachorro, Julia; Verspecht, Ann; Planchon, Viviane; Buyse, Jeroen

    2017-04-01

    Agricultural crop production is to a great extent determined by weather conditions. The research hypothesis is that meteorological risks act as drivers of environmental innovation in agro-ecosystem management. The methodology comprised five major parts: the hazard, its impact on different agro-ecosystems, vulnerability, risk management and risk communication. Generalized Extreme Value (GEV) theory was used to model annual maxima of meteorological variables based on location, scale and shape parameters that determine the center of the distribution, the spread about the location parameter and the upper-tail decay, respectively. Spatial interpolation of GEV-derived return levels resulted in spatial temperature extremes, precipitation deficits and wet periods. The temporal overlap between extreme weather conditions and sensitive periods in the agro-ecosystem was realised using a bio-physically based modelling framework that couples phenology, a soil water balance and crop growth. Twenty-year return values for drought and waterlogging during different crop stages were related to arable yields. The method helped quantify agricultural production risks and rate both weather-based and crop-based agricultural insurance. The spatial extent of vulnerability was developed from different layers of geo-information, including meteorology, soil-landscapes, crop cover and management. Vulnerability of agro-ecosystems was mapped based on rules set by expert knowledge and implemented with Fuzzy Inference System modelling and Geographical Information System tools. The approach was applied to cropland vulnerability to heavy rain and grassland vulnerability to drought. The level of vulnerability and resilience of an agro-ecosystem was also determined by risk management, which differed across sectors and farm types. A calibrated agro-economic model demonstrated a marked influence of climate-adapted land allocation and crop management on individual utility.
The "chain of risk" approach allowed for investigating the hypothesis that meteorological risks act as drivers for agricultural innovation. Risk types were quantified in terms of probability and distribution, and further distinguished according to production type. Examples of strategies and options were provided at field, farm and policy level using different modelling methods.
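The GEV return-level calculation underlying the 20-year values mentioned above can be sketched directly from the location, scale and shape parameters; the parameter values below are illustrative, not fitted to the study's data:

```python
import math

def gev_return_level(mu, sigma, xi, T):
    """T-year return level of a GEV(mu, sigma, xi): the level exceeded on
    average once every T years."""
    y = -math.log(1.0 - 1.0 / T)  # reduced variate
    if abs(xi) < 1e-9:            # Gumbel limit as xi -> 0
        return mu - sigma * math.log(y)
    return mu + (sigma / xi) * (y ** (-xi) - 1.0)

# 20-year return level for annual-maximum daily precipitation,
# with assumed (illustrative) GEV parameters in mm
level20 = gev_return_level(mu=45.0, sigma=10.0, xi=0.1, T=20)
print(round(level20, 1))
```

Interpolating such return levels across stations is what produces the spatial extremes maps described in the abstract.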

  10. Bladder cancer mapping in Libya based on standardized morbidity ratio and log-normal model

    NASA Astrophysics Data System (ADS)

    Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley

    2017-05-01

    Disease mapping comprises a set of statistical techniques that produce maps of rates based on estimated mortality, morbidity, and prevalence. A traditional approach to measuring the relative risk of a disease is the Standardized Morbidity Ratio (SMR), the ratio of the observed to the expected number of cases in an area, which has the greatest uncertainty if the disease is rare or if the geographical area is small. Therefore, Bayesian models or statistical smoothing based on the log-normal model have been introduced to address this problem with the SMR. This study estimates the relative risk for bladder cancer incidence in Libya from 2006 to 2007 based on the SMR and the log-normal model, which were fitted to the data using WinBUGS software. The study starts with a brief review of these models, beginning with the SMR method and followed by the log-normal model, which is then applied to bladder cancer incidence in Libya. All results are compared using maps and tables. The study concludes that the log-normal model gives better relative risk estimates than the classical method and can overcome the SMR problem when there are no observed bladder cancer cases in an area.
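A minimal sketch of the SMR and its weakness with zero counts, together with a crude log-scale shrinkage standing in for the WinBUGS-fitted log-normal model; all counts below are invented:

```python
import math

# Invented area-level counts: (observed cases, expected cases)
areas = {
    "A": (4, 2.5),
    "B": (0, 1.2),   # zero observed cases: SMR collapses to 0
    "C": (10, 7.8),
}

# Classical estimate: SMR = observed / expected
smr = {k: o / e for k, (o, e) in areas.items()}

# Log-normal-style smoothing idea: work on log(SMR) where defined and
# shrink toward the global mean (a stand-in for the fitted model)
logs = [math.log(s) for s in smr.values() if s > 0]
mu = sum(logs) / len(logs)
shrink = 0.5  # illustrative shrinkage weight
smoothed = {k: math.exp(shrink * (math.log(s) if s > 0 else mu) + (1 - shrink) * mu)
            for k, s in smr.items()}

print({k: round(v, 2) for k, v in smr.items()})
print({k: round(v, 2) for k, v in smoothed.items()})
```

Note how area B gets a positive smoothed relative risk even though its raw SMR is zero, which is exactly the failure mode of the classical method that the abstract highlights.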

  11. Risk-Based Decision Making in a Scientific Issue: A Study of Teachers Discussing a Dilemma through a Microworld

    ERIC Educational Resources Information Center

    Levinson, Ralph; Kent, Phillip; Pratt, David; Kapadia, Ramesh; Yogui, Cristina

    2012-01-01

    Risk has now become a feature of science curricula in many industrialized countries. While risk is conceptualized within a number of different theoretical frameworks, the predominant model used in examination specifications is a utility model in which risk calculations are deemed to be objective through technical expert assessment and where the…

  12. Latent Model Analysis of Substance Use and HIV Risk Behaviors among High-Risk Minority Adults

    ERIC Educational Resources Information Center

    Wang, Min Qi; Matthew, Resa F.; Chiu, Yu-Wen; Yan, Fang; Bellamy, Nikki D.

    2007-01-01

    Objectives: This study evaluated substance use and HIV risk profile using a latent model analysis based on ecological theory, inclusive of a risk and protective factor framework, in sexually active minority adults (N=1,056) who participated in a federally funded substance abuse and HIV prevention health initiative from 2002 to 2006. Methods: Data…

  13. Merging universal and indicated prevention programs: the Fast Track model. Conduct Problems Prevention Research Group.

    PubMed

    2000-01-01

    Fast Track is a multisite, multicomponent preventive intervention for young children at high risk for long-term antisocial behavior. Based on a comprehensive developmental model, this intervention includes a universal-level classroom program plus social-skill training, academic tutoring, parent training, and home visiting to improve competencies and reduce problems in a high-risk group of children selected in kindergarten. The theoretical principles and clinical strategies utilized in the Fast Track Project are described to illustrate the interplay between basic developmental research, the understanding of risk and protective factors, and a research-based model of preventive intervention that integrates universal and indicated models of prevention.

  14. NATIONAL-SCALE ASSESSMENT OF AIR TOXICS RISKS ...

    EPA Pesticide Factsheets

    The national-scale assessment of air toxics risks is a modeling assessment which combines emission inventory development, atmospheric fate and transport modeling, exposure modeling, and risk assessment to characterize the risk associated with inhaling air toxics from outdoor sources. This national-scale effort will be initiated for the base year 1996 and repeated every three years thereafter to track trends and inform program development. It aims to provide a broad-scale understanding of inhalation risks for a subset of atmospherically emitted air toxics, to inform further data-gathering efforts and priority-setting for the EPA's Air Toxics Programs.

  15. Projecting School Psychology Staffing Needs Using a Risk-Adjusted Model.

    ERIC Educational Resources Information Center

    Stellwagen, Kurt

    A model is proposed to project optimal school psychology service ratios based upon the percentages of at risk students enrolled within a given school population. Using the standard 1:1,000 service ratio advocated by The National Association of School Psychologists (NASP) as a starting point, ratios are then adjusted based upon the size of three…

  16. Use of Physiologically Based Pharmacokinetic (PBPK) Models to Quantify the Impact of Human Age and Interindividual Differences in Physiology and Biochemistry Pertinent to Risk (Final Report)

    EPA Science Inventory

    EPA announced the availability of the final report, Use of Physiologically Based Pharmacokinetic (PBPK) Models to Quantify the Impact of Human Age and Interindividual Differences in Physiology and Biochemistry Pertinent to Risk Final Report for Cooperative Agreement. Th...

  17. Rift Valley fever risk map model and seroprevalence in selected wild ungulates and camels from Kenya

    USDA-ARS?s Scientific Manuscript database

    Since the first isolation of Rift Valley fever virus (RVFV) in the 1930s, there have been multiple epizootics and epidemics in animals and humans in sub-Saharan Africa. Prospective climate-based models have recently been developed that flag areas at risk of RVFV transmission in endemic regions based...

  18. Violent reinjury risk assessment instrument (VRRAI) for hospital-based violence intervention programs.

    PubMed

    Kramer, Erik J; Dodington, James; Hunt, Ava; Henderson, Terrell; Nwabuo, Adaobi; Dicker, Rochelle; Juillard, Catherine

    2017-09-01

    Violent injury is the second most common cause of death among 15- to 24-year-olds in the US. Up to 58% of violently injured youth return to the hospital with a second violent injury. Hospital-based violence intervention programs (HVIPs) have been shown to reduce injury recidivism through intensive case management. However, no validated guidelines for risk assessment strategies in the HVIP setting have been reported. We aimed to use qualitative methods to investigate the key components of risk assessments employed by HVIP case managers and to propose a risk assessment model based on this qualitative analysis. An established academic hospital-affiliated HVIP served as the nexus for this research. Thematic saturation was reached with 11 semi-structured interviews and two focus groups conducted with HVIP case managers and key informants identified through snowball sampling. Interactions were analyzed by a four-member team using Nvivo 10, employing the constant comparison method. The risk factors identified were used to create a set of models that were presented in two follow-up focus groups with HVIP case managers and leadership. Eighteen key themes within seven domains (environment, identity, mental health, behavior, conflict, indicators of lower risk, and case management) and 141 potential risk factors for use in the risk assessment framework were identified. The most salient factors were incorporated into eight models that were presented to the HVIP case managers. A 29-item algorithmic structured professional judgment model was chosen. We identified four tiers of risk factors for violent reinjury that were incorporated into a proposed risk assessment instrument, the VRRAI. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. The European Thoracic Surgery Database project: modelling the risk of in-hospital death following lung resection.

    PubMed

    Berrisford, Richard; Brunelli, Alessandro; Rocco, Gaetano; Treasure, Tom; Utley, Martin

    2005-08-01

    To identify pre-operative factors associated with in-hospital mortality following lung resection and to construct a risk model that could be used prospectively to inform decisions and retrospectively to enable fair comparisons of outcomes. Data were submitted to the European Thoracic Surgery Database from 27 units in 14 countries. We analysed data concerning all patients that had a lung resection. Logistic regression was used with a random sample of 60% of cases to identify pre-operative factors associated with in-hospital mortality and to build a model of risk. The resulting model was tested on the remaining 40% of patients. A second model based on age and ppoFEV1% was developed for risk of in-hospital death amongst tumour resection patients. Of the 3426 adult patients that had a first lung resection for whom mortality data were available, 66 died within the same hospital admission. Within the data used for model development, dyspnoea (according to the Medical Research Council classification), ASA (American Society of Anaesthesiologists) score, class of procedure and age were found to be significantly associated with in-hospital death in a multivariate analysis. The logistic model developed on these data displayed predictive value when tested on the remaining data. Two models of the risk of in-hospital death amongst adult patients undergoing lung resection have been developed. The models show predictive value and can be used to discern between high-risk and low-risk patients. Amongst the test data, the model developed for all diagnoses performed well at low risk, underestimated mortality at medium risk and overestimated mortality at high risk. The second model for resection of lung neoplasms was developed after establishing the performance of the first model and so could not be tested robustly. That said, we were encouraged by its performance over the entire range of estimated risk. 
The first of these two models could be regarded as an evaluation based on clinically available criteria while the second uses data obtained from objective measurement. We are optimistic that further model development and testing will provide a tool suitable for case mix adjustment.
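A sketch of how a fitted logistic risk model of this kind is applied prospectively to score patients; the coefficients below are invented for illustration and are not the published model:

```python
import math

# Invented coefficients for the four predictors named in the abstract
coef = {"intercept": -7.0, "dyspnoea": 0.35, "asa": 0.60,
        "procedure_class": 0.45, "age_per_year": 0.04}

def predicted_mortality(dyspnoea, asa, procedure_class, age):
    """In-hospital mortality probability from the linear predictor via the
    logistic link p = 1 / (1 + exp(-z))."""
    z = (coef["intercept"] + coef["dyspnoea"] * dyspnoea + coef["asa"] * asa
         + coef["procedure_class"] * procedure_class + coef["age_per_year"] * age)
    return 1.0 / (1.0 + math.exp(-z))

low = predicted_mortality(dyspnoea=0, asa=1, procedure_class=1, age=50)
high = predicted_mortality(dyspnoea=3, asa=3, procedure_class=2, age=75)
print(f"low-risk {low:.3f}, high-risk {high:.3f}")
```

Thresholding or binning such predicted probabilities is how the model discerns high-risk from low-risk patients, and comparing them with observed outcomes is the basis for the calibration behaviour the abstract reports.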

  20. Spatial analysis of plague in California: niche modeling predictions of the current distribution and potential response to climate change

    PubMed Central

    Holt, Ashley C; Salkeld, Daniel J; Fritz, Curtis L; Tucker, James R; Gong, Peng

    2009-01-01

    Background Plague, caused by the bacterium Yersinia pestis, is a public and wildlife health concern in California and the western United States. This study explores the spatial characteristics of positive plague samples in California and tests Maxent, a machine-learning method that can be used to develop niche-based models from presence-only data, for mapping the potential distribution of plague foci. Maxent models were constructed using geocoded seroprevalence data from surveillance of California ground squirrels (Spermophilus beecheyi) as case points and Worldclim bioclimatic data as predictor variables, and compared and validated using area under the receiver operating characteristic curve (AUC) statistics. Additionally, model results were compared to locations of positive and negative coyote (Canis latrans) samples, in order to determine the correlation between Maxent model predictions and areas of plague risk as determined via wild carnivore surveillance. Results Models of plague activity in California ground squirrels, based on recent climate conditions, accurately identified case locations (AUC of 0.913 to 0.948) and were significantly correlated with coyote samples. The final models were used to identify potential plague risk areas based on an ensemble of six future climate scenarios. These models suggest that by 2050, climate conditions may reduce plague risk in the southern parts of California and increase risk along the northern coast and Sierras. Conclusion Because different modeling approaches can yield substantially different results, care should be taken when interpreting future model predictions. Nonetheless, niche modeling can be a useful tool for exploring and mapping the potential response of plague activity to climate change, and the risk maps produced here can help public health managers decide where to allocate surveillance resources.
In addition, Maxent model results were significantly correlated with coyote samples, indicating that carnivore surveillance programs will continue to be important for tracking the response of plague to future climate conditions. PMID:19558717
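The AUC validation statistic used above has a simple rank interpretation: the probability that a randomly chosen case location outscores a randomly chosen background point. A minimal sketch with invented suitability scores:

```python
# Invented predicted suitability scores from a presence-only model
positives = [0.91, 0.85, 0.78, 0.60]  # scores at known case locations
negatives = [0.40, 0.55, 0.62, 0.30]  # scores at background/absence points

# AUC = P(random positive outranks random negative), ties count 1/2
pairs = [(p, n) for p in positives for n in negatives]
auc = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs) / len(pairs)
print(round(auc, 3))
```

An AUC near 0.5 means the model ranks cases no better than chance; values in the 0.91-0.95 range reported in the abstract indicate strong discrimination.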

  1. Usefulness of cancer-free survival in estimating the lifetime attributable risk of cancer incidence from radiation exposure.

    PubMed

    Seo, Songwon; Lee, Dal Nim; Jin, Young Woo; Lee, Won Jin; Park, Sunhoo

    2018-05-11

    Risk projection models estimating the lifetime cancer risk from radiation exposure are generally based on exposure dose, age at exposure, attained age, gender and study-population-specific factors such as baseline cancer risks and survival rates. Because such models have mostly been based on the Life Span Study cohort of Japanese atomic bomb survivors, the baseline risks and survival rates in the target population should be considered when applying the cancer risk. The survival function used in the risk projection models that are commonly used in the radiological protection field to estimate the cancer risk from medical or occupational exposure is based on all-cause mortality. Thus, it may not be accurate for estimating the lifetime risk of high-incidence but not life-threatening cancer with a long-term survival rate. Herein, we present the lifetime attributable risk (LAR) estimates of all solid cancers except thyroid cancer, thyroid cancer, and leukemia except chronic lymphocytic leukemia in South Korea for lifetime exposure to 1 mGy per year using the cancer-free survival function, as recently applied in the Fukushima health risk assessment by the World Health Organization. Compared with the estimates of LARs using an overall survival function solely based on all-cause mortality, the LARs of all solid cancers except thyroid cancer, and thyroid cancer evaluated using the cancer-free survival function, decreased by approximately 13% and 1% for men and 9% and 5% for women, respectively. The LAR of leukemia except chronic lymphocytic leukemia barely changed for either gender owing to the small absolute difference between its incidence and mortality. Given that many cancers have a high curative rate and low mortality rate, using a survival function solely based on all-cause mortality may cause an overestimation of the lifetime risk of cancer incidence. The lifetime fractional risk was robust against the choice of survival function.
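Why the choice of survival function matters can be sketched directly: the LAR sums age-specific excess risk weighted by the probability of surviving (here, cancer-free) to each age. All numbers below are illustrative, not the study's estimates:

```python
# Illustrative age bands and a flat excess cancer rate per band
ages = list(range(20, 90, 10))
excess_rate = [1e-4] * len(ages)

# Illustrative survival curves: all-cause survival, and the stricter
# cancer-free survival (surviving AND not yet having developed cancer)
surv_all_cause = [0.99, 0.97, 0.94, 0.89, 0.80, 0.65, 0.45]
surv_cancer_free = [s * 0.97 for s in surv_all_cause]

# LAR as a survival-weighted sum of excess risk over attained ages
lar_overall = sum(r * s for r, s in zip(excess_rate, surv_all_cause))
lar_cancer_free = sum(r * s for r, s in zip(excess_rate, surv_cancer_free))

print(f"{lar_overall:.2e} vs {lar_cancer_free:.2e}")
```

Because the cancer-free survival curve lies below the all-cause curve for high-incidence, low-mortality cancers, the weighted sum shrinks, which is the direction of the effect reported in the abstract.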

  2. Multisensor satellite data for water quality analysis and water pollution risk assessment: decision making under deep uncertainty with fuzzy algorithm in framework of multimodel approach

    NASA Astrophysics Data System (ADS)

    Kostyuchenko, Yuriy V.; Sztoyka, Yulia; Kopachevsky, Ivan; Artemenko, Igor; Yuschenko, Maxim

    2017-10-01

    A multi-model approach for remote sensing data processing and interpretation is described. The problem of satellite data utilization in a multi-model approach for socio-ecological risk assessment is formally defined, and a method for utilizing observation, measurement and modelling data within this framework is described. The methodology and models for risk assessment within a decision-support approach are defined and described. A method for water quality assessment using satellite observation data, based on analysis of the spectral reflectance of water areas, is described. Spectral signatures of freshwater bodies and offshore waters are analyzed, and correlations between spectral reflectance, pollution and selected water quality parameters are analyzed and quantified. Data from the MODIS, MISR, AIRS and Landsat sensors received in 2002-2014 were utilized, verified by in-field spectrometry and laboratory measurements. A fuzzy-logic-based approach for decision support on water quality degradation risk is discussed: the decision on the water quality category is made by a fuzzy algorithm using a limited set of uncertain parameters. Data from satellite observations, field measurements and modelling are utilized in the framework of the proposed approach. It is shown that this algorithm allows estimation of water quality degradation rates and pollution risks. Problems of constructing spatial and temporal distributions of the calculated parameters, as well as the problem of data regularization, are discussed. Using the proposed approach, maps of surface water pollution risk from point and diffuse sources are calculated and discussed.
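The fuzzy decision step described above can be sketched with triangular membership functions over a single reflectance-derived parameter; the categories, thresholds and input value are all illustrative assumptions:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b over the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify(turbidity_proxy):
    """Assign a water-quality category by maximum membership."""
    memberships = {
        "good":     tri(turbidity_proxy, -0.1, 0.0, 0.4),
        "degraded": tri(turbidity_proxy, 0.2, 0.5, 0.8),
        "polluted": tri(turbidity_proxy, 0.6, 1.0, 1.1),
    }
    return max(memberships, key=memberships.get), memberships

category, m = classify(0.55)
print(category, {k: round(v, 2) for k, v in m.items()})
```

A full fuzzy inference system would combine several such uncertain inputs through a rule base before defuzzifying, but the single-parameter case shows how a category decision survives input uncertainty.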

  3. The Genetics Panel of the NAS BEAR I Committee (1956): epistolary evidence suggests self-interest may have prompted an exaggeration of radiation risks that led to the adoption of the LNT cancer risk assessment model.

    PubMed

    Calabrese, Edward J

    2014-09-01

    This paper extends a series of historical papers which demonstrated that the linear-no-threshold (LNT) model for cancer risk assessment was founded on ideologically based scientific deceptions by key radiation genetics leaders. Based on an assessment of recently uncovered personal correspondence, it is shown that some members of the United States (US) National Academy of Sciences (NAS) Biological Effects of Atomic Radiation I (BEAR I) Genetics Panel were motivated by self-interest to exaggerate risks to promote their science and personal/professional agenda. Such activities have profound implications for public policy and may have had a significant impact on the adoption of the LNT model for cancer risk assessment.

  4. Risk analysis based on hazards interactions

    NASA Astrophysics Data System (ADS)

    Rossi, Lauro; Rudari, Roberto; Trasforini, Eva; De Angeli, Silvia; Becker, Joost

    2017-04-01

    Despite an increasing need for open, transparent, and credible multi-hazard risk assessment methods, models, and tools, the availability of comprehensive risk information needed to inform disaster risk reduction is limited, and the level of interaction across hazards is not systematically analysed. Risk assessment methodologies for different hazards often produce risk metrics that are not comparable. Hazard interactions (the consecutive occurrence of two or more different events) are generally neglected, resulting in strongly underestimated risk in the most exposed areas. This study presents cases of interaction between different hazards, showing how subsidence can affect coastal and river flood risk (Jakarta and Bandung, Indonesia) or how flood risk is modified after a seismic event (Italy). The analysis of well-documented real study cases, based on a combination of Earth Observation and in-situ data, serves as a basis for the formalisation of a multi-hazard methodology, identifying gaps and research frontiers. Multi-hazard risk analysis is performed through the RASOR platform (Rapid Analysis and Spatialisation Of Risk). A scenario-driven query system allows users to simulate future scenarios based on existing and assumed conditions, to compare them with historical scenarios, and to model multi-hazard risk both before and during an event (www.rasor.eu).

  5. Combining operational models and data into a dynamic vessel risk assessment tool for coastal regions

    NASA Astrophysics Data System (ADS)

    Fernandes, R.; Braunschweig, F.; Lourenço, F.; Neves, R.

    2016-02-01

    The technological evolution in terms of computational capacity, data acquisition systems, numerical modelling and operational oceanography is supplying opportunities for designing and building holistic approaches and complex tools for newer and more efficient management (planning, prevention and response) of coastal water pollution risk events. A combined methodology has been developed to dynamically estimate time- and space-variable individual vessel accident risk levels and shoreline contamination risk from ships, integrating numerical metocean forecasts and oil spill simulations with vessel tracking automatic identification systems (AIS). The risk rating combines the likelihood of an oil spill occurring from a vessel navigating in a study area - the Portuguese continental shelf - with the assessed consequences to the shoreline. The spill likelihood is based on dynamic marine weather conditions and statistical information from previous accidents. The shoreline consequences reflect the virtual spilled oil amount reaching the shoreline and its environmental and socio-economic vulnerabilities. The oil reaching the shoreline is quantified with an oil spill fate and behaviour model running multiple virtual spills from vessels over time, or, as an alternative, with a correction factor based on vessel distance from the coast. Shoreline risks can be computed in real time or from previously obtained data. Results show the ability of the proposed methodology to estimate risk with proper sensitivity to dynamic metocean conditions and to oil transport behaviour. The integration of metocean and oil spill models with coastal vulnerability and AIS data in the quantification of risk enhances maritime situational awareness and decision support, providing a more realistic approach to the assessment of shoreline impacts.
The risk assessment from historical data can help find typical risk patterns ("hot spots") or support sensitivity analyses for specific conditions, whereas real-time risk levels can be used in the prioritization of individual ships and geographical areas, strategic tug positioning, and the implementation of dynamic risk-based vessel traffic monitoring.
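The combined rating described above (spill likelihood × shoreline consequence) can be sketched per vessel; all input values are invented for illustration:

```python
# Per-vessel inputs (all illustrative):
# (name, spill likelihood, virtual oil reaching shore [t], shoreline vulnerability 0-1)
vessels = [
    ("tanker_A", 0.002, 120.0, 0.9),
    ("cargo_B",  0.004, 15.0, 0.4),
    ("ferry_C",  0.001, 2.0, 0.8),
]

# Risk rating = likelihood x (oil reaching shore weighted by vulnerability)
risk = {name: p * oil * vuln for name, p, oil, vuln in vessels}
ranked = sorted(risk, key=risk.get, reverse=True)
print(ranked, {k: round(v, 4) for k, v in risk.items()})
```

In the operational system each factor is itself dynamic (forecast-driven likelihood, modelled oil transport), so the ranking changes in time, which is what enables the prioritization uses listed above.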

  6. A quantitative assessment of risks of heavy metal residues in laundered shop towels and their use by workers.

    PubMed

    Connor, Kevin; Magee, Brian

    2014-10-01

    This paper presents a risk assessment of exposure to metal residues in laundered shop towels by workers. The concentrations of 27 metals measured in a synthetic sweat leachate were used to estimate the releasable quantity of metals which could be transferred to workers' skin. Worker exposure was evaluated quantitatively with an exposure model that focused on towel-to-hand transfer and subsequent hand-to-food or -mouth transfers. The exposure model was based on conservative, but reasonable assumptions regarding towel use and default exposure factor values from the published literature or regulatory guidance. Transfer coefficients were derived from studies representative of the exposures to towel users. Contact frequencies were based on assumed high-end use of shop towels, but constrained by a theoretical maximum dermal loading. The risk estimates for workers developed for all metals were below applicable regulatory risk benchmarks. The risk assessment for lead utilized the Adult Lead Model and concluded that predicted lead intakes do not constitute a significant health hazard based on potential worker exposures. Uncertainties are discussed in relation to the overall confidence in the exposure estimates developed for each exposure pathway and the likelihood that the exposure model is under- or overestimating worker exposures and risk. Copyright © 2014 Elsevier Inc. All rights reserved.
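The towel-to-hand and hand-to-mouth exposure algebra described above can be sketched as a chain of multiplicative transfer factors; every parameter value below is an illustrative assumption, not one of the study's defaults:

```python
# Illustrative inputs for one metal (all values assumed)
releasable = 0.5      # releasable metal on towel surface (ug/cm^2)
towel_hand_tc = 0.10  # towel-to-hand transfer coefficient (fraction)
hand_mouth_tc = 0.05  # hand-to-mouth transfer per contact (fraction)
contact_area = 100.0  # hand contact area per event (cm^2)
events_per_day = 20   # hand-to-mouth contact frequency
body_weight = 80.0    # kg

# Chain the transfers: towel -> hand loading -> ingested intake -> dose
dermal_loading = releasable * towel_hand_tc           # ug/cm^2 on hands
intake = dermal_loading * hand_mouth_tc * contact_area * events_per_day  # ug/day
add = intake / body_weight / 1000.0                   # average daily dose, mg/kg-day
print(round(add, 5))
```

The resulting dose would then be compared against a reference dose or other regulatory benchmark, and a theoretical maximum dermal loading would cap the transfer chain, as the abstract notes.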

  7. Analyzing systemic risk using non-linear marginal expected shortfall and its minimum spanning tree

    NASA Astrophysics Data System (ADS)

    Song, Jae Wook; Ko, Bonggyun; Chang, Woojin

    2018-02-01

    The aim of this paper is to propose a new theoretical framework for analyzing systemic risk using the marginal expected shortfall (MES) and its correlation-based minimum spanning tree (MST). First, we develop two parametric models of MES with closed-form solutions based on the Capital Asset Pricing Model. Our models are derived from a non-symmetric quadratic form, which allows them to capture the non-linear relationship between stock and market returns. Second, considering the US financial system as a benchmark, we present evidence of the utility of our models and of a possible association between the non-linear relationship and the emergence of severe systemic risk. In this context, the evolution of MES can also be regarded as a reasonable proxy of systemic risk. Finally, we analyze the structural properties of systemic risk using the MST based on the computed series of MES. The topology of the MST conveys the presence of sectoral clustering and strong co-movements of systemic risk led by a few hubs during the crisis. Specifically, we find that the Depositories are the majority sector leading the connections during the non-crisis period, whereas the Broker-Dealers lead during the crisis period.
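The correlation-based MST construction can be sketched with the standard distance transform d = sqrt(2(1 − ρ)) and Prim's algorithm; the correlation matrix and institution names are invented:

```python
import math

# Invented correlation matrix between MES series of four institutions
names = ["bank1", "bank2", "broker1", "broker2"]
rho = [
    [1.0, 0.8, 0.3, 0.2],
    [0.8, 1.0, 0.4, 0.3],
    [0.3, 0.4, 1.0, 0.7],
    [0.2, 0.3, 0.7, 1.0],
]

# Map correlations to distances: highly correlated series are "close"
dist = [[math.sqrt(2.0 * (1.0 - r)) for r in row] for row in rho]

# Prim's algorithm: grow the tree one cheapest crossing edge at a time
in_tree, edges = {0}, []
while len(in_tree) < len(names):
    i, j = min(((i, j) for i in in_tree for j in range(len(names))
                if j not in in_tree), key=lambda e: dist[e[0]][e[1]])
    edges.append((names[i], names[j]))
    in_tree.add(j)

print(edges)
```

In the resulting tree the two banks and the two broker-dealers pair off, which is the kind of sectoral clustering the abstract describes; hubs appear as nodes with many tree edges.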

  8. A risk explicit interval linear programming model for uncertainty-based environmental economic optimization in the Lake Fuxian watershed, China.

    PubMed

    Zhang, Xiaoling; Huang, Kai; Zou, Rui; Liu, Yong; Yu, Yajuan

    2013-01-01

    The conflict between water environment protection and economic development has brought severe water pollution and restricted sustainable development in the watershed. A risk explicit interval linear programming (REILP) method was used to solve the integrated watershed environmental-economic optimization problem. Interval linear programming (ILP) and REILP models for uncertainty-based environmental economic optimization at the watershed scale were developed for the management of the Lake Fuxian watershed, China. Scenario analysis was introduced into the model solution process to ensure the practicality and operability of the optimization schemes. Decision makers' preferences for risk levels can be expressed by inputting different discrete aspiration-level values into the REILP model for three periods under two scenarios. By balancing the optimal system returns and corresponding system risks, decision makers can develop an efficient industrial restructuring scheme based directly on the window of "low risk and high return efficiency" in the trade-off curve. The representative schemes at the turning points of the two scenarios were interpreted and compared to identify a preferable planning alternative with relatively low risks and nearly maximum benefits. This study provides new insights and proposes a tool, REILP, for decision makers seeking an effective environmental economic optimization scheme in integrated watershed management.
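The "turning point" selection on the risk-return trade-off curve can be sketched as finding where the marginal return per unit of added risk collapses; the aspiration-level points and threshold below are illustrative:

```python
# Illustrative (system risk, system return) pairs for increasing
# aspiration levels, tracing out a trade-off curve
curve = [
    (0.05, 100.0), (0.10, 140.0), (0.20, 165.0), (0.40, 172.0), (0.80, 175.0),
]

# Marginal return per unit of additional risk between consecutive points
slopes = [(curve[k + 1][1] - curve[k][1]) / (curve[k + 1][0] - curve[k][0])
          for k in range(len(curve) - 1)]

# Turning point: first step where the marginal gain drops below a threshold,
# i.e. the edge of the "low risk and high return efficiency" window
threshold = 100.0  # illustrative
turn = next(k for k, s in enumerate(slopes) if s < threshold)
risk_level, benefit = curve[turn]
print(risk_level, benefit, [round(s, 1) for s in slopes])
```

Beyond the turning point, accepting substantially more risk buys almost no additional return, so the scheme at that point is the natural planning alternative.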

  9. A Risk Explicit Interval Linear Programming Model for Uncertainty-Based Environmental Economic Optimization in the Lake Fuxian Watershed, China

    PubMed Central

    Zou, Rui; Liu, Yong; Yu, Yajuan

    2013-01-01

    The conflict between water environment protection and economic development has brought severe water pollution and restricted sustainable development in the watershed. A risk explicit interval linear programming (REILP) method was used to solve the integrated watershed environmental-economic optimization problem. Interval linear programming (ILP) and REILP models for uncertainty-based environmental economic optimization at the watershed scale were developed for the management of the Lake Fuxian watershed, China. Scenario analysis was introduced into the model solution process to ensure the practicality and operability of the optimization schemes. Decision makers’ preferences for risk levels can be expressed by inputting different discrete aspiration level values into the REILP model in three periods under two scenarios. By balancing the optimal system returns and the corresponding system risks, decision makers can develop an efficient industrial restructuring scheme based directly on the window of “low risk and high return efficiency” in the trade-off curve. The representative schemes at the turning points of the two scenarios were interpreted and compared to identify a preferable planning alternative, which has relatively low risk and nearly maximal benefits. This study provides new insights and proposes a tool, REILP, for decision makers to develop an effective environmental economic optimization scheme in integrated watershed management. PMID:24191144

  10. Improved performance of epidemiologic and genetic risk models for rheumatoid arthritis serologic phenotypes using family history

    PubMed Central

    Sparks, Jeffrey A.; Chen, Chia-Yen; Jiang, Xia; Askling, Johan; Hiraki, Linda T.; Malspeis, Susan; Klareskog, Lars; Alfredsson, Lars; Costenbader, Karen H.; Karlson, Elizabeth W.

    2014-01-01

    Objective To develop and validate rheumatoid arthritis (RA) risk models based on family history, epidemiologic factors, and known genetic risk factors. Methods We developed and validated models for RA based on known RA risk factors, among women in two cohorts: the Nurses’ Health Study (NHS, 381 RA cases and 410 controls) and the Epidemiological Investigation of RA (EIRA, 1244 RA cases and 971 controls). Model discrimination was evaluated using the area under the receiver operating characteristic curve (AUC) in logistic regression models for the study population and for those with positive family history. The joint effect of family history with genetics, smoking, and body mass index (BMI) was evaluated using logistic regression models to estimate odds ratios (OR) for RA. Results The complete model including family history, epidemiologic risk factors, and genetics demonstrated AUCs of 0.74 for seropositive RA in NHS and 0.77 for anti-citrullinated protein antibody (ACPA)-positive RA in EIRA. Among women with positive family history, discrimination was excellent for complete models for seropositive RA in NHS (AUC 0.82) and ACPA-positive RA in EIRA (AUC 0.83). Positive family history, high genetic susceptibility, smoking, and increased BMI had an OR of 21.73 for ACPA-positive RA. Conclusions We developed models for seropositive and seronegative RA phenotypes based on family history, epidemiologic and genetic factors. Among those with positive family history, models utilizing epidemiologic and genetic factors were highly discriminatory for seropositive and seronegative RA. Assessing epidemiological and genetic factors among those with positive family history may identify individuals suitable for RA prevention strategies. PMID:24685909
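    Model discrimination via the AUC, as reported above, equals the probability that a randomly chosen case receives a higher predicted risk than a randomly chosen control. A minimal pairwise sketch of that equivalence (generic, not the study's code; scores are invented):

    ```python
    def auc(case_scores, control_scores):
        """Empirical AUC: fraction of case/control pairs in which the case
        scores higher, with ties counted as half (Mann-Whitney form)."""
        wins = 0.0
        for c in case_scores:
            for k in control_scores:
                if c > k:
                    wins += 1.0
                elif c == k:
                    wins += 0.5
        return wins / (len(case_scores) * len(control_scores))
    ```

    Perfect separation gives 1.0; a model no better than chance gives about 0.5, which frames the reported AUCs of 0.74-0.83.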

  11. Developing a suitable model for supplier selection based on supply chain risks: an empirical study from Iranian pharmaceutical companies.

    PubMed

    Mehralian, Gholamhossein; Rajabzadeh Gatari, Ali; Morakabati, Mohadese; Vatanpour, Hossein

    2012-01-01

    The supply chain represents the critical link between the development of a new product and the market in the pharmaceutical industry. Over the years, improvements made in supply chain operations have focused largely on ways to reduce cost and gain efficiencies of scale. In addition, powerful regulatory and market forces have provided new incentives for pharmaceutical firms to rethink the way they produce and distribute products, and to re-imagine the role of the supply chain in driving strategic growth, brand differentiation and economic value in the health continuum. The purpose of this paper is to formulate the basic factors involved in risk analysis of the pharmaceutical industry, and to determine the factors involved in supplier selection and their priorities. This paper is based on the results of a literature review, acquisition of expert opinion, statistical analysis, and the application of MADM models to data gathered from distributed questionnaires. The model consists of the following steps and components: first, the factors involved in supply chain risks are determined; based on these, a framework is constructed; then, according to the results of the statistical analysis and the MADM models, the risk factors are formulated. The paper determines the main components and influential factors involved in supply chain risks. Results showed that delivery risk can make an important contribution to mitigating the risk of the pharmaceutical industry.

  12. Developing a Suitable Model for Supplier Selection Based on Supply Chain Risks: An Empirical Study from Iranian Pharmaceutical Companies

    PubMed Central

    Mehralian, Gholamhossein; Rajabzadeh Gatari, Ali; Morakabati, Mohadese; Vatanpour, Hossein

    2012-01-01

    The supply chain represents the critical link between the development of a new product and the market in the pharmaceutical industry. Over the years, improvements made in supply chain operations have focused largely on ways to reduce cost and gain efficiencies of scale. In addition, powerful regulatory and market forces have provided new incentives for pharmaceutical firms to rethink the way they produce and distribute products, and to re-imagine the role of the supply chain in driving strategic growth, brand differentiation and economic value in the health continuum. The purpose of this paper is to formulate the basic factors involved in risk analysis of the pharmaceutical industry, and to determine the factors involved in supplier selection and their priorities. This paper is based on the results of a literature review, acquisition of experts’ opinion, statistical analysis, and the application of MADM models to data gathered from distributed questionnaires. The model consists of the following steps and components: first, the factors involved in supply chain risks are determined; based on these, a framework is constructed; then, according to the results of the statistical analysis and the MADM models, the risk factors are formulated. The paper determines the main components and influential factors involved in supply chain risks. Results showed that delivery risk can make an important contribution to mitigating the risk of the pharmaceutical industry. PMID:24250442

  13. Variance computations for functional of absolute risk estimates.

    PubMed

    Pfeiffer, R M; Petracci, E

    2011-07-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.
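    The bootstrap comparator mentioned above can be sketched generically (the influence-function formulas themselves are paper-specific and not reproduced here); the estimator and data below are illustrative:

    ```python
    import random
    import statistics

    def bootstrap_variance(data, estimator, n_boot=2000, seed=0):
        """Bootstrap variance of an estimator: resample the data with
        replacement, re-apply the estimator, take the variance of replicates."""
        rng = random.Random(seed)
        replicates = []
        for _ in range(n_boot):
            sample = [rng.choice(data) for _ in data]
            replicates.append(estimator(sample))
        return statistics.variance(replicates)
    ```

    For a binary "event" indicator with observed proportion p, the result should sit near the analytic value p(1-p)/n, which is the kind of agreement the paper checks against its influence-function estimates.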

  14. Variance computations for functional of absolute risk estimates

    PubMed Central

    Pfeiffer, R.M.; Petracci, E.

    2011-01-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates. PMID:21643476

  15. Validation in the Absence of Observed Events.

    PubMed

    Lathrop, John; Ezell, Barry

    2016-04-01

    This article addresses the problem of validating models in the absence of observed events, in the area of weapons of mass destruction terrorism risk assessment. We address that problem with a broadened definition of "validation," based on stepping "up" a level to considering the reason why decisionmakers seek validation, and from that basis redefine validation as testing how well the model can advise decisionmakers in terrorism risk management decisions. We develop that into two conditions: validation must be based on cues available in the observable world; and it must focus on what can be done to affect that observable world, i.e., risk management. That leads to two foci: (1) the real-world risk generating process, and (2) best use of available data. Based on our experience with nine WMD terrorism risk assessment models, we then describe three best use of available data pitfalls: SME confidence bias, lack of SME cross-referencing, and problematic initiation rates. Those two foci and three pitfalls provide a basis from which we define validation in this context in terms of four tests--Does the model: … capture initiation? … capture the sequence of events by which attack scenarios unfold? … consider unanticipated scenarios? … consider alternative causal chains? Finally, we corroborate our approach against three validation tests from the DOD literature: Is the model a correct representation of the process to be simulated? To what degree are the model results comparable to the real world? Over what range of inputs are the model results useful? © 2015 Society for Risk Analysis.

  16. A Risk Prediction Model for Sporadic CRC Based on Routine Lab Results.

    PubMed

    Boursi, Ben; Mamtani, Ronac; Hwang, Wei-Ting; Haynes, Kevin; Yang, Yu-Xiao

    2016-07-01

    Current risk scores for colorectal cancer (CRC) are based on demographic and behavioral factors and have limited predictive value. To develop a novel risk prediction model for sporadic CRC using clinical and laboratory data in electronic medical records, we conducted a nested case-control study in a UK primary care database. Cases included those with a diagnostic code of CRC, aged 50-85. Each case was matched with four controls using incidence density sampling. CRC predictors were examined using univariate conditional logistic regression. Variables with p value <0.25 in the univariate analysis were further evaluated in multivariate models using backward elimination. Discrimination was assessed using the receiver operating characteristic curve. Calibration was evaluated using McFadden's R2. The net reclassification index (NRI) associated with incorporation of laboratory results was calculated. Results were internally validated. A model similar to existing CRC prediction models, which included age, sex, height, obesity, ever smoking, alcohol dependence, and previous screening colonoscopy, had an AUC of 0.58 (0.57-0.59) with poor goodness of fit. A laboratory-based model including hematocrit, MCV, lymphocytes, and neutrophil-lymphocyte ratio (NLR) had an AUC of 0.76 (0.76-0.77) and a McFadden's R2 of 0.21, with an NRI of 47.6 %. A combined model including sex, hemoglobin, MCV, white blood cells, platelets, NLR, and oral hypoglycemic use had an AUC of 0.80 (0.79-0.81), with a McFadden's R2 of 0.27 and an NRI of 60.7 %. Similar results were shown in an internal validation set. A laboratory-based risk model had good predictive power for sporadic CRC risk.
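    The two-category net reclassification index used above can be sketched as follows; the threshold and the toy data are illustrative, not those of the study:

    ```python
    def net_reclassification_index(old_risk, new_risk, outcome, threshold=0.5):
        """Two-category NRI: among events, credit upward reclassification by the
        new model; among non-events, credit downward reclassification."""
        def category(p):
            return 1 if p >= threshold else 0
        up_e = down_e = up_n = down_n = 0
        events = sum(outcome)
        nonevents = len(outcome) - events
        for old, new, y in zip(old_risk, new_risk, outcome):
            move = category(new) - category(old)
            if y == 1:
                if move > 0:
                    up_e += 1
                elif move < 0:
                    down_e += 1
            else:
                if move > 0:
                    up_n += 1
                elif move < 0:
                    down_n += 1
        return (up_e - down_e) / events + (down_n - up_n) / nonevents
    ```

    A positive NRI (such as the reported 47.6 % and 60.7 %) means the new model moves cases up and controls down on balance.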

  17. Physics-based Entry, Descent and Landing Risk Model

    NASA Technical Reports Server (NTRS)

    Gee, Ken; Huynh, Loc C.; Manning, Ted

    2014-01-01

    A physics-based risk model was developed to assess the risk associated with thermal protection system failures during the entry, descent and landing phase of a manned spacecraft mission. In the model, entry trajectories were computed using a three-degree-of-freedom trajectory tool, the aerothermodynamic heating environment was computed using an engineering-level computational tool and the thermal response of the TPS material was modeled using a one-dimensional thermal response tool. The model was capable of modeling the effect of micrometeoroid and orbital debris impact damage on the TPS thermal response. A Monte Carlo analysis was used to determine the effects of uncertainties in the vehicle state at Entry Interface, aerothermodynamic heating and material properties on the performance of the TPS design. The failure criterion was set as a temperature limit at the bondline between the TPS and the underlying structure. Both direct computation and response surface approaches were used to compute the risk. The model was applied to a generic manned space capsule design. The effect of material property uncertainty and MMOD damage on risk of failure were analyzed. A comparison of the direct computation and response surface approach was undertaken.
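    The direct-computation Monte Carlo step can be sketched generically: sample uncertain inputs, propagate them through a response model, and count exceedances of the bondline temperature limit. The linear "thermal response" and every parameter value below are placeholders, not the trajectory, aerothermodynamic, or material tools used in the paper:

    ```python
    import random

    def tps_failure_probability(n_trials=100_000, temp_limit=250.0, seed=1):
        """Monte Carlo sketch of a failure criterion set as a bondline
        temperature limit; the response model here is a toy stand-in."""
        rng = random.Random(seed)
        failures = 0
        for _ in range(n_trials):
            heat_load = rng.gauss(100.0, 15.0)    # uncertain heating (notional units)
            conductivity = rng.uniform(0.8, 1.2)  # uncertain material property factor
            bondline_temp = 150.0 + heat_load * conductivity  # toy thermal response
            if bondline_temp > temp_limit:
                failures += 1
        return failures / n_trials
    ```

    A response-surface approach would replace the inner response computation with a cheap fitted surrogate, which is the trade-off the paper compares.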

  18. Effects of the Communities That Care Model in Pennsylvania on Change in Adolescent Risk and Problem Behaviors

    PubMed Central

    Jones, Damon; Greenberg, Mark T.; Osgood, D. Wayne; Bontempo, Daniel

    2015-01-01

    Despite the public health burden of adolescent substance use, delinquency, and other problem behavior, few comprehensive models of disseminating evidence-based prevention programs to communities have demonstrated positive youth outcomes at a population level, capacity to maintain program fidelity, and sustainability. We examined whether the Communities That Care (CTC; Hawkins and Catalano 1992) model had a positive impact on risk/protective factors and academic and behavioral outcomes among adolescents in a quasi-experimental effectiveness study. We conducted a longitudinal study of CTC in Pennsylvania utilizing biannual surveillance data collected through anonymous in-school student surveys. We utilized multilevel models to examine CTC impact on change in risk/protective factors, grades, delinquency, and substance use over time. Youth in CTC communities demonstrated less growth in delinquency, but not substance use, than youth in non-CTC communities. Levels of risk factors increased more slowly, and protective factors and academic performance decreased more slowly, among CTC community grade-cohorts that were exposed to evidence-based, universal prevention programs than comparison grade cohorts. Community coalitions can affect adolescent risk and protective behaviors at a population level when evidence-based programs are utilized. CTC represents an effective model for disseminating such programs. PMID:20020209

  19. Radiation Hormesis: Historical Perspective and Implications for Low-Dose Cancer Risk Assessment

    PubMed Central

    Vaiserman, Alexander M.

    2010-01-01

    Current guidelines for limiting exposure of humans to ionizing radiation are based on the linear-no-threshold (LNT) hypothesis for radiation carcinogenesis under which cancer risk increases linearly as the radiation dose increases. With the LNT model even a very small dose could cause cancer and the model is used in establishing guidelines for limiting radiation exposure of humans. A slope change at low doses and dose rates is implemented using an empirical dose and dose rate effectiveness factor (DDREF). This imposes usually unacknowledged nonlinearity but not a threshold in the dose-response curve for cancer induction. In contrast, with the hormetic model, low doses of radiation reduce the cancer incidence while it is elevated after high doses. Based on a review of epidemiological and other data for exposure to low radiation doses and dose rates, it was found that the LNT model fails badly. Cancer risk after ordinarily encountered radiation exposure (medical X-rays, natural background radiation, etc.) is much lower than projections based on the LNT model and is often less than the risk for spontaneous cancer (a hormetic response). Understanding the mechanistic basis for hormetic responses will provide new insights about both risks and benefits from low-dose radiation exposure. PMID:20585444

  20. Web-based decision support system to predict risk level of long term rice production

    NASA Astrophysics Data System (ADS)

    Mukhlash, Imam; Maulidiyah, Ratna; Sutikno; Setiyono, Budi

    2017-09-01

    Appropriate decision making in risk management of rice production is very important in agricultural planning, especially for Indonesia, which is an agricultural country. A good decision can be reached if the required supporting data are available and appropriate methods are used. This study aims to develop a decision support system that can be used to predict the risk level of rice production in several districts that are centers of rice production in East Java. A web-based decision support system was constructed so that the information can be easily accessed and understood. The components of the system are data management, model management, and the user interface. This research uses OLS regression and Copula models: the OLS model predicts rainfall, while the Copula model predicts harvested area. Experimental results show that the models successfully predict the harvested area of rice production in these districts at any given time based on the conditions and climate of a region. Furthermore, the system can predict the amount of rice production together with its level of risk, generating long-term predictions of the production risk level for several districts that can be used as decision support for the authorities.
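    The OLS component can be sketched as a one-predictor least-squares fit (a generic sketch; the study's actual predictors, data, and Copula model are not reproduced):

    ```python
    def ols_fit(x, y):
        """Ordinary least squares for a single predictor: (intercept, slope)."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        slope = sxy / sxx
        return my - slope * mx, slope

    def predict(intercept, slope, x_new):
        """Point prediction from the fitted line."""
        return intercept + slope * x_new
    ```

    In the system described, such fitted equations feed the model-management component, with predictions then mapped to a risk level for the user interface.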

  1. Laboratory-based and office-based risk scores and charts to predict 10-year risk of cardiovascular disease in 182 countries: a pooled analysis of prospective cohorts and health surveys.

    PubMed

    Ueda, Peter; Woodward, Mark; Lu, Yuan; Hajifathalian, Kaveh; Al-Wotayan, Rihab; Aguilar-Salinas, Carlos A; Ahmadvand, Alireza; Azizi, Fereidoun; Bentham, James; Cifkova, Renata; Di Cesare, Mariachiara; Eriksen, Louise; Farzadfar, Farshad; Ferguson, Trevor S; Ikeda, Nayu; Khalili, Davood; Khang, Young-Ho; Lanska, Vera; León-Muñoz, Luz; Magliano, Dianna J; Margozzini, Paula; Msyamboza, Kelias P; Mutungi, Gerald; Oh, Kyungwon; Oum, Sophal; Rodríguez-Artalejo, Fernando; Rojas-Martinez, Rosalba; Valdivia, Gonzalo; Wilks, Rainford; Shaw, Jonathan E; Stevens, Gretchen A; Tolstrup, Janne S; Zhou, Bin; Salomon, Joshua A; Ezzati, Majid; Danaei, Goodarz

    2017-03-01

    Worldwide implementation of risk-based cardiovascular disease (CVD) prevention requires risk prediction tools that are contemporarily recalibrated for the target country and can be used where laboratory measurements are unavailable. We present two cardiovascular risk scores, with and without laboratory-based measurements, and the corresponding risk charts for 182 countries to predict 10-year risk of fatal and non-fatal CVD in adults aged 40-74 years. Based on our previous laboratory-based prediction model (Globorisk), we used data from eight prospective studies to estimate coefficients of the risk equations using proportional hazard regressions. The laboratory-based risk score included age, sex, smoking, blood pressure, diabetes, and total cholesterol; in the non-laboratory (office-based) risk score, we replaced diabetes and total cholesterol with BMI. We recalibrated risk scores for each sex and age group in each country using country-specific mean risk factor levels and CVD rates. We used recalibrated risk scores and data from national surveys (using data from adults aged 40-64 years) to estimate the proportion of the population at different levels of CVD risk for ten countries from different world regions as examples of the information the risk scores provide; we applied a risk threshold for high risk of at least 10% for high-income countries (HICs) and at least 20% for low-income and middle-income countries (LMICs) on the basis of national and international guidelines for CVD prevention. We estimated the proportion of men and women who were similarly categorised as high risk or low risk by the two risk scores. Predicted risks for the same risk factor profile were generally lower in HICs than in LMICs, with the highest risks in countries in central and southeast Asia and eastern Europe, including China and Russia. 
In HICs, the proportion of people aged 40-64 years at high risk of CVD ranged from 1% for South Korean women to 42% for Czech men (using a ≥10% risk threshold), and in low-income countries ranged from 2% in Uganda (men and women) to 13% in Iranian men (using a ≥20% risk threshold). More than 80% of adults were similarly classified as low or high risk by the laboratory-based and office-based risk scores. However, the office-based model substantially underestimated the risk among patients with diabetes. Our risk charts provide risk assessment tools that are recalibrated for each country and make the estimation of CVD risk possible without using laboratory-based measurements. National Institutes of Health. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Laboratory-based and office-based risk scores and charts to predict 10-year risk of cardiovascular disease in 182 countries: a pooled analysis of prospective cohorts and health surveys

    PubMed Central

    Ueda, Peter; Woodward, Mark; Lu, Yuan; Hajifathalian, Kaveh; Al-Wotayan, Rihab; Aguilar-Salinas, Carlos A; Ahmadvand, Alireza; Azizi, Fereidoun; Bentham, James; Cifkova, Renata; Di Cesare, Mariachiara; Eriksen, Louise; Farzadfar, Farshad; Ferguson, Trevor S; Ikeda, Nayu; Khalili, Davood; Khang, Young-Ho; Lanska, Vera; León-Muñoz, Luz; Magliano, Dianna J; Margozzini, Paula; Msyamboza, Kelias P; Mutungi, Gerald; Oh, Kyungwon; Oum, Sophal; Rodríguez-Artalejo, Fernando; Rojas-Martinez, Rosalba; Valdivia, Gonzalo; Wilks, Rainford; Shaw, Jonathan E; Stevens, Gretchen A; Tolstrup, Janne S; Zhou, Bin; Salomon, Joshua A; Ezzati, Majid; Danaei, Goodarz

    2017-01-01

    Summary Background Worldwide implementation of risk-based cardiovascular disease (CVD) prevention requires risk prediction tools that are contemporarily recalibrated for the target country and can be used where laboratory measurements are unavailable. We present two cardiovascular risk scores, with and without laboratory-based measurements, and the corresponding risk charts for 182 countries to predict 10-year risk of fatal and non-fatal CVD in adults aged 40–74 years. Methods Based on our previous laboratory-based prediction model (Globorisk), we used data from eight prospective studies to estimate coefficients of the risk equations using proportional hazard regressions. The laboratory-based risk score included age, sex, smoking, blood pressure, diabetes, and total cholesterol; in the non-laboratory (office-based) risk score, we replaced diabetes and total cholesterol with BMI. We recalibrated risk scores for each sex and age group in each country using country-specific mean risk factor levels and CVD rates. We used recalibrated risk scores and data from national surveys (using data from adults aged 40–64 years) to estimate the proportion of the population at different levels of CVD risk for ten countries from different world regions as examples of the information the risk scores provide; we applied a risk threshold for high risk of at least 10% for high-income countries (HICs) and at least 20% for low-income and middle-income countries (LMICs) on the basis of national and international guidelines for CVD prevention. We estimated the proportion of men and women who were similarly categorised as high risk or low risk by the two risk scores. Findings Predicted risks for the same risk factor profile were generally lower in HICs than in LMICs, with the highest risks in countries in central and southeast Asia and eastern Europe, including China and Russia. 
In HICs, the proportion of people aged 40–64 years at high risk of CVD ranged from 1% for South Korean women to 42% for Czech men (using a ≥10% risk threshold), and in low-income countries ranged from 2% in Uganda (men and women) to 13% in Iranian men (using a ≥20% risk threshold). More than 80% of adults were similarly classified as low or high risk by the laboratory-based and office-based risk scores. However, the office-based model substantially underestimated the risk among patients with diabetes. Interpretation Our risk charts provide risk assessment tools that are recalibrated for each country and make the estimation of CVD risk possible without using laboratory-based measurements. PMID:28126460

  3. How Statisticians Speak Risk

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Redus, K.S.

    2007-07-01

    The foundation of statistics deals with (a) how to measure and collect data and (b) how to identify models using estimates of statistical parameters derived from the data. Risk is a term used by the statistical community and those that employ statistics to express the results of a statistically based study. Statistical risk is represented as a probability that, for example, a statistical model is sufficient to describe a data set; but risk is also interpreted as a measure of worth of one alternative when compared to another. The common thread of any risk-based problem is the combination of (a) the chance an event will occur with (b) the value of the event. This paper presents an introduction to, and some examples of, statistical risk-based decision making from a quantitative, visual, and linguistic sense. This should help in understanding areas of radioactive waste management that can be suitably expressed using statistical risk and vice-versa. (authors)
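    The "chance combined with value" notion above can be sketched as a probability-weighted sum over an exhaustive set of outcomes (an illustrative helper, not code from the paper):

    ```python
    def expected_risk(outcomes):
        """Risk as probability-weighted value: sum of P(event) * value(event)
        over a set of mutually exclusive, exhaustive outcomes."""
        total_p = sum(p for p, _ in outcomes)
        if abs(total_p - 1.0) > 1e-9:
            raise ValueError("outcome probabilities must sum to 1")
        return sum(p * value for p, value in outcomes)
    ```

    Comparing two alternatives then reduces to comparing their expected-risk values, which is the decision-making sense of "risk" the abstract contrasts with the model-adequacy sense.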

  4. WRF-based fire risk modelling and evaluation for years 2010 and 2012 in Poland

    NASA Astrophysics Data System (ADS)

    Stec, Magdalena; Szymanowski, Mariusz; Kryza, Maciej

    2016-04-01

    Wildfires are one of the main disturbances of forested, seminatural and agricultural ecosystems. They generate significant economic loss, especially in forest management and agriculture. Forest fire risk modelling is therefore essential, e.g. for the forestry administration. In August 2015 a new method of forest fire risk forecasting entered into force in Poland. The method predicts a fire risk level on a 4-degree scale (0: no risk, 3: highest risk) and consists of a set of linearized regression equations. Meteorological information is used as predictors in the regression equations: air temperature, relative humidity, average wind speed, cloudiness and rainfall. The equations also include pine litter humidity as a measure of potential fuel characteristics. All these parameters are measured routinely in Poland at 42 basic and 94 auxiliary sites. The fire risk level is estimated for the current day (based on morning measurements) or the next day (based on midday measurements). The entire country is divided into 42 prognostic zones, and the fire risk level for each zone is taken from the closest measuring site. The first goal of this work is to assess whether the measurements needed for fire risk forecasting may be replaced by data from a mesoscale meteorological model. Additionally, the use of a meteorological model would make it possible to account for a much more realistic spatial differentiation of the weather elements determining the fire risk level, instead of relying on discrete point measurements. Meteorological data have been calculated using the Weather Research and Forecasting (WRF) model. For the purpose of this study the WRF model is run in reanalysis mode, allowing all required meteorological data to be estimated on a 5-kilometre grid. The only parameter that cannot be directly calculated using WRF is the litter humidity, which has been estimated using an empirical formula developed by Sakowska (2007). The experiments are carried out for two selected years: 2010 and 2012. 
The year 2010 was characterized by the smallest number of wildfires and the smallest burnt area, whereas 2012 had the largest number of fires and the largest burnt area. Data on the time, location, scale and causes of individual wildfires in these years are taken from the National Forest Fire Information System (KSIPL), administered by the Forest Fire Protection Department of the Polish Forest Research Institute. The database is part of the European Forest Fire Information System (EFFIS). Based on these data and on the WRF-based fire risk modelling, we pursue the second goal of the study: evaluating the forecasted fire risk against the occurrence of wildfires. Special attention is paid to the number, timing and spatial distribution of wildfires that occurred in cases of low predicted fire risk. The results reveal the effectiveness of the new forecasting method, and our investigation suggests that some adjustments could improve the efficiency of the fire-risk estimation method.

  5. Source preference and ambiguity aversion: models and evidence from behavioral and neuroimaging experiments.

    PubMed

    Chew, Soo Hong; Li, King King; Chark, Robin; Zhong, Songfa

    2008-01-01

    This experimental economics study using brain imaging techniques investigates the risk-ambiguity distinction in relation to the source preference hypothesis (Fox & Tversky, 1995) in which identically distributed risks arising from different sources of uncertainty may engender distinct preferences for the same decision maker, contrary to classical economic thinking. The use of brain imaging enables sharper testing of the implications of different models of decision-making including Chew and Sagi's (2008) axiomatization of source preference. Using fMRI, brain activations were observed when subjects make 48 sequential binary choices among even-chance lotteries based on whether the trailing digits of a number of stock prices at market closing would be odd or even. Subsequently, subjects rate familiarity of the stock symbols. When contrasting brain activation from more familiar sources with those from less familiar ones, regions appearing to be more active include the putamen, medial frontal cortex, and superior temporal gyrus. ROI analysis showed that the activation patterns in the familiar-unfamiliar and unfamiliar-familiar contrasts are similar to those in the risk-ambiguity and ambiguity-risk contrasts reported by Hsu et al. (2005). This supports the conjecture that the risk-ambiguity distinction can be subsumed by the source preference hypothesis. Our odd-even design has the advantage of inducing the same "unambiguous" probability of half for each subject in each binary comparison. Our finding supports the implications of the Chew-Sagi model and rejects models based on global probabilistic sophistication, including rank-dependent models derived from non-additive probabilities, e.g., Choquet expected utility and cumulative prospect theory, as well as those based on multiple priors, e.g., alpha-maxmin. The finding in Hsu et al. 
(2005) that orbitofrontal cortex lesion patients display neither ambiguity aversion nor risk aversion offers further support to the Chew-Sagi model. Our finding also supports the Levy et al. (2007) contention of a single valuation system encompassing risk and ambiguity aversion. This is the first neuroimaging study of the source preference hypothesis using a design which can discriminate among decision models ranging from risk-based ones to those relying on multiple priors.

  6. Mapping environmental susceptibility to Saint Louis encephalitis virus, based on a decision tree model of remotely-sensed data.

    PubMed

    Rotela, Camilo H; Spinsanti, Lorena I; Lamfri, Mario A; Contigiani, Marta S; Almirón, Walter R; Scavuzzo, Carlos M

    2011-11-01

    In response to the first human outbreak (January-May 2005) of Saint Louis encephalitis (SLE) virus in Córdoba province, Argentina, we developed an environmental SLE virus risk map for the capital, i.e. Córdoba city. The aim was to provide a map capable of detecting macro-environmental factors associated with the spatial distribution of SLE cases, based on remotely sensed data and a geographical information system. Vegetation, soil brightness, humidity status, distances to water bodies and areas covered by vegetation were assessed based on pre-outbreak images provided by the Landsat 5TM satellite. A strong inverse relationship between the number of humans infected by SLE virus and distance to high-vigor vegetation was noted. A statistical non-hierarchic decision tree model was constructed, based on environmental variables representing the areas surrounding patient residences. Based on this model, 18% of the city could be classified as being at high risk for SLE virus infection, while 34% carried a low risk, or none at all. Taking the whole 2005 epidemic into account, 80% of the cases came from areas classified by the model as medium-high or high risk. Almost 46% of the cases were registered in high-risk areas, while there were no cases (0%) in areas classified as risk free.

  7. Which risk models perform best in selecting ever-smokers for lung cancer screening?

    Cancer.gov

    A new analysis by scientists at NCI evaluates nine different individualized lung cancer risk prediction models based on their selections of ever-smokers for computed tomography (CT) lung cancer screening.

  8. Theory-Based Cartographic Risk Model Development and Application for Home Fire Safety.

    PubMed

    Furmanek, Stephen; Lehna, Carlee; Hanchette, Carol

    There is a gap in the use of predictive risk models to identify areas at risk for home fires and burn injury. The purpose of this study was to describe the creation, validation, and application of such a model, using a sample from an intervention study with parents of newborns in Jefferson County, KY, as an example. A literature search was performed to identify risk factors for home fires and burn injury in the target population. Risk factor data were obtained from the American Community Survey at the census tract level and synthesized to create a predictive cartographic risk model. Model validation was performed through correlation, regression, and Moran's I with fire incidence data from open records. Independent-samples t-tests were used to examine the model in relation to geocoded participant addresses. Participant risk level for fire rate was determined, along with proximity to fire station service areas and hospitals. The model showed high and severe risk clustering in the northwest section of the county. Modeled risk was strongly correlated with fire rate; the best predictive model for fire risk contained home value (low), race (black), and non-high school graduates. Applying the model to the intervention sample, the majority of participants were at lower risk and mostly within service areas closest to a fire department and hospital. Cartographic risk models were useful in identifying areas at risk and analyzing participant risk level. The methods outlined in this study are generalizable to other public health issues.
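    The Moran's I used here for validation measures whether tract-level rates cluster spatially: values near +1 indicate neighbouring tracts with similar rates, values near 0 spatial randomness, and negative values dispersion. A minimal pure-Python sketch with binary contiguity weights and invented tract values (not the study's data):

    ```python
    def morans_i(values, neighbours):
        """Global Moran's I for areal data with binary contiguity weights.
        values: list of tract-level rates; neighbours: dict index -> adjacent indices."""
        n = len(values)
        mean = sum(values) / n
        dev = [v - mean for v in values]
        # Numerator: cross-products of deviations over all neighbour pairs.
        num = sum(dev[i] * dev[j] for i in neighbours for j in neighbours[i])
        w_sum = sum(len(neighbours[i]) for i in neighbours)   # total weight
        den = sum(d * d for d in dev)
        return (n / w_sum) * (num / den)

    # Toy 4-tract chain 0-1-2-3 with clustered high rates on one end.
    rates = [10.0, 9.0, 1.0, 2.0]
    adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    print(round(morans_i(rates, adj), 3))   # positive -> spatial clustering
    ```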

  9. Multidimensional family therapy HIV/STD risk-reduction intervention: an integrative family-based model for drug-involved juvenile offenders.

    PubMed

    Marvel, Francoise; Rowe, Cynthia L; Colon-Perez, Lissette; DiClemente, Ralph J; Liddle, Howard A

    2009-03-01

    Drug and juvenile justice involved youths show remarkably high rates of human immunodeficiency virus (HIV)/sexually transmitted disease (STD) risk behaviors. However, existing interventions aimed at reducing adolescent HIV risk behavior have rarely targeted these vulnerable young adolescents, and many approaches focus on individual-level change without attention to family or contextual influences. We describe a new, family-based HIV/STD prevention model that embeds HIV/STD-focused multifamily groups within an adolescent drug abuse and delinquency evidence-based treatment, Multidimensional Family Therapy (MDFT). The approach has been evaluated in a multisite randomized clinical trial with juvenile justice involved youths in the National Institute on Drug Abuse Criminal Justice Drug Abuse Treatment Studies (www.cjdats.org). Preliminary baseline to 6-month outcomes are promising. We describe research on family risk and protective factors for adolescent problem behaviors, and offer a rationale for family-based approaches to reduce HIV/STD risk in this population. We describe the development and implementation of the Multidimensional Family Therapy HIV/STD risk-reduction intervention (MDFT-HIV/STD) in terms of the use of multifamily groups and their integration into standard MDFT, and offer a clinical vignette. The potential significance of this empirically based intervention development work is high; MDFT-HIV/STD is the first model to address the largely unmet HIV/STD prevention and sexual health needs of substance-abusing juvenile offenders within the context of a family-oriented evidence-based intervention.

  10. The Development of Statistical Models for Predicting Surgical Site Infections in Japan: Toward a Statistical Model-Based Standardized Infection Ratio.

    PubMed

    Fukuda, Haruhisa; Kuroki, Manabu

    2016-03-01

    To develop and internally validate a surgical site infection (SSI) prediction model for Japan. Retrospective observational cohort study. We analyzed surveillance data submitted to the Japan Nosocomial Infections Surveillance system for patients who had undergone target surgical procedures from January 1, 2010, through December 31, 2012. Logistic regression analyses were used to develop statistical models for predicting SSIs. An SSI prediction model was constructed for each of the procedure categories by statistically selecting the appropriate risk factors from among the collected surveillance data and determining their optimal categorization. Standard bootstrapping techniques were applied to assess potential overfitting. The C-index was used to compare the predictive performances of the new statistical models with those of models based on conventional risk index variables. The study sample comprised 349,987 cases from 428 participant hospitals throughout Japan, and the overall SSI incidence was 7.0%. The C-indices of the new statistical models were significantly higher than those of the conventional risk index models in 21 (67.7%) of the 31 procedure categories (P<.05). No significant overfitting was detected. Japan-specific SSI prediction models were shown to generally have higher accuracy than conventional risk index models. These new models may have applications in assessing hospital performance and identifying high-risk patients in specific procedure categories.
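    For a binary outcome such as SSI occurrence, the C-index used above to compare models is the probability that a randomly chosen infected case receives a higher predicted risk than a randomly chosen uninfected one (ties counting one half). A minimal pure-Python sketch with invented predictions, not the surveillance data:

    ```python
    def c_index(risks, outcomes):
        """Concordance index: P(case risk > control risk); ties count as 1/2."""
        concordant = 0.0
        n_pairs = 0
        for r1, y1 in zip(risks, outcomes):
            if y1 != 1:
                continue
            for r0, y0 in zip(risks, outcomes):
                if y0 != 0:
                    continue
                n_pairs += 1
                if r1 > r0:
                    concordant += 1.0
                elif r1 == r0:
                    concordant += 0.5
        return concordant / n_pairs

    # Toy comparison: a "new model" vs a cruder "risk index" on the same outcomes.
    outcomes  = [1, 0, 0, 1, 0, 0, 1, 0]
    new_model = [0.9, 0.2, 0.3, 0.8, 0.1, 0.4, 0.7, 0.2]
    old_index = [0.5, 0.5, 0.3, 0.5, 0.3, 0.5, 0.3, 0.3]
    print(c_index(new_model, outcomes))   # 1.0 -- ranks every case above every control
    print(c_index(old_index, outcomes))   # lower discrimination
    ```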

  11. The Integrated Medical Model - A Risk Assessment and Decision Support Tool for Human Space Flight Missions

    NASA Technical Reports Server (NTRS)

    Kerstman, Eric; Minard, Charles G.; Saile, Lynn; FreiredeCarvalho, Mary; Myers, Jerry; Walton, Marlei; Butler, Douglas; Lopez, Vilma

    2010-01-01

    The Integrated Medical Model (IMM) is a decision support tool that is useful to space flight mission planners and medical system designers in assessing risks and optimizing medical systems. The IMM employs an evidence-based, probabilistic risk assessment (PRA) approach within the operational constraints of space flight.

  12. Security risk assessment: applying the concepts of fuzzy logic.

    PubMed

    Bajpai, Shailendra; Sachdeva, Anish; Gupta, J P

    2010-01-15

    Chemical process industries (CPI) handling hazardous chemicals in bulk can be attractive targets for deliberate adversarial actions by terrorists, criminals and disgruntled employees. It is therefore imperative to have a comprehensive security risk management programme, including effective security risk assessment techniques. In an earlier work, it has been shown that security risk assessment can be done by conducting threat and vulnerability analysis or by developing a Security Risk Factor Table (SRFT). Scenario-based, HAZOP-type vulnerability assessment sheets can also be developed. In the SRFT model, important security risk bearing factors such as location, ownership, visibility, inventory, etc., have been used. In this paper, the earlier developed SRFT model has been modified using the concepts of fuzzy logic. In the modified SRFT model, two linguistic fuzzy scales (three-point and four-point) are devised based on trapezoidal fuzzy numbers. The human subjectivity of the different experts associated with the previous SRFT model is tackled by mapping their scores to the newly devised fuzzy scales. Finally, the fuzzy score thus obtained is defuzzified to get the results. A test case of a refinery is used to explain the method and compare it with the earlier work.
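    The defuzzification step can be illustrated with centroid defuzzification of trapezoidal fuzzy numbers, a standard choice for this kind of linguistic scale. The scale values and expert ratings below are hypothetical, not those of the paper:

    ```python
    def trapezoid_centroid(a, b, c, d):
        """Centroid defuzzification of a trapezoidal fuzzy number (a <= b <= c <= d)."""
        if (d + c) == (a + b):          # degenerate case: a crisp singleton
            return (a + d) / 2.0
        num = (d * d + c * c + d * c) - (a * a + b * b + a * b)
        return num / (3.0 * ((d + c) - (a + b)))

    # Hypothetical four-point linguistic scale for one SRFT factor (e.g. visibility).
    scale = {
        "low":       (0.0, 0.0, 0.2, 0.4),
        "moderate":  (0.2, 0.4, 0.5, 0.7),
        "high":      (0.5, 0.7, 0.8, 1.0),
        "very high": (0.8, 0.9, 1.0, 1.0),
    }

    # Three experts rate the same factor; average the trapezoids, then defuzzify.
    ratings = ["moderate", "high", "moderate"]
    avg = tuple(sum(scale[r][i] for r in ratings) / len(ratings) for i in range(4))
    print(round(trapezoid_centroid(*avg), 3))   # 0.55
    ```

    Averaging the trapezoids componentwise before defuzzifying is one simple way to pool scores from several experts into a single crisp risk value.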

  13. Risk-Based Approach for Microbiological Food Safety Management in the Dairy Industry: The Case of Listeria monocytogenes in Soft Cheese Made from Pasteurized Milk.

    PubMed

    Tenenhaus-Aziza, Fanny; Daudin, Jean-Jacques; Maffre, Alexandre; Sanaa, Moez

    2014-01-01

    According to Codex Alimentarius Commission recommendations, management options applied at the process production level should be based on good hygiene practices, the HACCP system, and new risk management metrics such as the food safety objective. To follow this last recommendation, quantitative microbiological risk assessment is an appealing approach for linking new risk-based metrics to management options that may be applied by food operators. Through a specific case study, Listeria monocytogenes in soft cheese made from pasteurized milk, the objective of the present article is to show in practical terms how quantitative risk assessment could be used to direct potential intervention strategies at different food processing steps. Based on many assumptions, the model developed estimates the risk of listeriosis at the moment of consumption, taking into account the entire manufacturing process and potential sources of contamination. From pasteurization to consumption, the amplification of an initial contamination event of the milk, the fresh cheese or the process environment is simulated over time, space, and between products, accounting for the impact of management options such as hygienic operations and sampling plans. A sensitivity analysis of the model helps prioritize the data to be collected for the improvement and validation of the model. What-if scenarios were simulated and allowed for the identification of major parameters contributing to the risk of listeriosis and the optimization of preventive and corrective measures. © 2013 Society for Risk Analysis.

  14. A Spatial Framework to Map Heat Health Risks at Multiple Scales.

    PubMed

    Ho, Hung Chak; Knudby, Anders; Huang, Wei

    2015-12-18

    In the last few decades extreme heat events have led to substantial excess mortality, most dramatically in Central Europe in 2003, in Russia in 2010, and even in typically cool locations such as Vancouver, Canada, in 2009. Heat-related morbidity and mortality are expected to increase over the coming centuries as the result of climate-driven global increases in the severity and frequency of extreme heat events. Spatial information on heat exposure and population vulnerability may be combined to map the areas of highest risk and focus mitigation efforts there. However, a mismatch in spatial resolution between heat exposure and vulnerability data can cause spatial scale issues such as the Modifiable Areal Unit Problem (MAUP). We used a raster-based model to integrate heat exposure and vulnerability data in a multi-criteria decision analysis, and compared it to the traditional vector-based model. We then used the Getis-Ord G(i) index to generate spatially smoothed heat risk hotspot maps from fine to coarse spatial scales. The raster-based model allowed production of maps at finer spatial resolution, better description of local-scale heat risk variability, and identification of heat-risk areas not identified with the vector-based approach. Spatial smoothing with the Getis-Ord G(i) index produced heat risk hotspots from local to regional spatial scales. The approach provides a framework for reducing spatial scale issues in future heat risk mapping, and for identifying heat risk hotspots at spatial scales ranging from the block level to the municipality level.
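    The Getis-Ord statistic used for hotspot smoothing compares the sum of values in a local neighbourhood against its expectation under the global mean, in units of the global standard deviation. A simplified pure-Python sketch on a toy raster, using a queen neighbourhood that includes the cell itself (the Gi* variant); the values are invented:

    ```python
    import math

    def getis_ord_gi_star(grid, i, j):
        """Gi* for cell (i, j) of a 2-D risk surface, binary weights over a
        queen neighbourhood including the cell itself. Large positive values
        flag local clusters of high risk (hotspots)."""
        vals = [v for row in grid for v in row]
        n = len(vals)
        mean = sum(vals) / n
        s = math.sqrt(sum(v * v for v in vals) / n - mean * mean)
        neigh = [grid[a][b]
                 for a in range(max(0, i - 1), min(len(grid), i + 2))
                 for b in range(max(0, j - 1), min(len(grid[0]), j + 2))]
        w = len(neigh)                       # each neighbour gets weight 1
        num = sum(neigh) - mean * w
        den = s * math.sqrt((n * w - w * w) / (n - 1))
        return num / den

    # Toy 5x5 heat-risk surface with a high-risk pocket in the top-left corner.
    risk = [[9, 8, 1, 1, 1],
            [8, 9, 1, 1, 1],
            [1, 1, 1, 1, 1],
            [1, 1, 1, 1, 1],
            [1, 1, 1, 1, 1]]
    print(getis_ord_gi_star(risk, 0, 0) > getis_ord_gi_star(risk, 4, 4))  # True
    ```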

  15. Development of the Methodology Needed to Quantify Risks to Groundwater at CO2 Storage Sites

    NASA Astrophysics Data System (ADS)

    Brown, C. F.; Birkholzer, J. T.; Carroll, S.; Hakala, A.; Keating, E. H.; Lopano, C. L.; Newell, D. L.; Spycher, N.

    2011-12-01

    The National Risk Assessment Partnership (NRAP) is an effort that harnesses capabilities across five U.S. Department of Energy (DOE) national laboratories into a mission-focused platform to develop a defensible, science-based quantitative methodology for determining risk profiles at CO2 storage sites. NRAP is conducting risk and uncertainty analysis in the areas of reservoir performance, natural leakage pathways, wellbore integrity, groundwater protection, monitoring, and systems level modeling. The mission of NRAP is "to provide the scientific underpinning for risk assessment with respect to the long-term storage of CO2, including assessment of residual risk associated with a site post-closure." Additionally, NRAP will develop a strategic, risk-based monitoring protocol, such that monitoring at all stages of a project effectively minimizes uncertainty in the predicted behavior of the site, thereby increasing confidence in storage integrity. NRAP's research focus in the area of groundwater protection is divided into three main tasks: 1) development of quantitative risk profiles for potential groundwater impacts; 2) filling key science gaps in developing those risk profiles; and 3) field-based confirmation. Within these three tasks, researchers are engaged in collaborative studies to determine metrics to identify system perturbation and their associated risk factors. Reservoir simulations are being performed to understand/predict consequences of hypothetical leakage scenarios, from which reduced order models are being developed to feed risk profile development. Both laboratory-based experiments and reactive transport modeling studies provide estimates of geochemical impacts over a broad range of leakage scenarios. This presentation will provide an overview of the research objectives within NRAP's groundwater protection focus area, as well as select accomplishments achieved to date.

  16. Intelligent ship traffic monitoring for oil spill prevention: risk based decision support building on AIS.

    PubMed

    Eide, Magnus S; Endresen, Oyvind; Brett, Per Olaf; Ervik, Jon Leon; Røang, Kjell

    2007-02-01

    The paper describes a model which estimates the risk levels of individual crude oil tankers. The intended use of the model, which is ready for trial implementation at the Norwegian Coastal Administration's new Vardø VTS (Vessel Traffic Service) centre, is to facilitate the comparison of ships and to support a risk-based decision on which ships to focus attention on. For a VTS operator tasked with monitoring hundreds of ships, this is a valuable decision support tool. The model answers the question, "Which ships are likely to produce an oil spill accident, and how much is it likely to spill?"

  17. Breast cancer screening in an era of personalized regimens: a conceptual model and National Cancer Institute initiative for risk-based and preference-based approaches at a population level.

    PubMed

    Onega, Tracy; Beaber, Elisabeth F; Sprague, Brian L; Barlow, William E; Haas, Jennifer S; Tosteson, Anna N A; Schnall, Mitchell D; Armstrong, Katrina; Schapira, Marilyn M; Geller, Berta; Weaver, Donald L; Conant, Emily F

    2014-10-01

    Breast cancer screening holds a prominent place in public health, health care delivery, policy, and women's health care decisions. Several factors are driving shifts in how population-based breast cancer screening is approached, including advanced imaging technologies, health system performance measures, health care reform, concern for "overdiagnosis," and improved understanding of risk. Maximizing benefits while minimizing the harms of screening requires moving from a "1-size-fits-all" guideline paradigm to more personalized strategies. A refined conceptual model for breast cancer screening is needed to align women's risks and preferences with screening regimens. A conceptual model of personalized breast cancer screening is presented herein that emphasizes key domains and transitions throughout the screening process, as well as multilevel perspectives. The key domains of screening awareness, detection, diagnosis, and treatment and survivorship are conceptualized to function at the level of the patient, provider, facility, health care system, and population/policy arena. Personalized breast cancer screening can be assessed across these domains with both process and outcome measures. Identifying, evaluating, and monitoring process measures in screening is a focus of a National Cancer Institute initiative entitled PROSPR (Population-based Research Optimizing Screening through Personalized Regimens), which will provide generalizable evidence for a risk-based model of breast cancer screening. The model presented builds on prior breast cancer screening models and may serve to identify new measures to optimize benefits-to-harms tradeoffs in population-based screening, which is a timely goal in the era of health care reform. © 2014 American Cancer Society.

  18. Finding Groups Using Model-based Cluster Analysis: Heterogeneous Emotional Self-regulatory Processes and Heavy Alcohol Use Risk

    PubMed Central

    Mun, Eun-Young; von Eye, Alexander; Bates, Marsha E.; Vaschillo, Evgeny G.

    2010-01-01

    Model-based cluster analysis is a new clustering procedure for investigating population heterogeneity using finite mixtures of multivariate normal densities. It is an inferentially based, statistically principled procedure that uses the Bayesian Information Criterion (BIC) to compare multiple, possibly non-nested models and identify the optimum number of clusters. The current study clustered 36 young men and women based on their baseline heart rate (HR) and HR variability (HRV), chronic alcohol use, and reasons for drinking. Two cluster groups were identified and labeled the High Alcohol Risk and Normative groups. Compared to the Normative group, individuals in the High Alcohol Risk group had higher levels of alcohol use and more strongly endorsed disinhibition and suppression reasons for use. The High Alcohol Risk group showed significant HRV changes in response to positive and negative emotional and appetitive picture cues, compared to neutral cues. In contrast, the Normative group showed a significant HRV change only to negative cues. Findings suggest that individuals with autonomic self-regulatory difficulties may be more susceptible to heavy alcohol use and may use alcohol for emotional regulation. PMID:18331138
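    The BIC comparison at the heart of model-based clustering trades goodness of fit against model complexity: in the lower-is-better convention, BIC = k·ln(n) − 2·ln(L) for a model with k free parameters, sample size n, and maximized likelihood L. A self-contained illustration on synthetic 1-D data; the two-cluster fit uses a crude hard assignment at a fixed cut-point as a stand-in for full EM, so the numbers are illustrative only:

    ```python
    import math
    import random

    def gauss_loglik(xs, mu, sd):
        """Log-likelihood of xs under a normal density N(mu, sd^2)."""
        return sum(-0.5 * math.log(2 * math.pi * sd * sd)
                   - (x - mu) ** 2 / (2 * sd * sd) for x in xs)

    def fit(xs):
        """Maximum-likelihood mean and standard deviation."""
        mu = sum(xs) / len(xs)
        sd = math.sqrt(sum((x - mu) ** 2 for x in xs) / len(xs))
        return mu, sd

    def bic(loglik, k_params, n):
        return k_params * math.log(n) - 2 * loglik   # lower is better

    random.seed(1)
    # Two well-separated groups (think: Normative vs High Alcohol Risk profiles).
    data = [random.gauss(0, 1) for _ in range(100)] + \
           [random.gauss(6, 1) for _ in range(100)]
    n = len(data)

    # One-cluster model: a single Gaussian (2 parameters: mean, SD).
    mu, sd = fit(data)
    bic1 = bic(gauss_loglik(data, mu, sd), 2, n)

    # Two-cluster model via hard assignment at a midpoint cut (5 parameters:
    # two means, two SDs, one mixing weight) -- a rough stand-in for full EM.
    cut = 3.0
    lo, hi = [x for x in data if x < cut], [x for x in data if x >= cut]
    w = len(lo) / n
    ll2 = (gauss_loglik(lo, *fit(lo)) + gauss_loglik(hi, *fit(hi))
           + len(lo) * math.log(w) + len(hi) * math.log(1 - w))
    bic2 = bic(ll2, 5, n)
    print(bic2 < bic1)   # True: BIC prefers the two-cluster description
    ```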

  19. Use of an ecologically relevant modelling approach to improve remote sensing-based schistosomiasis risk profiling.

    PubMed

    Walz, Yvonne; Wegmann, Martin; Leutner, Benjamin; Dech, Stefan; Vounatsou, Penelope; N'Goran, Eliézer K; Raso, Giovanna; Utzinger, Jürg

    2015-11-30

    Schistosomiasis is a widespread water-based disease that puts close to 800 million people at risk of infection, with more than 250 million infected, mainly in sub-Saharan Africa. Transmission is governed by the spatial distribution of specific freshwater snails that act as intermediate hosts and by the frequency, duration and extent of human contact with infested water sources. Remote sensing data have been utilized for spatially explicit risk profiling of schistosomiasis. However, risk profiling based on remote sensing data has a conceptual drawback when school-based disease prevalence data are related directly to remote sensing measurements extracted at the location of the school, because disease transmission usually does not occur at the school itself. We therefore took the local environment around the schools into account by explicitly linking ecologically relevant environmental information on potential disease transmission sites to survey measurements of disease prevalence. Our models were validated at two sites with different landscapes in Côte d'Ivoire using high- and moderate-resolution remote sensing data, based on random forest and partial least squares regression. We found that the ecologically relevant modelling approach explained up to 70% of the variation in Schistosoma infection prevalence and performed better than a purely pixel-based modelling approach. Furthermore, our study showed that model performance increased as a function of enlarging the school catchment area, confirming the hypothesis that suitable environments for schistosomiasis transmission rarely occur at the location of survey measurements.

  20. 12 CFR Appendix B to Part 3 - Risk-Based Capital Guidelines; Market Risk Adjustment

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... management systems at least annually. (c) Market risk factors. The bank's internal model must use risk factors sufficient to measure the market risk inherent in all covered positions. The risk factors must... risk weighting factor indicated in Table 2 of this appendix. The specific risk capital charge component...

  1. A Contemporary Prostate Biopsy Risk Calculator Based on Multiple Heterogeneous Cohorts.

    PubMed

    Ankerst, Donna P; Straubinger, Johanna; Selig, Katharina; Guerrios, Lourdes; De Hoedt, Amanda; Hernandez, Javier; Liss, Michael A; Leach, Robin J; Freedland, Stephen J; Kattan, Michael W; Nam, Robert; Haese, Alexander; Montorsi, Francesco; Boorjian, Stephen A; Cooperberg, Matthew R; Poyet, Cedric; Vertosick, Emily; Vickers, Andrew J

    2018-05-16

    Prostate cancer prediction tools provide quantitative guidance for doctor-patient decision-making regarding biopsy. The widely used online Prostate Cancer Prevention Trial Risk Calculator (PCPTRC) utilized data from the 1990s based on six-core biopsies and outdated grading systems. We prospectively gathered data from men undergoing prostate biopsy in multiple diverse North American and European institutions participating in the Prostate Biopsy Collaborative Group (PBCG) in order to build a state-of-the-art risk prediction tool. We obtained data from 15 611 men undergoing 16 369 prostate biopsies during 2006-2017 at eight North American institutions for model-building and three European institutions for validation. We used multinomial logistic regression to estimate the risks of high-grade prostate cancer (Gleason score ≥7) on biopsy based on clinical characteristics, including age, prostate-specific antigen, digital rectal exam, African ancestry, first-degree family history, and prior negative biopsy. We compared the PBCG model to the PCPTRC using internal cross-validation and external validation on the European cohorts. Cross-validation on the North American cohorts (5992 biopsies) yielded the PBCG model area under the receiver operating characteristic curve (AUC) as 75.5% (95% confidence interval: 74.2-76.8), a small improvement over the AUC of 72.3% (70.9-73.7) for the PCPTRC (p<0.0001). However, calibration and clinical net benefit were far superior for the PBCG model. Using a risk threshold of 10%, clinical use of the PBCG model would lead to the equivalent of 25 fewer biopsies per 1000 patients without missing any high-grade cancers. Results were similar on external validation on 10 377 European biopsies. The PBCG model should be used in place of the PCPTRC for prediction of prostate biopsy outcome. A contemporary risk tool for outcomes on prostate biopsy based on the routine clinical risk factors is now available for informed decision-making. 
Copyright © 2018 European Association of Urology. Published by Elsevier B.V. All rights reserved.
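    The "fewer biopsies per 1000 patients" figure reported for the PBCG model comes from decision-curve analysis: at threshold probability p_t, net benefit = TP/n − (FP/n)·p_t/(1 − p_t), and the net-benefit difference between two strategies divided by p_t/(1 − p_t) converts to interventions avoided at equal cancer detection. A sketch on an invented toy cohort, not PBCG data:

    ```python
    def net_benefit(risks, outcomes, threshold):
        """Decision-curve net benefit of biopsying patients at or above `threshold`."""
        n = len(risks)
        tp = sum(1 for r, y in zip(risks, outcomes) if r >= threshold and y == 1)
        fp = sum(1 for r, y in zip(risks, outcomes) if r >= threshold and y == 0)
        return tp / n - (fp / n) * threshold / (1 - threshold)

    # Invented toy cohort: model-predicted risks of high-grade cancer and outcomes.
    risks    = [0.02, 0.05, 0.08, 0.12, 0.20, 0.35, 0.60, 0.80]
    outcomes = [0, 0, 0, 0, 1, 0, 1, 1]

    p_t = 0.10
    odds = p_t / (1 - p_t)
    prevalence = sum(outcomes) / len(outcomes)
    nb_model = net_benefit(risks, outcomes, p_t)
    nb_all = prevalence - (1 - prevalence) * odds    # "biopsy everyone" strategy
    # Net reduction in biopsies per 1000 patients at equal cancer detection:
    print(round((nb_model - nb_all) / odds * 1000))  # 375
    ```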

  2. Cognitive mapping tools: review and risk management needs.

    PubMed

    Wood, Matthew D; Bostrom, Ann; Bridges, Todd; Linkov, Igor

    2012-08-01

    Risk managers are increasingly interested in incorporating stakeholder beliefs and other human factors into the planning process. Effective risk assessment and management requires understanding perceptions and beliefs of involved stakeholders, and how these beliefs give rise to actions that influence risk management decisions. Formal analyses of risk manager and stakeholder cognitions represent an important first step. Techniques for diagramming stakeholder mental models provide one tool for risk managers to better understand stakeholder beliefs and perceptions concerning risk, and to leverage this new understanding in developing risk management strategies. This article reviews three methodologies for assessing and diagramming stakeholder mental models--decision-analysis-based mental modeling, concept mapping, and semantic web analysis--and assesses them with regard to their ability to address risk manager needs. © 2012 Society for Risk Analysis.

  3. Efficient identification and referral of low-income women at high risk for hereditary breast cancer: a practice-based approach.

    PubMed

    Joseph, G; Kaplan, C; Luce, J; Lee, R; Stewart, S; Guerra, C; Pasick, R

    2012-01-01

    Identification of low-income women with the rare but serious risk of hereditary cancer and their referral to appropriate services presents an important public health challenge. We report the results of formative research to reach thousands of women for efficient identification of those at high risk and expedient access to free genetic services. External validity is maximized by emphasizing intervention fit with the two end-user organizations who must connect to make this possible. This study phase informed the design of a subsequent randomized controlled trial. We conducted a randomized controlled pilot study (n = 38) to compare two intervention models for feasibility and impact. The main outcome was receipt of genetic counseling during a two-month intervention period. Model 1 was based on the usual outcall protocol of an academic hospital genetic risk program, and Model 2 drew on the screening and referral procedures of a statewide toll-free phone line through which large numbers of high-risk women can be identified. In Model 1, the risk program proactively calls patients to schedule genetic counseling; for Model 2, women are notified of their eligibility for counseling and make the call themselves. We also developed and pretested a family history screener for administration by phone to identify women appropriate for genetic counseling. There was no statistically significant difference in receipt of genetic counseling between women randomized to Model 1 (3/18) compared with Model 2 (3/20) during the intervention period. However, when unresponsive women in Model 2 were called after 2 months, 7 more obtained counseling; 4 women from Model 1 were also counseled after the intervention. Thus, the intervention model that closely aligned with the risk program's outcall to high-risk women was found to be feasible and brought more low-income women to free genetic counseling. 
Our screener was easy to administer by phone and appeared to identify high-risk callers effectively. The model and screener are now in use in the main trial to test the effectiveness of this screening and referral intervention. A validation analysis of the screener is also underway. Identification of intervention strategies and tools, and their systematic comparison for impact and efficiency in the context where they will ultimately be used are critical elements of practice-based research. Copyright © 2012 S. Karger AG, Basel.

  4. Evaluation of a Stratified National Breast Screening Program in the United Kingdom: An Early Model-Based Cost-Effectiveness Analysis.

    PubMed

    Gray, Ewan; Donten, Anna; Karssemeijer, Nico; van Gils, Carla; Evans, D Gareth; Astley, Sue; Payne, Katherine

    2017-09-01

    To identify the incremental costs and consequences of stratified national breast screening programs (stratified NBSPs) and drivers of relative cost-effectiveness. A decision-analytic model (discrete event simulation) was conceptualized to represent four stratified NBSPs (risk 1, risk 2, masking [supplemental screening for women with higher breast density], and masking and risk 1) compared with the current UK NBSP and no screening. The model assumed a lifetime horizon, the health service perspective to identify costs (£, 2015), and measured consequences in quality-adjusted life-years (QALYs). Multiple data sources were used: systematic reviews of effectiveness and utility, published studies reporting costs, and cohort studies embedded in existing NBSPs. Model parameter uncertainty was assessed using probabilistic sensitivity analysis and one-way sensitivity analysis. The base-case analysis, supported by probabilistic sensitivity analysis, suggested that the risk-stratified NBSPs (risk 1 and risk 2) were relatively cost-effective when compared with the current UK NBSP, with incremental cost-effectiveness ratios of £16,689 per QALY and £23,924 per QALY, respectively. Stratified NBSPs including masking approaches (supplemental screening for women with higher breast density) were not cost-effective alternatives, with incremental cost-effectiveness ratios of £212,947 per QALY (masking) and £75,254 per QALY (risk 1 and masking). When compared with no screening, all stratified NBSPs could be considered cost-effective. Key drivers of cost-effectiveness were the discount rate, natural history model parameters, mammographic sensitivity, and biopsy rates for recalled cases. A key assumption was that the risk model used in the stratification process was perfectly calibrated to the population. This early model-based cost-effectiveness analysis provides indicative evidence for decision makers to understand the key drivers of costs and QALYs for exemplar stratified NBSPs. 
Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
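    An incremental cost-effectiveness ratio of the kind reported above is simply the extra cost of one strategy over its comparator divided by the extra QALYs it delivers. A sketch with hypothetical per-woman figures, illustrative only and not the study's inputs:

    ```python
    def icer(cost_new, qaly_new, cost_old, qaly_old):
        """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
        return (cost_new - cost_old) / (qaly_new - qaly_old)

    # Hypothetical per-woman lifetime costs (GBP) and QALYs for two strategies.
    current = {"cost": 1200.0, "qaly": 21.000}   # current NBSP (made-up numbers)
    risk1   = {"cost": 1350.0, "qaly": 21.009}   # risk-stratified variant

    ratio = icer(risk1["cost"], risk1["qaly"], current["cost"], current["qaly"])
    print(round(ratio))   # 16667 GBP per QALY gained
    ```

    A decision maker would then compare this ratio against a willingness-to-pay threshold; whether it counts as "cost-effective" depends entirely on that threshold, as the base-case results above illustrate.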

  5. The Relative Importance of the Vadose Zone in Multimedia Risk Assessment Modeling Applied at a National Scale: An Analysis of Benzene Using 3MRA

    NASA Astrophysics Data System (ADS)

    Babendreier, J. E.

    2002-05-01

    Evaluating uncertainty and parameter sensitivity in environmental models can be a difficult task, even for low-order, single-media constructs driven by a unique set of site-specific data. The challenge of examining ever more complex, integrated, higher-order models is a formidable one, particularly in regulatory settings applied on a national scale. Quantitative assessment of uncertainty and sensitivity within integrated, multimedia models that simulate hundreds of sites, spanning multiple geographical and ecological regions, will ultimately require a systematic, comparative approach coupled with sufficient computational power. The Multimedia, Multipathway, and Multireceptor Risk Assessment Model (3MRA) is an important code being developed by the United States Environmental Protection Agency for use in site-scale risk assessment (e.g. hazardous waste management facilities). The model currently entails over 700 variables, 185 of which are explicitly stochastic. The 3MRA can start with a chemical concentration in a waste management unit (WMU). It estimates the release and transport of the chemical throughout the environment, and predicts associated exposure and risk. The 3MRA simulates multimedia (air, water, soil, sediments), pollutant fate and transport, multipathway exposure routes (food ingestion, water ingestion, soil ingestion, air inhalation, etc.), multireceptor exposures (resident, gardener, farmer, fisher, ecological habitats and populations), and resulting risk (human cancer and non-cancer effects, ecological population and community effects). The 3MRA collates the output for an overall national risk assessment, offering a probabilistic strategy as a basis for regulatory decisions. To facilitate model execution of 3MRA for purposes of conducting uncertainty and sensitivity analysis, a PC-based supercomputer cluster was constructed. 
Design of SuperMUSE, a 125 GHz Windows-based Supercomputer for Model Uncertainty and Sensitivity Evaluation, is described, along with the conceptual layout of an accompanying Java-based parallelization software toolset. Preliminary work is also reported for a scenario involving benzene disposal that describes the relative importance of the vadose zone in driving risk levels for ecological receptors and human health. Incorporating landfills, waste piles, aerated tanks, surface impoundments, and land application units, the site-based data used in the analysis included 201 national facilities representing 419 site-WMU combinations.
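The kind of Monte Carlo uncertainty propagation that such a cluster parallelizes can be sketched in a few lines. The risk function, parameter names and distributions below are invented for illustration and are not the 3MRA equations:

```python
import random
import statistics

def simulate_risk(k_leach, dilution, toxicity):
    # Hypothetical toy risk metric: a release attenuated by dilution,
    # scaled against a toxicity threshold (not the actual 3MRA model).
    return (k_leach / dilution) / toxicity

random.seed(42)
samples = []
for _ in range(10_000):
    # Sample the stochastic inputs from assumed distributions.
    k_leach = random.lognormvariate(mu=0.0, sigma=0.5)
    dilution = random.uniform(10.0, 100.0)
    toxicity = random.lognormvariate(mu=1.0, sigma=0.3)
    samples.append(simulate_risk(k_leach, dilution, toxicity))

# Summarize the output distribution, as a probabilistic assessment would.
print(f"median risk: {statistics.median(samples):.4f}")
print(f"95th percentile: {statistics.quantiles(samples, n=100)[94]:.4f}")
```

Each of the hundreds of sites would contribute such a sampled output distribution, which is why the workload parallelizes naturally across cluster nodes.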

  6. Risk of fetal mortality after exposure to Listeria monocytogenes based on dose-response data from pregnant guinea pigs and primates.

    PubMed

    Williams, Denita; Castleman, Jennifer; Lee, Chi-Ching; Mote, Beth; Smith, Mary Alice

    2009-11-01

One-third of the annual cases of listeriosis in the United States occur during pregnancy and can lead to miscarriage or stillbirth, premature delivery, or infection of the newborn. Previous risk assessments completed by the Food and Drug Administration/Food Safety and Inspection Service of the U.S. Department of Agriculture/Centers for Disease Control and Prevention (FDA/USDA/CDC) and the Food and Agriculture Organization/World Health Organization (FAO/WHO) were based on dose-response data from mice. Recent animal studies using nonhuman primates and guinea pigs have both estimated LD50s of approximately 10^7 Listeria monocytogenes colony forming units (cfu). The FAO/WHO estimated a human LD50 of 1.9 × 10^6 cfu based on data from a pregnant woman consuming contaminated soft cheese. We reevaluated risk based on dose-response curves from pregnant rhesus monkeys and guinea pigs. Using standard risk assessment methodology, including hazard identification, exposure assessment, hazard characterization, and risk characterization, risk was calculated based on the new dose-response information. To compare models, we looked at the mortality rate per serving at predicted doses ranging from 10^-4 to 10^12 L. monocytogenes cfu. Based on a serving of 10^6 L. monocytogenes cfu, the primate model predicts a death rate of 5.9 × 10^-1, compared to the FDA/USDA/CDC (fig. IV-12) predicted rate of 1.3 × 10^-7. Based on the guinea pig and primate models, the mortality rate calculated by the FDA/USDA/CDC is underestimated for this susceptible population.
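As a hedged illustration of the dose-response step, the sketch below uses a generic exponential model with the rate fixed so that P(LD50) = 0.5. It is not the fitted primate or guinea-pig curve; the LD50 is the approximate 10^7 cfu figure quoted above:

```python
import math

def mortality_per_serving(dose_cfu, ld50_cfu):
    """Exponential dose-response: P = 1 - exp(-r * dose), with r chosen
    so that P(LD50) = 0.5, i.e. r = ln(2) / LD50.  A generic illustration,
    not the models fitted in the study."""
    r = math.log(2) / ld50_cfu
    return 1.0 - math.exp(-r * dose_cfu)

# Assumed LD50 of ~1e7 cfu, as reported for the animal studies.
for dose in (1e4, 1e6, 1e7, 1e9):
    p = mortality_per_serving(dose, 1e7)
    print(f"dose {dose:.0e} cfu -> P(mortality) = {p:.2e}")
```

By construction the curve passes through 0.5 at the LD50 and rises monotonically with dose, which is the shape being compared across the mouse, guinea-pig and primate models.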

  7. A Research Roadmap for Computation-Based Human Reliability Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boring, Ronald; Mandelli, Diego; Joe, Jeffrey

    2015-08-01

The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermal-hydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full-scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates, in a hybrid fashion, elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing the modeling uncertainty found in current plant risk models.

  8. Practice management based on risk assessment.

    PubMed

    Sandberg, Hans

    2004-01-01

The management of a dental practice is most often focused on what clinicians do (production of items), and not so much on what is achieved in terms of oral health. The main reason for this is probably that it is easier to measure production and more difficult to measure health outcome. This paper presents a model based on individual risk assessment that aims to achieve a financially sound economy and good oral health. The close-to-the-clinic management tool, the HIDEP Model (Health Improvement in a DEntal Practice), was pioneered in Sweden at the end of the 1980s. The experience over a 15-year period with different elements of the model is presented, including: the basis of examination and risk assessment; motivation; task delegation and leadership issues; health-finance evaluations; and quality development within a dental clinic. DentiGroupXL, a software program designed to support work based on the model, is also described.

  9. Synthetic biology between challenges and risks: suggestions for a model of governance and a regulatory framework, based on fundamental rights.

    PubMed

    Colussi, Ilaria Anna

    2013-01-01

This paper deals with the emerging field of synthetic biology, its challenges and risks, and tries to design a model for the governance and regulation of the field. The model is called "prudent vigilance" (inspired by the report about synthetic biology drafted by the U.S. Presidential Commission on Bioethics, 2010), and it entails (a) an ongoing and periodically revised process of assessment and management of all the risks and concerns, and (b) the adoption of policies - taken through "hard law" and "soft law" sources - that are based on the principle of proportionality (between benefits and risks) and on a reasonable balancing between the different interests and rights at stake, and are oriented by a constitutional frame, which is represented by the protection of fundamental human rights emerging in the field of synthetic biology (right to life, right to health, dignity, freedom of scientific research, right to environment). After the theoretical explanation of the model, its operability is "checked" by considering its application to one specific risk raised by synthetic biology: biosecurity risk, i.e. the risk of bioterrorism.

  10. Validation of a predictive model that identifies patients at high risk of developing febrile neutropaenia following chemotherapy for breast cancer.

    PubMed

    Jenkins, P; Scaife, J; Freeman, S

    2012-07-01

We have previously developed a predictive model that identifies patients at increased risk of febrile neutropaenia (FN) following chemotherapy, based on pretreatment haematological indices. This study was designed to validate our earlier findings in a separate cohort of patients undergoing more myelosuppressive chemotherapy supported by growth factors. We conducted a retrospective analysis of 263 patients who had been treated with adjuvant docetaxel, adriamycin and cyclophosphamide (TAC) chemotherapy for breast cancer. All patients received prophylactic pegfilgrastim and the majority also received prophylactic antibiotics. Thirty-one patients (12%) developed FN. Using our previous model, patients in the highest risk group (pretreatment absolute neutrophil count ≤3.1 × 10^9/l and absolute lymphocyte count ≤1.5 × 10^9/l) comprised 8% of the total population and had a 33% risk of developing FN. Compared with the rest of the cohort, this group had a 3.4-fold increased risk of developing FN (P=0.001) and a 5.2-fold increased risk of cycle 1 FN (P<0.001). A simple model based on the pretreatment differential white blood cell count can be applied to pegfilgrastim-supported patients to identify those who are at higher risk of FN.

  11. Improving the evidence base for services working with youth at-risk of involvement in the criminal justice system: developing a standardised program approach.

    PubMed

Knight, Alice; Maple, Myfanwy; Shakeshaft, Anthony; Shakeshaft, Bernie; Pearce, Tania

    2018-04-16

    Young people who engage in multiple risk behaviour (high-risk young people) such as substance abuse, antisocial behaviour, low engagement in education and employment, self-harm or suicide ideation are more likely to experience serious harms later in life including homelessness, incarceration, violence and premature death. In addition to personal disadvantage, these harms represent an avoidable social and economic cost to society. Despite these harms, there is insufficient evidence about how to improve outcomes for high-risk young people. A key reason for this is a lack of standardisation in the way in which programs provided by services are defined and evaluated. This paper describes the development of a standardised intervention model for high-risk young people. The model can be used by service providers to achieve greater standardisation across their programs, outcomes and outcome measures. To demonstrate its feasibility, the model is applied to an existing program for high-risk young people. The development and uptake of a standardised intervention model for these programs will help to more rapidly develop a larger and more rigorous evidence-base to improve outcomes for high-risk young people.

  12. Predictive models to assess risk of type 2 diabetes, hypertension and comorbidity: machine-learning algorithms and validation using national health data from Kuwait--a cohort study.

    PubMed

    Farran, Bassam; Channanath, Arshad Mohamed; Behbehani, Kazem; Thanaraj, Thangavel Alphonse

    2013-05-14

We build classification models and risk assessment tools for diabetes, hypertension and comorbidity using machine-learning algorithms on data from Kuwait. We model the increased proneness of diabetic patients to develop hypertension and vice versa. We ascertain the importance of ethnicity (and of natives vs expatriate migrants) and of using regional data in risk assessment. Retrospective cohort study. Four machine-learning techniques were used: logistic regression, k-nearest neighbours (k-NN), multifactor dimensionality reduction and support vector machines. The study uses fivefold cross validation to obtain generalisation accuracies and errors. Kuwait Health Network (KHN), which integrates data from primary health centres and hospitals in Kuwait. 270 172 hospital visitors (of which 89 858 are diabetic, 58 745 hypertensive and 30 522 comorbid) comprising Kuwaiti natives and Asian and Arab expatriates. Incident type 2 diabetes, hypertension and comorbidity. Classification accuracies of >85% (for diabetes) and >90% (for hypertension) are achieved using only simple non-laboratory-based parameters. Risk assessment tools based on k-NN classification models are able to assign 'high' risk to 75% of diabetic patients and to 94% of hypertensive patients. Only 5% of diabetic patients are assigned 'low' risk. Asian-specific models and assessments perform even better. Pathological conditions of diabetes (in the general population or in the hypertensive population) and of hypertension are modelled. Two-stage aggregate classification models and risk assessment tools, built by combining both component models on diabetes (or on hypertension), perform better than the individual models. Data on diabetes, hypertension and comorbidity from the cosmopolitan State of Kuwait are available for the first time. This enabled us to apply four different case-control models to assess risks. These tools aid in the preliminary non-intrusive assessment of the population. Ethnicity is seen to be significant to the predictive models. Risk assessments need to be developed using regional data, as we demonstrate by examining the applicability of the American Diabetes Association online calculator on data from Kuwait.
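A minimal sketch of k-NN classification with fivefold cross validation of the kind described above. The features, scales and labelling rule are assumptions on synthetic data, not the Kuwait Health Network records:

```python
import math
import random

def knn_predict(train, query, k=5):
    # Majority vote among the k nearest training points (Euclidean distance).
    nearest = sorted(train, key=lambda rec: math.dist(rec[0], query))[:k]
    votes = sum(label for _, label in nearest)
    return 1 if 2 * votes >= k else 0

def fivefold_accuracy(data, k=5):
    # Fivefold cross validation: hold out each fifth in turn.
    random.shuffle(data)
    folds = [data[i::5] for i in range(5)]
    accuracies = []
    for i in range(5):
        test = folds[i]
        train = [rec for j, fold in enumerate(folds) if j != i for rec in fold]
        hits = sum(knn_predict(train, x, k) == y for x, y in test)
        accuracies.append(hits / len(test))
    return sum(accuracies) / len(accuracies)

random.seed(0)
# Synthetic records with two scaled non-laboratory features (age, BMI)
# and a hypothetical high/low risk label - not real patient data.
data = []
for _ in range(300):
    age, bmi = random.uniform(20, 80) / 80, random.uniform(18, 40) / 40
    label = 1 if age + bmi > 1.1 else 0  # assumed labelling rule
    data.append(((age, bmi), label))

print(f"fivefold CV accuracy: {fivefold_accuracy(data):.2f}")
```

The two-stage aggregate models in the study would layer a second classifier on top of such component models, but the hold-out logic is the same.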

  13. Leadership of risk decision making in a complex, technology organization: The deliberative decision making model

    NASA Astrophysics Data System (ADS)

    Flaming, Susan C.

    2007-12-01

The continuing saga of satellite technology development is as much a story of successful risk management as of innovative engineering. How do program leaders on complex technology projects manage high-stakes risks that threaten business success and satellite performance? This grounded theory study of risk decision making portrays decision leadership practices at one communication satellite company. Integrated product team (IPT) leaders of multi-million dollar programs were interviewed and observed to develop an extensive description of the leadership skills required to navigate organizational influences and drive challenging risk decisions to closure. Based on the study's findings, the researcher proposes a new decision making model, Deliberative Decision Making, to describe the program leaders' cognitive and organizational leadership practices. This Deliberative Model extends the insights of prominent decision making models, including the rational (or classical) and the naturalistic, and qualifies claims made by bounded rationality theory. The Deliberative Model describes how leaders proactively engage resources to play a variety of decision leadership roles. The Model incorporates six distinct types of leadership decision activities, undertaken in varying sequence based on the challenges posed by specific risks. Novel features of the Deliberative Decision Model include: an inventory of leadership methods for managing task challenges, potential stakeholder bias and debates; four types of leadership meta-decisions that guide decision processes; and aligned organizational culture. Both supporting and constraining organizational influences were observed as leaders managed major risks, requiring active leadership on the most difficult decisions. Although the company's engineering culture emphasized the importance of data-based decisions, the uncertainties intrinsic to satellite risks required expert engineering judgment to be exercised throughout. 
An investigation into the co-variation of decision methods with uncertainty suggests that perceived risk severity may serve as a robust indicator for choices about decision practices. The Deliberative Decision processes incorporate multiple organizational and cultural controls as cross-checks to mitigate potential parochial bias of individuals, stakeholder groups, or leaders. Overall the Deliberative Decision framework describes how expert leadership practices, supportive organizational systems along with aligned cultural values and behavioral norms help leaders drive high stakes risk decisions to closure in this complex, advanced-technology setting.

  14. Holistic flood risk assessment using agent-based modelling: the case of Sint Maarten Island

    NASA Astrophysics Data System (ADS)

    Abayneh Abebe, Yared; Vojinovic, Zoran; Nikolic, Igor; Hammond, Michael; Sanchez, Arlex; Pelling, Mark

    2015-04-01

    Floods in coastal regions are regarded as one of the most dangerous and harmful disasters. Though commonly referred to as natural disasters, coastal floods are also attributable to various social, economic, historical and political issues. Rapid urbanisation in coastal areas combined with climate change and poor governance can lead to a significant increase in the risk of pluvial flooding coinciding with fluvial and coastal flooding posing a greater risk of devastation in coastal communities. Disasters that can be triggered by hydro-meteorological events are interconnected and interrelated with both human activities and natural processes. They, therefore, require holistic approaches to help understand their complexity in order to design and develop adaptive risk management approaches that minimise social and economic losses and environmental impacts, and increase resilience to such events. Being located in the North Atlantic Ocean, Sint Maarten is frequently subjected to hurricanes. In addition, the stormwater catchments and streams on Sint Maarten have several unique characteristics that contribute to the severity of flood-related impacts. Urban environments are usually situated in low-lying areas, with little consideration for stormwater drainage, and as such are subject to flash flooding. Hence, Sint Maarten authorities drafted policies to minimise the risk of flood-related disasters on the island. In this study, an agent-based model is designed and applied to understand the implications of introduced policies and regulations, and to understand how different actors' behaviours influence the formation, propagation and accumulation of flood risk. The agent-based model built for this study is based on the MAIA meta-model, which helps to decompose, structure and conceptualize socio-technical systems with an agent-oriented perspective, and is developed using the NetLogo simulation environment. 
The agents described in this model are households and businesses, and policies on spatial planning rules are implemented. Preliminary results demonstrate the evolving nature of flood risks and describe the effectiveness of different planning policies to reduce risk and increase resilience.

  15. Development of a risk-based environmental management tool for drilling discharges. Summary of a four-year project.

    PubMed

    Singsaas, Ivar; Rye, Henrik; Frost, Tone Karin; Smit, Mathijs G D; Garpestad, Eimund; Skare, Ingvild; Bakke, Knut; Veiga, Leticia Falcao; Buffagni, Melania; Follum, Odd-Arne; Johnsen, Ståle; Moltu, Ulf-Einar; Reed, Mark

    2008-04-01

    This paper briefly summarizes the ERMS project and presents the developed model by showing results from environmental fates and risk calculations of a discharge from offshore drilling operations. The developed model calculates environmental risks for the water column and sediments resulting from exposure to toxic stressors (e.g., chemicals) and nontoxic stressors (e.g., suspended particles, sediment burial). The approach is based on existing risk assessment techniques described in the European Union technical guidance document on risk assessment and species sensitivity distributions. The model calculates an environmental impact factor, which characterizes the overall potential impact on the marine environment in terms of potentially impacted water volume and sediment area. The ERMS project started in 2003 and was finalized in 2007. In total, 28 scientific reports and 9 scientific papers have been delivered from the ERMS project (http://www.sintef.no/erms).
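The species-sensitivity-distribution step behind such an environmental impact factor can be sketched as a potentially-affected-fraction calculation, assuming a log-normal SSD. The distribution parameters below are hypothetical, not ERMS values:

```python
import math

def potentially_affected_fraction(conc, mu_log10, sigma_log10):
    """Fraction of species affected at exposure concentration `conc`,
    assuming a log-normal species sensitivity distribution (SSD):
    mu/sigma are the mean and sd of the log10 species effect levels."""
    z = (math.log10(conc) - mu_log10) / sigma_log10
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))  # standard normal CDF

# Hypothetical SSD: mean log10 effect concentration 1.0 (10 mg/L), sd 0.7.
for c in (0.1, 1.0, 10.0, 100.0):
    paf = potentially_affected_fraction(c, 1.0, 0.7)
    print(f"{c:6.1f} mg/L -> PAF = {paf:.3f}")
```

Summing such fractions over the exposed water volume and sediment area is, in spirit, how an overall impact factor for a discharge is assembled.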

  16. A DISCUSSION ON DIFFERENT APPROACHES FOR ASSESSING LIFETIME RISKS OF RADON-INDUCED LUNG CANCER.

    PubMed

    Chen, Jing; Murith, Christophe; Palacios, Martha; Wang, Chunhong; Liu, Senlin

    2017-11-01

Lifetime risks of radon-induced lung cancer were assessed based on epidemiological approaches for the Canadian, Swiss and Chinese populations, using the most recent vital statistics and radon distribution characteristics available for each country. In the risk calculation, the North American residential radon risk model was used for the Canadian population, the European residential radon risk model for the Swiss population, the Chinese residential radon risk model for the Chinese population, and the EPA/BEIR-VI radon risk model for all three populations. The results were compared with the risk calculated from the International Commission on Radiological Protection (ICRP)'s exposure-to-risk conversion coefficients. In view of the fact that the ICRP coefficients were recommended for radiation protection of all populations, it was concluded that, generally speaking, lifetime absolute risks calculated with the ICRP-recommended coefficients agree reasonably well with the range of radon-induced lung cancer risk predicted by risk models derived from epidemiological pooling analyses. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  17. Rank-based methods for modeling dependence between loss triangles.

    PubMed

    Côté, Marie-Pier; Genest, Christian; Abdallah, Anas

    2016-01-01

    In order to determine the risk capital for their aggregate portfolio, property and casualty insurance companies must fit a multivariate model to the loss triangle data relating to each of their lines of business. As an inadequate choice of dependence structure may have an undesirable effect on reserve estimation, a two-stage inference strategy is proposed in this paper to assist with model selection and validation. Generalized linear models are first fitted to the margins. Standardized residuals from these models are then linked through a copula selected and validated using rank-based methods. The approach is illustrated with data from six lines of business of a large Canadian insurance company for which two hierarchical dependence models are considered, i.e., a fully nested Archimedean copula structure and a copula-based risk aggregation model.
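The rank-based second stage can be sketched by converting the standardized residuals to pseudo-observations (normalized ranks) and computing a rank correlation on them. The residual values below are invented for illustration, not the insurer's loss triangle data:

```python
def pseudo_observations(x):
    """Map residuals to normalized ranks in (0, 1) - the rank-based
    inputs used for copula selection and validation."""
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])
    u = [0.0] * n
    for rank, i in enumerate(order, start=1):
        u[i] = rank / (n + 1)
    return u

def spearman_rho(x, y):
    """Spearman's rank correlation, computed from pseudo-observations."""
    u, v = pseudo_observations(x), pseudo_observations(y)
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((b - mv) ** 2 for b in v) ** 0.5
    return cov / (su * sv)

# Hypothetical standardized residuals from two lines of business.
res_auto = [0.3, -1.2, 0.8, 1.5, -0.4, 0.1, -0.9, 2.0]
res_home = [0.5, -0.7, 0.6, 1.1, -0.2, 0.4, -1.0, 1.7]
print(f"Spearman's rho = {spearman_rho(res_auto, res_home):.3f}")
```

Because only ranks enter, the dependence measure is unaffected by the marginal GLM fits, which is precisely why the two-stage strategy separates margin estimation from copula selection.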

  18. Ecological risk assessment conceptual model formulation for nonindigenous species.

    PubMed

    Landis, Wayne G

    2004-08-01

    This article addresses the application of ecological risk assessment at the regional scale to the prediction of impacts due to invasive or nonindigenous species (NIS). The first section describes risk assessment, the decision-making process, and introduces regional risk assessment. A general conceptual model for the risk assessment of NIS is then presented based upon the regional risk assessment approach. Two diverse examples of the application of this approach are presented. The first example is based upon the dynamics of introduced plasmids into bacteria populations. The second example is the application risk assessment approach to the invasion of a coastal marine site of Cherry Point, Washington, USA by the European green crab. The lessons learned from the two examples demonstrate that assessment of the risks of invasion of NIS will have to incorporate not only the characteristics of the invasive species, but also the other stresses and impacts affecting the region of interest.

  19. Risk Assessment in Underground Coalmines Using Fuzzy Logic in the Presence of Uncertainty

    NASA Astrophysics Data System (ADS)

    Tripathy, Debi Prasad; Ala, Charan Kumar

    2018-04-01

Fatal accidents occur every year as regular events in the Indian coal mining industry. To improve safety conditions, it has become a prerequisite to perform risk assessments of various operations in mines. However, due to uncertain accident data, it is hard to conduct a risk assessment in mines. The objective of this study is to present a method to assess safety risks in underground coalmines. The assessment of safety risks is based on the fuzzy reasoning approach. A Mamdani fuzzy logic model is developed in the fuzzy logic toolbox of MATLAB. A case study is used to demonstrate the applicability of the developed model. The summary of risk evaluation in the case study mine indicated that mine fire has the highest risk level among all the hazard factors. This study could help mine management to prepare safety measures based on the risk rankings obtained.
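A minimal Mamdani-style inference sketch in Python (rather than the MATLAB toolbox), with a two-rule base, min implication and centroid defuzzification. The membership functions and rules are assumptions for illustration, not the paper's rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def assess_risk(probability, severity):
    """Two-rule Mamdani sketch on a [0, 1] universe:
    IF probability high AND severity high THEN risk high;
    IF probability low  OR  severity low  THEN risk low."""
    high_fire = min(tri(probability, 0.4, 1.0, 1.6), tri(severity, 0.4, 1.0, 1.6))
    low_fire = max(tri(probability, -0.6, 0.0, 0.6), tri(severity, -0.6, 0.0, 0.6))
    # Aggregate clipped output sets, then centroid-defuzzify on a grid.
    xs = [i / 100 for i in range(101)]
    agg = [max(min(high_fire, tri(x, 0.4, 1.0, 1.6)),
               min(low_fire, tri(x, -0.6, 0.0, 0.6))) for x in xs]
    total = sum(agg)
    return sum(x * m for x, m in zip(xs, agg)) / total if total else 0.0

print(f"risk(0.9, 0.8) = {assess_risk(0.9, 0.8):.2f}")  # high inputs
print(f"risk(0.1, 0.2) = {assess_risk(0.1, 0.2):.2f}")  # low inputs
```

A full assessment would use several input hazard factors and a larger rule base per hazard, but the fire-and-aggregate-and-defuzzify cycle is the same.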

  20. A method for determining weights for excess relative risk and excess absolute risk when applied in the calculation of lifetime risk of cancer from radiation exposure.

    PubMed

    Walsh, Linda; Schneider, Uwe

    2013-03-01

Radiation-related risks of cancer can be transported from one population to another population at risk, for the purpose of calculating lifetime risks from radiation exposure. Transfer via excess relative risks (ERR) or excess absolute risks (EAR) or a mixture of both (i.e., from the life span study (LSS) of Japanese atomic bomb survivors) has been done in the past based on qualitative weighting. Consequently, the values of the weights applied and the method of application of the weights (i.e., as additive or geometric weighted means) have varied both between reports produced at different times by the same regulatory body and also between reports produced at similar times by different regulatory bodies. Since the gender and age patterns are often markedly different between EAR and ERR models, it is useful to have an evidence-based method for determining the relative goodness of fit of such models to the data. This paper identifies a method, using Akaike model weights, which could aid expert judgment and be applied to help to achieve consistency of approach and quantitative evidence-based results in future health risk assessments. The results of applying this method to recent LSS cancer incidence models are that the relative EAR weighting by solid cancer site, on a scale of 0-1, is zero for breast and colon, 0.02 for all solid, 0.03 for lung, 0.08 for liver, 0.15 for thyroid, 0.18 for bladder and 0.93 for stomach. The EAR weighting for female breast cancer increases from 0 to 0.3, if a generally observed change in the trend between female age-specific breast cancer incidence rates and attained age, associated with menopause, is accounted for in the EAR model. Application of this method to preferred models from a study of multi-model inference from many models fitted to the LSS leukemia mortality data, results in an EAR weighting of 0. From these results it can be seen that lifetime risk transfer is most highly weighted by EAR only for stomach cancer. 
However, the generalization and interpretation of radiation effect estimates based on the LSS cancer data, when projected to other populations, are particularly uncertain if considerable differences exist between site-specific baseline rates in the LSS and the other populations of interest. Definitive conclusions, regarding the appropriate method for transporting cancer risks, are limited by a lack of knowledge in several areas including unknown factors and uncertainties in biological mechanisms and genetic and environmental risk factors for carcinogenesis; uncertainties in radiation dosimetry; and insufficient statistical power and/or incomplete follow-up in data from radio-epidemiological studies.
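The Akaike weighting itself is a short formula, w_i = exp(-ΔAIC_i/2) / Σ_j exp(-ΔAIC_j/2) with ΔAIC_i = AIC_i - min(AIC). A sketch with hypothetical AIC values (not the fitted LSS models):

```python
import math

def akaike_weights(aic_values):
    """Akaike model weights: relative likelihood of each model,
    normalized so the weights sum to 1."""
    best = min(aic_values)
    rel = [math.exp(-0.5 * (aic - best)) for aic in aic_values]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical AICs for an EAR and an ERR model fitted to the same data.
w_ear, w_err = akaike_weights([1210.4, 1205.1])
print(f"EAR weight = {w_ear:.3f}, ERR weight = {w_err:.3f}")
```

With these invented AICs the better-fitting ERR model receives most of the weight; applied per cancer site to the LSS incidence models, this is how the 0-to-0.93 EAR weightings above arise.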

  1. Predictive Modeling of Risk Associated with Temperature Extremes over Continental US

    NASA Astrophysics Data System (ADS)

    Kravtsov, S.; Roebber, P.; Brazauskas, V.

    2016-12-01

We build a statistically accurate, essentially bias-free empirical emulator of atmospheric surface temperature and apply it to meteorological risk assessment over the domain of the continental US. The resulting prediction scheme achieves an order-of-magnitude or larger gain in numerical efficiency compared with schemes based on high-resolution dynamical atmospheric models, leading to unprecedented accuracy of the estimated risk distributions. The empirical model construction methodology is based on our earlier work, but is further modified to account for the influence of large-scale, global climate change on regional US weather and climate. The resulting estimates of the time-dependent, spatially extended probability of temperature extremes over the simulation period can be used as a risk management tool by insurance companies and regulatory governmental agencies.

  2. Risk, individual differences, and environment: an Agent-Based Modeling approach to sexual risk-taking.

    PubMed

    Nagoski, Emily; Janssen, Erick; Lohrmann, David; Nichols, Eric

    2012-08-01

    Risky sexual behaviors, including the decision to have unprotected sex, result from interactions between individuals and their environment. The current study explored the use of Agent-Based Modeling (ABM)-a methodological approach in which computer-generated artificial societies simulate human sexual networks-to assess the influence of heterogeneity of sexual motivation on the risk of contracting HIV. The models successfully simulated some characteristics of human sexual systems, such as the relationship between individual differences in sexual motivation (sexual excitation and inhibition) and sexual risk, but failed to reproduce the scale-free distribution of number of partners observed in the real world. ABM has the potential to inform intervention strategies that target the interaction between an individual and his or her social environment.

  3. Cyber situation awareness: modeling detection of cyber attacks with instance-based learning theory.

    PubMed

    Dutt, Varun; Ahn, Young-Suk; Gonzalez, Cleotilde

    2013-06-01

    To determine the effects of an adversary's behavior on the defender's accurate and timely detection of network threats. Cyber attacks cause major work disruption. It is important to understand how a defender's behavior (experience and tolerance to threats), as well as adversarial behavior (attack strategy), might impact the detection of threats. In this article, we use cognitive modeling to make predictions regarding these factors. Different model types representing a defender, based on Instance-Based Learning Theory (IBLT), faced different adversarial behaviors. A defender's model was defined by experience of threats: threat-prone (90% threats and 10% nonthreats) and nonthreat-prone (10% threats and 90% nonthreats); and different tolerance levels to threats: risk-averse (model declares a cyber attack after perceiving one threat out of eight total) and risk-seeking (model declares a cyber attack after perceiving seven threats out of eight total). Adversarial behavior is simulated by considering different attack strategies: patient (threats occur late) and impatient (threats occur early). For an impatient strategy, risk-averse models with threat-prone experiences show improved detection compared with risk-seeking models with nonthreat-prone experiences; however, the same is not true for a patient strategy. Based upon model predictions, a defender's prior threat experiences and his or her tolerance to threats are likely to predict detection accuracy; but considering the nature of adversarial behavior is also important. Decision-support tools that consider the role of a defender's experience and tolerance to threats along with the nature of adversarial behavior are likely to improve a defender's overall threat detection.
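The tolerance thresholds can be caricatured with a simple counting defender. This is a deliberate simplification for illustration; the actual models use IBLT memory and retrieval dynamics, not a fixed counter:

```python
def detect_step(sequence, tolerance):
    """Return the step at which a defender declares a cyber attack: the
    point where the cumulative number of perceived threats reaches the
    tolerance threshold.  Returns None if no attack is ever declared."""
    threats_seen = 0
    for step, is_threat in enumerate(sequence, start=1):
        threats_seen += is_threat
        if threats_seen >= tolerance:
            return step
    return None

# Eight-event sequences with threats placed early (impatient adversary)
# or late (patient adversary); 1 = threat, 0 = nonthreat.
impatient = [1, 1, 1, 1, 0, 0, 0, 0]
patient = [0, 0, 0, 0, 1, 1, 1, 1]

for name, seq in (("impatient", impatient), ("patient", patient)):
    averse = detect_step(seq, tolerance=1)   # risk-averse: 1 threat of 8
    seeking = detect_step(seq, tolerance=7)  # risk-seeking: 7 threats of 8
    print(f"{name}: risk-averse detects at step {averse}, risk-seeking at {seeking}")
```

Even this caricature reproduces the qualitative finding: against an impatient adversary the risk-averse defender detects almost immediately, while a patient adversary delays detection for every defender type.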

  4. An example of population-level risk assessments for small mammals using individual-based population models.

    PubMed

    Schmitt, Walter; Auteri, Domenica; Bastiansen, Finn; Ebeling, Markus; Liu, Chun; Luttik, Robert; Mastitsky, Sergey; Nacci, Diane; Topping, Chris; Wang, Magnus

    2016-01-01

This article presents a case study demonstrating the application of 3 individual-based, spatially explicit population models (IBMs, also known as agent-based models) in ecological risk assessments to predict long-term effects of a pesticide on populations of small mammals. The 3 IBMs each used a hypothetical fungicide (FungicideX) in different scenarios: spraying in cereals (common vole, Microtus arvalis), spraying in orchards (field vole, Microtus agrestis), and cereal seed treatment (wood mouse, Apodemus sylvaticus). Each scenario used existing model landscapes, which differed greatly in size and structural complexity. The toxicological profile of FungicideX was defined so that the deterministic long-term first tier risk assessment would result in high risk to small mammals, thus providing the opportunity to use the IBMs for risk assessment refinement (i.e., higher tier risk assessment). Despite differing internal model design and scenarios, results indicated in all 3 cases low population sensitivity unless FungicideX was applied at very high (×10) rates. Recovery from local population impacts was generally fast. Only when patch extinctions occurred in simulations of intentionally high acute toxic effects were recovery periods, then determined by recolonization, of any concern. Conclusions include recommendations for the most important input considerations, including the selection of exposure levels, duration of simulations, statistically robust number of replicates, and endpoints to report. However, further investigation and agreement are needed to develop recommendations for landscape attributes such as size, structure, and crop rotation to define appropriate regulatory risk assessment scenarios. Overall, the application of IBMs provides multiple advantages to higher tier ecological risk assessments for small mammals, including consistent and transparent direct links to specific protection goals, and the consideration of more realistic scenarios. 
© 2015 SETAC.
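
    The scenario logic above (background demography, an acute application event, and subsequent recovery) can be caricatured as a minimal individual-level simulation. All rates, the pulse timing, and the mortality fraction below are hypothetical placeholders, not values from the three case studies, and real IBMs are spatially explicit rather than this single well-mixed population:

```python
import random

# Minimal individual-level sketch: background births and deaths, an
# acute pesticide mortality pulse, and recovery afterwards. All rates
# and the pulse parameters are hypothetical placeholders.

def simulate_population(n0=200, steps=50, birth=0.2, death=0.15,
                        pulse_step=10, pulse_mortality=0.6, seed=7):
    """Track abundance through an acute mortality pulse and recovery."""
    rng = random.Random(seed)
    n = n0
    trace = []
    for t in range(steps):
        births = sum(1 for _ in range(n) if rng.random() < birth)
        deaths = sum(1 for _ in range(n) if rng.random() < death)
        n = max(0, n + births - deaths)
        if t == pulse_step:  # acute application event
            n = int(n * (1.0 - pulse_mortality))
        trace.append(n)
    return trace

trace = simulate_population()
```

    With these placeholder rates the population drops sharply at the pulse and regrows within the simulated horizon, mirroring the "fast recovery" finding; longer recovery only emerges when mortality is pushed high enough to empty patches.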

  5. Using toxicokinetic-toxicodynamic modeling as an acute risk assessment refinement approach in vertebrate ecological risk assessment.

    PubMed

    Ducrot, Virginie; Ashauer, Roman; Bednarska, Agnieszka J; Hinarejos, Silvia; Thorbek, Pernille; Weyman, Gabriel

    2016-01-01

    Recent guidance identified toxicokinetic-toxicodynamic (TK-TD) modeling as a relevant approach for risk assessment refinement. Yet, its added value compared to other refinement options is not detailed, and how to conduct the modeling appropriately is not explained. This case study addresses these issues through 2 examples of individual-level risk assessment for 2 hypothetical plant protection products: 1) evaluating the risk for small granivorous birds and small omnivorous mammals of a single application, as a seed treatment in winter cereals, and 2) evaluating the risk for fish after a pulsed treatment in the edge-of-field zone. Using acute test data, we conducted the first tier risk assessment as defined in the European Food Safety Authority (EFSA) guidance. When first tier risk assessment highlighted a concern, refinement options were discussed. Cases where the use of models should be preferred over other existing refinement approaches were highlighted. We then practically conducted the risk assessment refinement by using 2 different models as examples. In example 1, a TK model accounting for toxicokinetics and relevant feeding patterns in the skylark and in the wood mouse was used to predict internal doses of the hypothetical active ingredient in individuals, based on relevant feeding patterns in an in-crop situation, and identify the residue levels leading to mortality. In example 2, a TK-TD model accounting for toxicokinetics, toxicodynamics, and relevant exposure patterns in the fathead minnow was used to predict the time-course of fish survival for relevant FOCUS SW exposure scenarios and identify which scenarios might lead to mortality. Models were calibrated using available standard data and implemented to simulate the time-course of internal dose of active ingredient or survival for different exposure scenarios. Simulation results were discussed and used to derive the risk assessment refinement endpoints used for decision. 
Finally, we compared the "classical" risk assessment approach with the model-based approach. These comparisons showed that TK and TK-TD models can bring more realism to the risk assessment by making it possible to study realistic exposure scenarios and to simulate relevant mechanisms of effects (including delayed toxicity and recovery). Noticeably, using TK-TD models is currently the most relevant way to directly connect realistic exposure patterns to effects. We conclude with recommendations on how to properly use TK and TK-TD models in acute risk assessment for vertebrates. © 2015 SETAC.
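
    The TK step in example 1 amounts to integrating an internal dose from an external exposure pattern. A minimal one-compartment sketch, assuming hypothetical uptake and elimination rate constants (`k_in`, `k_out`) and a simple pulsed exposure rather than the case study's calibrated parameters:

```python
# One-compartment toxicokinetic sketch of the internal-dose step:
# dC/dt = k_in * E(t) - k_out * C(t), with E(t) an external exposure
# pulse. Rate constants and the pulse pattern are hypothetical.

def simulate_tk(exposure, k_in=0.5, k_out=0.2, dt=0.1):
    """Euler-integrate internal concentration over an exposure series."""
    c = 0.0
    trace = []
    for e in exposure:
        c += dt * (k_in * e - k_out * c)
        trace.append(c)
    return trace

# Pulsed exposure: on for 20 steps, then off (edge-of-field pulse analogue)
pulse = [1.0] * 20 + [0.0] * 80
trace = simulate_tk(pulse)
peak = max(trace)  # peak internal dose occurs at the end of the pulse
```

    A TK-TD extension would map the concentration trace onto a hazard or damage model to predict survival over time, which is the link to effects the abstract emphasizes.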

  6. An empirical assessment of driver motivation and emotional states in perceived safety margins under varied driving conditions.

    PubMed

    Zhang, Yu; Kaber, David B

    2013-01-01

    Motivation models of driving behaviour postulate that driver motives and emotional states dictate risk tolerance under various traffic conditions. The present study used time-based and driver-performance-based payment systems to manipulate motivation and risk-taking behaviour. Ten participants drove to a predefined location in a simulated driving environment. Traffic patterns (density and velocity) were manipulated to cause driver behaviour adjustments due to the need to conform with the social norms of the roadway. Driving environment complexity was investigated as a mediating factor in risk tolerance. Results revealed that the performance-based payment system was more closely related to risk-taking behaviour than the time-based payment system. Drivers conformed with social norms associated with specific traffic patterns. Higher roadway complexity led to more conservative safety margins and speeds. This research contributes to the further development of motivational models of driver behaviour. This study provides empirical justification for two motivation factors in driver risk-taking decisions: compliance with social norms and emotions triggered by incentives. Environment complexity was identified as a mediating factor in the motivational behaviour model. This study also recommended safety margin measures sensitive to changes in driver risk tolerance.

  7. Sensitivity to Uncertainty in Asteroid Impact Risk Assessment

    NASA Astrophysics Data System (ADS)

    Mathias, D.; Wheeler, L.; Prabhu, D. K.; Aftosmis, M.; Dotson, J.; Robertson, D. K.

    2015-12-01

    The Engineering Risk Assessment (ERA) team at NASA Ames Research Center is developing a physics-based impact risk model for probabilistically assessing threats from potential asteroid impacts on Earth. The model integrates probabilistic sampling of asteroid parameter ranges with physics-based analyses of entry, breakup, and impact to estimate damage areas and casualties from various impact scenarios. Assessing these threats is a highly coupled, dynamic problem involving significant uncertainties in the range of expected asteroid characteristics, how those characteristics may affect the level of damage, and the fidelity of various modeling approaches and assumptions. The presented model is used to explore the sensitivity of impact risk estimates to these uncertainties in order to gain insight into what additional data or modeling refinements are most important for producing effective, meaningful risk assessments. In the extreme cases of very small or very large impacts, the results are generally insensitive to many of the characterization and modeling assumptions. However, the nature of the sensitivity can change across moderate-sized impacts. Results will focus on the value of additional information in this critical, mid-size range, and how this additional data can support more robust mitigation decisions.
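
    The risk loop described above can be sketched as a small Monte Carlo: sample asteroid parameters from assumed ranges, convert them to impact energy, scale energy to a damage area, and average the expected casualties. Every distribution and scaling constant below is an illustrative placeholder, not a value from the ERA model:

```python
import math
import random

# Monte Carlo sketch of probabilistic impact-risk estimation. The
# parameter distributions, the cube-root damage scaling, and the uniform
# population density are all illustrative assumptions.

def expected_casualties(n_samples=10_000, pop_density=50.0, seed=1):
    """Mean casualties per impact over sampled entry scenarios."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        d = rng.lognormvariate(3.0, 1.0)       # diameter [m]
        rho = rng.uniform(1500.0, 3500.0)      # bulk density [kg/m^3]
        v = rng.uniform(12e3, 25e3)            # entry speed [m/s]
        mass = rho * (math.pi / 6.0) * d ** 3  # spherical mass [kg]
        e_mt = 0.5 * mass * v ** 2 / 4.184e15  # kinetic energy [Mt TNT]
        radius_km = 2.0 * e_mt ** (1.0 / 3.0)  # cube-root damage scaling
        total += pop_density * math.pi * radius_km ** 2
    return total / n_samples
```

    Sensitivity studies of the kind the abstract describes amount to re-running such a loop while widening or narrowing individual parameter ranges and observing how much the output risk estimate moves.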

  8. Risk-Based School Inspections: Impact of Targeted Inspection Approaches on Dutch Secondary Schools

    ERIC Educational Resources Information Center

    Ehren, Melanie C.; Shackleton, Nichola

    2016-01-01

    In most countries, publicly funded schools are held accountable to one inspectorate and are judged against agreed national standards. Many inspectorates of education have recently moved towards more proportional risk-based inspection models, targeting high-risk schools for visits, while schools with satisfactory student attainment levels are…

  9. Use and Customization of Risk Scores for Predicting Cardiovascular Events Using Electronic Health Record Data.

    PubMed

    Wolfson, Julian; Vock, David M; Bandyopadhyay, Sunayan; Kottke, Thomas; Vazquez-Benitez, Gabriela; Johnson, Paul; Adomavicius, Gediminas; O'Connor, Patrick J

    2017-04-24

    Clinicians who are using the Framingham Risk Score (FRS) or the American College of Cardiology/American Heart Association Pooled Cohort Equations (PCE) to estimate risk for their patients based on electronic health data (EHD) face 4 questions. (1) Do published risk scores applied to EHD yield accurate estimates of cardiovascular risk? (2) Are FRS risk estimates, which are based on data that are up to 45 years old, valid for a contemporary patient population seeking routine care? (3) Do the PCE make the FRS obsolete? (4) Does refitting the risk score using EHD improve the accuracy of risk estimates? Data were extracted from the EHD of 84 116 adults aged 40 to 79 years who received care at a large healthcare delivery and insurance organization between 2001 and 2011. We assessed calibration and discrimination for 4 risk scores: published versions of FRS and PCE and versions obtained by refitting models using a subset of the available EHD. The published FRS was well calibrated (calibration statistic K=9.1, miscalibration ranging from 0% to 17% across risk groups), but the PCE displayed modest evidence of miscalibration (calibration statistic K=43.7, miscalibration from 9% to 31%). Discrimination was similar in both models (C-index=0.740 for FRS, 0.747 for PCE). Refitting the published models using EHD did not substantially improve calibration or discrimination. We conclude that published cardiovascular risk models can be successfully applied to EHD to estimate cardiovascular risk; the FRS remains valid and is not obsolete; and model refitting does not meaningfully improve the accuracy of risk estimates. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.
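
    The two validity checks reported above can be computed from scratch: calibration as predicted-over-observed event counts, and discrimination as the C-index, i.e., the probability that a case is ranked above a non-case. The risks and outcomes below are toy values, not the study's electronic health data:

```python
# Calibration ratio and C-index on toy data (not the study's EHD).

def calibration_ratio(pred_risks, outcomes):
    """Total predicted events divided by total observed events."""
    return sum(pred_risks) / sum(outcomes)

def c_index(pred_risks, outcomes):
    """Concordance over all case/non-case pairs (ties count 0.5)."""
    pairs = 0
    concordant = 0.0
    for i, (ri, yi) in enumerate(zip(pred_risks, outcomes)):
        for rj, yj in zip(pred_risks[i + 1:], outcomes[i + 1:]):
            if yi != yj:
                pairs += 1
                case, ctrl = (ri, rj) if yi > yj else (rj, ri)
                if case > ctrl:
                    concordant += 1.0
                elif case == ctrl:
                    concordant += 0.5
    return concordant / pairs

risks = [0.9, 0.8, 0.4, 0.3, 0.2]
events = [1, 1, 0, 1, 0]
```

    A calibration ratio near 1 and a C-index near the reported 0.74-0.75 range would correspond to the "well calibrated, similar discrimination" conclusion.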

  10. RESILIENCE THEORY AND ITS IMPLICATIONS FOR CHINESE ADOLESCENTS.

    PubMed

    Wang, Jin-Liang; Zhang, Da-Jun; Zimmerman, Marc A

    2015-10-01

    Over the past 20 years, resilience theory has attracted great attention from both researchers and mental health practitioners. Resilience is defined as a process of overcoming the negative effects of risk exposure, coping successfully with traumatic experiences, or avoiding the negative trajectories associated with risks. Three basic models of resilience have been proposed to account for the mechanism whereby promotive factors operate to alter the trajectory from risk exposure to negative consequences: compensatory model, protective model, and inoculation model. Assets and resources are two types of promotive factors found to be effective in decreasing internalizing and externalizing problems. Considering the protective or compensatory role of assets and resources in helping youth be resilient against negative effects of adversity, resilience could be applied to Chinese migrant and left-behind children who are at risk for internalizing (e.g., depression, anxiety) and externalizing problems (e.g., delinquent behaviors, cigarette and alcohol use). Additionally, psychological suzhi-based interventions, a mental health construct for individuals that focuses on a strengths-based approach, can be integrated with resilience-based approach to develop more balanced programs for positive youth development.

  11. A software quality model and metrics for risk assessment

    NASA Technical Reports Server (NTRS)

    Hyatt, L.; Rosenberg, L.

    1996-01-01

    A software quality model and its associated attributes are defined and used as the basis for a discussion of risk. Specific quality goals and attributes are selected based on their importance to a software development project and their ability to be quantified. Risks that can be determined by the model's metrics are identified. A core set of metrics relating to the software development process and its products is defined. Measurements for each metric and their usability and applicability are discussed.

  12. Development and validation of risk models to select ever-smokers for CT lung-cancer screening

    PubMed Central

    Katki, Hormuzd A.; Kovalchik, Stephanie A.; Berg, Christine D.; Cheung, Li C.; Chaturvedi, Anil K.

    2016-01-01

    Importance: The US Preventive Services Task Force (USPSTF) recommends computed-tomography (CT) lung-cancer screening for ever-smokers ages 55-80 years who smoked at least 30 pack-years with no more than 15 years since quitting. However, selecting ever-smokers for screening using individualized lung-cancer risk calculations may be more effective and efficient than current USPSTF recommendations. Objective: Comparison of modeled outcomes from risk-based CT lung-screening strategies versus USPSTF recommendations. Design/Setting/Participants: Empirical risk models for lung-cancer incidence and death in the absence of CT screening using data on ever-smokers from the Prostate, Lung, Colorectal and Ovarian Cancer Screening Trial (PLCO; 1993-2009) control group. Covariates included age, education, sex, race, smoking intensity/duration/quit-years, Body Mass Index, family history of lung-cancer, and self-reported emphysema. Model validation in the chest radiography groups of the PLCO and the National Lung Screening Trial (NLST; 2002-2009), with additional validation of the death model in the National Health Interview Survey (NHIS; 1997-2001), a representative sample of the US. Models applied to US ever-smokers ages 50-80 (NHIS 2010-2012) to estimate outcomes of risk-based selection for CT lung-screening, assuming screening for all ever-smokers yields the percent changes in lung-cancer detection and death observed in the NLST. Exposure: Annual CT lung-screening for 3 years. Main Outcomes and Measures: Model validity: calibration (number of model-predicted cases divided by number of observed cases (Estimated/Observed)) and discrimination (Area-Under-Curve (AUC)). Modeled screening outcomes: estimated number of screen-avertable lung-cancer deaths, estimated screening effectiveness (number needed to screen (NNS) to prevent 1 lung-cancer death). Results: Lung-cancer incidence and death risk models were well-calibrated in PLCO and NLST. 
The lung-cancer death model calibrated and discriminated well for US ever-smokers ages 50-80 (NHIS 1997-2001: Estimated/Observed=0.94, 95%CI=0.84-1.05; AUC=0.78, 95%CI=0.76-0.80). Under USPSTF recommendations, the models estimated 9.0 million US ever-smokers would qualify for lung-cancer screening and 46,488 (95%CI=43,924-49,053) lung-cancer deaths were estimated as screen-avertable over 5 years (estimated NNS=194, 95%CI=187-201). In contrast, risk-based selection screening the same number of ever-smokers (9.0 million) at highest 5-year lung-cancer risk (≥1.9%) was estimated to avert 20% more deaths (55,717; 95%CI=53,033-58,400) and to reduce the estimated NNS by 17% (NNS=162, 95%CI=157-166). Conclusions and Relevance: Among a cohort of US ever-smokers age 50-80 years, application of a risk-based model for CT screening for lung cancer compared with a model based on USPSTF recommendations was estimated to be associated with a greater number of lung-cancer deaths prevented over 5 years along with a lower NNS to prevent 1 lung-cancer death. PMID:27179989

  13. Development and Validation of Risk Models to Select Ever-Smokers for CT Lung Cancer Screening.

    PubMed

    Katki, Hormuzd A; Kovalchik, Stephanie A; Berg, Christine D; Cheung, Li C; Chaturvedi, Anil K

    2016-06-07

    The US Preventive Services Task Force (USPSTF) recommends computed tomography (CT) lung cancer screening for ever-smokers aged 55 to 80 years who have smoked at least 30 pack-years with no more than 15 years since quitting. However, selecting ever-smokers for screening using individualized lung cancer risk calculations may be more effective and efficient than current USPSTF recommendations. Comparison of modeled outcomes from risk-based CT lung-screening strategies vs USPSTF recommendations. Empirical risk models for lung cancer incidence and death in the absence of CT screening using data on ever-smokers from the Prostate, Lung, Colorectal, and Ovarian Cancer Screening Trial (PLCO; 1993-2009) control group. Covariates included age; education; sex; race; smoking intensity, duration, and quit-years; body mass index; family history of lung cancer; and self-reported emphysema. Model validation in the chest radiography groups of the PLCO and the National Lung Screening Trial (NLST; 2002-2009), with additional validation of the death model in the National Health Interview Survey (NHIS; 1997-2001), a representative sample of the United States. Models were applied to US ever-smokers aged 50 to 80 years (NHIS 2010-2012) to estimate outcomes of risk-based selection for CT lung screening, assuming screening for all ever-smokers yields the percent changes in lung cancer detection and death observed in the NLST. Annual CT lung screening for 3 years beginning at age 50 years. For model validity: calibration (number of model-predicted cases divided by number of observed cases [estimated/observed]) and discrimination (area under curve [AUC]). For modeled screening outcomes: estimated number of screen-avertable lung cancer deaths and estimated screening effectiveness (number needed to screen [NNS] to prevent 1 lung cancer death). Lung cancer incidence and death risk models were well calibrated in PLCO and NLST. 
The lung cancer death model calibrated and discriminated well for US ever-smokers aged 50 to 80 years (NHIS 1997-2001: estimated/observed = 0.94 [95%CI, 0.84-1.05]; AUC, 0.78 [95%CI, 0.76-0.80]). Under USPSTF recommendations, the models estimated 9.0 million US ever-smokers would qualify for lung cancer screening and 46,488 (95% CI, 43,924-49,053) lung cancer deaths were estimated as screen-avertable over 5 years (estimated NNS, 194 [95% CI, 187-201]). In contrast, risk-based selection screening of the same number of ever-smokers (9.0 million) at highest 5-year lung cancer risk (≥1.9%) was estimated to avert 20% more deaths (55,717 [95% CI, 53,033-58,400]) and was estimated to reduce the estimated NNS by 17% (NNS, 162 [95% CI, 157-166]). Among a cohort of US ever-smokers aged 50 to 80 years, application of a risk-based model for CT screening for lung cancer compared with a model based on USPSTF recommendations was estimated to be associated with a greater number of lung cancer deaths prevented over 5 years, along with a lower NNS to prevent 1 lung cancer death.
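
    The screening-effectiveness arithmetic above is simple to reproduce: NNS is the screened population divided by the screen-avertable deaths. Using the reported point estimates as a consistency check:

```python
# NNS arithmetic from the abstract: number needed to screen equals the
# screened population divided by screen-avertable deaths.

def nns(screened, deaths_averted):
    """Number needed to screen to prevent one lung cancer death."""
    return screened / deaths_averted

uspstf_nns = nns(9_000_000, 46_488)  # USPSTF-style selection -> ~194
risk_nns = nns(9_000_000, 55_717)    # risk-based selection   -> ~162
```

    The same two inputs also give the headline comparisons: 55,717/46,488 is about 20% more deaths averted, and (194 - 162)/194 is about a 17% reduction in NNS.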

  14. Large-scale model-based assessment of deer-vehicle collision risk.

    PubMed

    Hothorn, Torsten; Brandl, Roland; Müller, Jörg

    2012-01-01

    Ungulates, in particular the Central European roe deer Capreolus capreolus and the North American white-tailed deer Odocoileus virginianus, are economically and ecologically important. The two species are risk factors for deer-vehicle collisions and as browsers of palatable trees have implications for forest regeneration. However, no large-scale management systems for ungulates have been implemented, mainly because of the high efforts and costs associated with attempts to estimate population sizes of free-living ungulates living in a complex landscape. Attempts to directly estimate population sizes of deer are problematic owing to poor data quality and lack of spatial representation on larger scales. We used data on >74,000 deer-vehicle collisions observed in 2006 and 2009 in Bavaria, Germany, to model the local risk of deer-vehicle collisions and to investigate the relationship between deer-vehicle collisions and both environmental conditions and browsing intensities. An innovative modelling approach for the number of deer-vehicle collisions, which allows nonlinear environment-deer relationships and assessment of spatial heterogeneity, was the basis for estimating the local risk of collisions for specific road types on the scale of Bavarian municipalities. Based on this risk model, we propose a new "deer-vehicle collision index" for deer management. We show that the risk of deer-vehicle collisions is positively correlated to browsing intensity and to harvest numbers. Overall, our results demonstrate that the number of deer-vehicle collisions can be predicted with high precision on the scale of municipalities. In the densely populated and intensively used landscapes of Central Europe and North America, a model-based risk assessment for deer-vehicle collisions provides a cost-efficient instrument for deer management on the landscape scale. The measures derived from our model provide valuable information for planning road protection and defining hunting quota. 
Open-source software implementing the model can be used to transfer our modelling approach to wildlife-vehicle collisions elsewhere.
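
    The risk surface described (environmental covariates feeding a per-municipality collision expectation, plus a ratio-style "deer-vehicle collision index") can be caricatured with a log-linear, Poisson-style expectation. The coefficients below are invented for illustration; the paper's fitted model is nonlinear and spatially heterogeneous, which this sketch omits:

```python
import math

# Log-linear caricature of a per-municipality collision expectation and
# a ratio-style collision index on top of it. Coefficients are invented.

def expected_collisions(road_km, browsing, b0=-1.0, b1=0.02, b2=1.5):
    """Expected collisions: exp(b0 + b1 * road length + b2 * browsing)."""
    return math.exp(b0 + b1 * road_km + b2 * browsing)

def collision_index(observed, road_km, browsing):
    """> 1 means more collisions than the environment-based expectation."""
    return observed / expected_collisions(road_km, browsing)
```

    An index above 1 flags municipalities with more collisions than their environment alone explains, which is the kind of signal a deer-management scheme could act on.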

  15. Large-Scale Model-Based Assessment of Deer-Vehicle Collision Risk

    PubMed Central

    Hothorn, Torsten; Brandl, Roland; Müller, Jörg

    2012-01-01

    Ungulates, in particular the Central European roe deer Capreolus capreolus and the North American white-tailed deer Odocoileus virginianus, are economically and ecologically important. The two species are risk factors for deer–vehicle collisions and as browsers of palatable trees have implications for forest regeneration. However, no large-scale management systems for ungulates have been implemented, mainly because of the high efforts and costs associated with attempts to estimate population sizes of free-living ungulates living in a complex landscape. Attempts to directly estimate population sizes of deer are problematic owing to poor data quality and lack of spatial representation on larger scales. We used data on 74,000 deer–vehicle collisions observed in 2006 and 2009 in Bavaria, Germany, to model the local risk of deer–vehicle collisions and to investigate the relationship between deer–vehicle collisions and both environmental conditions and browsing intensities. An innovative modelling approach for the number of deer–vehicle collisions, which allows nonlinear environment–deer relationships and assessment of spatial heterogeneity, was the basis for estimating the local risk of collisions for specific road types on the scale of Bavarian municipalities. Based on this risk model, we propose a new “deer–vehicle collision index” for deer management. We show that the risk of deer–vehicle collisions is positively correlated to browsing intensity and to harvest numbers. Overall, our results demonstrate that the number of deer–vehicle collisions can be predicted with high precision on the scale of municipalities. In the densely populated and intensively used landscapes of Central Europe and North America, a model-based risk assessment for deer–vehicle collisions provides a cost-efficient instrument for deer management on the landscape scale. The measures derived from our model provide valuable information for planning road protection and defining hunting quota. 
Open-source software implementing the model can be used to transfer our modelling approach to wildlife–vehicle collisions elsewhere. PMID:22359535

  16. An individual risk prediction model for lung cancer based on a study in a Chinese population.

    PubMed

    Wang, Xu; Ma, Kewei; Cui, Jiuwei; Chen, Xiao; Jin, Lina; Li, Wei

    2015-01-01

    Early detection and diagnosis remains an effective yet challenging approach to improve the clinical outcome of patients with cancer. Low-dose computed tomography screening has been suggested to improve the diagnosis of lung cancer in high-risk individuals. To make screening more efficient, it is necessary to identify individuals who are at high risk. We conducted a case-control study to develop a predictive model for identification of such high-risk individuals. Clinical data from 705 lung cancer patients and 988 population-based controls were used for the development and evaluation of the model. Associations between environmental variables and lung cancer risk were analyzed with a logistic regression model. The predictive accuracy of the model was determined by calculating the area under the receiver operating characteristic curve and the optimal operating point. Our results indicate that lung cancer risk factors included older age, male gender, lower education level, family history of cancer, history of chronic obstructive pulmonary disease, lower body mass index, smoking cigarettes, a diet with less seafood, vegetables, fruits, dairy products, soybean products and nuts, a diet rich in meat, and exposure to pesticides and cooking emissions. The area under the curve was 0.8851 and the optimal operating point was obtained. With a cutoff of 0.35, the false positive rate, true positive rate, and Youden index were 0.21, 0.87, and 0.66, respectively. The risk prediction model for lung cancer developed in this study could discriminate high-risk from low-risk individuals.
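
    The operating-point selection above uses the Youden index, J = sensitivity + specificity - 1 = TPR - FPR (the study reports J = 0.66 at a cutoff of 0.35). A sketch of choosing the cutoff that maximizes J, on toy scores and labels rather than the study's data:

```python
# Choose a ROC operating point by maximizing the Youden index
# J = TPR - FPR over all observed score thresholds. Toy data only.

def youden_optimal_cutoff(scores, labels):
    """Return (best_cutoff, best_J) over all observed score thresholds."""
    pos = sum(labels)
    neg = len(labels) - pos
    best = (None, -1.0)
    for cut in sorted(set(scores)):
        tpr = sum(1 for s, y in zip(scores, labels) if s >= cut and y) / pos
        fpr = sum(1 for s, y in zip(scores, labels) if s >= cut and not y) / neg
        if tpr - fpr > best[1]:
            best = (cut, tpr - fpr)
    return best

scores = [0.9, 0.8, 0.7, 0.35, 0.3, 0.2, 0.1]
labels = [1, 1, 1, 1, 0, 0, 0]
cutoff, j = youden_optimal_cutoff(scores, labels)
```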

  17. [Risk management--a new aspect of quality assessment in intensive care medicine: first results of an analysis of the DIVI's interdisciplinary quality assessment research group].

    PubMed

    Stiletto, R; Röthke, M; Schäfer, E; Lefering, R; Waydhas, Ch

    2006-10-01

    Patient safety has become one of the major aspects of clinical management in recent years. Research has focused chiefly on malpractice. In contrast to quality processes in non-medical fields, the analysis of errors during inpatient treatment has been neglected. Patient risk management can be defined as a structured procedure in a clinical unit with the aim of reducing harmful events. A risk point model was created based on a Delphi process and founded on the DIVI data register. The risk point model was evaluated in working ICU departments participating in the register database. The results of the risk point evaluation will be integrated into the next database update. This might be a step toward improving the reliability of the register for measuring quality in the ICU.

  18. The Comprehensive Evaluation Method of Supervision Risk in Electricity Transaction Based on Unascertained Rational Number

    NASA Astrophysics Data System (ADS)

    Haining, Wang; Lei, Wang; Qian, Zhang; Zongqiang, Zheng; Hongyu, Zhou; Chuncheng, Gao

    2018-03-01

    For the uncertainty problems in the comprehensive evaluation of supervision risk in electricity transactions, this paper uses unascertained rational numbers to evaluate supervision risk, obtaining the possible results of the evaluation with their corresponding credibilities and quantifying the risk indexes. The model yields the risk degree of each index, which makes it easier for electricity transaction supervisors to identify transaction risk and determine the risk level, assisting decision-making and enabling effective supervision of the risk. The results of the case analysis verify the effectiveness of the model.

  19. A framework for quantifying net benefits of alternative prognostic models.

    PubMed

    Rapsomaniki, Eleni; White, Ian R; Wood, Angela M; Thompson, Simon G

    2012-01-30

    New prognostic models are traditionally evaluated using measures of discrimination and risk reclassification, but these do not take full account of the clinical and health economic context. We propose a framework for comparing prognostic models by quantifying the public health impact (net benefit) of the treatment decisions they support, assuming a set of predetermined clinical treatment guidelines. The change in net benefit is more clinically interpretable than changes in traditional measures and can be used in full health economic evaluations of prognostic models used for screening and allocating risk reduction interventions. We extend previous work in this area by quantifying net benefits in life years, thus linking prognostic performance to health economic measures; by taking full account of the occurrence of events over time; and by considering estimation and cross-validation in a multiple-study setting. The method is illustrated in the context of cardiovascular disease risk prediction using an individual participant data meta-analysis. We estimate the number of cardiovascular-disease-free life years gained when statin treatment is allocated based on a risk prediction model with five established risk factors instead of a model with just age, gender and region. We explore methodological issues associated with the multistudy design and show that cost-effectiveness comparisons based on the proposed methodology are robust against a range of modelling assumptions, including adjusting for competing risks. Copyright © 2011 John Wiley & Sons, Ltd.
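
    The core accounting of the framework, treating patients whose predicted risk crosses a guideline threshold and crediting the life-years gained in truly high-risk patients, can be sketched as follows. The threshold, the relative risk reduction, the life-years per averted event, and both toy cohorts are illustrative assumptions:

```python
# Net-benefit sketch: allocate treatment when predicted risk exceeds a
# guideline threshold; credit life-years gained from events averted in
# treated patients. All numbers are illustrative.

def net_benefit(pred_risks, true_risks, threshold=0.2,
                risk_reduction=0.25, life_years_per_event=10.0):
    total = 0.0
    for p, t in zip(pred_risks, true_risks):
        if p >= threshold:  # guideline triggers treatment (e.g., a statin)
            total += t * risk_reduction * life_years_per_event
    return total

model_a = [0.25, 0.10, 0.30, 0.05]  # predictions, five-risk-factor model
model_b = [0.15, 0.10, 0.30, 0.05]  # predictions, age/sex/region model
truth = [0.30, 0.08, 0.28, 0.04]    # underlying event risks
gain = net_benefit(model_a, truth) - net_benefit(model_b, truth)
```

    Here the richer model earns extra net benefit only where it reclassifies a genuinely high-risk patient above the treatment threshold, which is the sense in which net benefit is more clinically interpretable than a discrimination statistic.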

  20. Applicability and feasibility of systematic review for performing evidence-based risk assessment in food and feed safety.

    PubMed

    Aiassa, E; Higgins, J P T; Frampton, G K; Greiner, M; Afonso, A; Amzal, B; Deeks, J; Dorne, J-L; Glanville, J; Lövei, G L; Nienstedt, K; O'Connor, A M; Pullin, A S; Rajić, A; Verloo, D

    2015-01-01

    Food and feed safety risk assessment uses multi-parameter models to evaluate the likelihood of adverse events associated with exposure to hazards in human health, plant health, animal health, animal welfare, and the environment. Systematic review and meta-analysis are established methods for answering questions in health care, and can be implemented to minimize biases in food and feed safety risk assessment. However, no methodological frameworks exist for refining risk assessment multi-parameter models into questions suitable for systematic review, and use of meta-analysis to estimate all parameters required by a risk model may not always be feasible. This paper describes novel approaches for determining question suitability and for prioritizing questions for systematic review in this area. Risk assessment questions that aim to estimate a parameter are likely to be suitable for systematic review. Such questions can be structured by their "key elements" [e.g., for intervention questions, the population(s), intervention(s), comparator(s), and outcome(s)]. Prioritization of questions to be addressed by systematic review relies on the likely impact and related uncertainty of individual parameters in the risk model. This approach to planning and prioritizing systematic review seems to have useful implications for producing evidence-based food and feed safety risk assessment.

  1. Risk Factors for Addiction and Their Association with Model-Based Behavioral Control.

    PubMed

    Reiter, Andrea M F; Deserno, Lorenz; Wilbertz, Tilmann; Heinze, Hans-Jochen; Schlagenhauf, Florian

    2016-01-01

    Addiction shows familial aggregation and previous endophenotype research suggests that healthy relatives of addicted individuals share altered behavioral and cognitive characteristics with individuals suffering from addiction. In this study we asked whether impairments in behavioral control proposed for addiction, namely a shift from goal-directed, model-based toward habitual, model-free control, extends toward an unaffected sample (n = 20) of adult children of alcohol-dependent fathers as compared to a sample without any personal or family history of alcohol addiction (n = 17). Using a sequential decision-making task designed to investigate model-free and model-based control combined with a computational modeling analysis, we did not find any evidence for altered behavioral control in individuals with a positive family history of alcohol addiction. Independent of family history of alcohol dependence, we however observed that the interaction of two different risk factors of addiction, namely impulsivity and cognitive capacities, predicts the balance of model-free and model-based behavioral control. Post-hoc tests showed a positive association of model-based behavior with cognitive capacity in the lower, but not in the higher impulsive group of the original sample. In an independent sample of particularly high- vs. low-impulsive individuals, we confirmed the interaction effect of cognitive capacities and high vs. low impulsivity on model-based control. In the confirmation sample, a positive association of the model-based weighting parameter omega with cognitive capacity was observed in highly impulsive individuals, but not in low impulsive individuals. Due to the moderate sample size of the study, further investigation of the association of risk factors for addiction with model-based behavior in larger sample sizes is warranted.

  2. Developing physical exposure-based back injury risk models applicable to manual handling jobs in distribution centers.

    PubMed

    Lavender, Steven A; Marras, William S; Ferguson, Sue A; Splittstoesser, Riley E; Yang, Gang

    2012-01-01

    Using our ultrasound-based "Moment Monitor," exposures to biomechanical low back disorder risk factors were quantified in 195 volunteers who worked in 50 different distribution center jobs. Low back injury rates, determined from a retrospective examination of each company's Occupational Safety and Health Administration (OSHA) 300 records over the 3-year period immediately prior to data collection, were used to classify each job's back injury risk level. The analyses focused on the factors differentiating the high-risk jobs (those having had 12 or more back injuries/200,000 hr of exposure) from the low-risk jobs (those defined as having no back injuries in the preceding 3 years). Univariate analyses indicated that measures of load moment exposure and force application could distinguish between high (n = 15) and low (n = 15) back injury risk distribution center jobs. A three-factor multiple logistic regression model capable of predicting high-risk jobs with very good sensitivity (87%) and specificity (73%) indicated that risk could be assessed using the mean across the sampled lifts of the peak forward and or lateral bending dynamic load moments that occurred during each lift, the mean of the peak push/pull forces across the sampled lifts, and the mean duration of the non-load exposure periods. A surrogate model, one that does not require the Moment Monitor equipment to assess a job's back injury risk, was identified although with some compromise in model sensitivity relative to the original model.
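
    The three-factor logistic model described can be sketched generically. The coefficients below are invented for illustration (the paper's fitted values are not reproduced), and the 0.5 decision threshold stands in for whatever operating point yields the reported 87% sensitivity and 73% specificity; the negative weight on non-load duration reflects the plausible assumption that longer rest periods are protective:

```python
import math

# Generic three-factor logistic job-risk sketch: inputs are the mean
# peak dynamic load moment, mean peak push/pull force, and mean
# non-load period duration. All coefficients are invented.

def job_risk_probability(load_moment, push_pull, nonload_dur,
                         b0=-8.0, b1=0.08, b2=0.02, b3=-0.05):
    z = b0 + b1 * load_moment + b2 * push_pull + b3 * nonload_dur
    return 1.0 / (1.0 + math.exp(-z))

def classify(jobs, threshold=0.5):
    """Label each (moment, force, non-load duration) job tuple."""
    return ["high" if job_risk_probability(*j) >= threshold else "low"
            for j in jobs]
```

    The surrogate model mentioned at the end of the abstract would replace these exposure measurements with quantities obtainable without the Moment Monitor, trading some sensitivity for practicality.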

  3. Validation in the Absence of Observed Events

    DOE PAGES

    Lathrop, John; Ezell, Barry

    2015-07-22

    Here our paper addresses the problem of validating models in the absence of observed events, in the area of Weapons of Mass Destruction terrorism risk assessment. We address that problem with a broadened definition of “Validation,” based on “backing up” to the reason why modelers and decision makers seek validation, and from that basis re-define validation as testing how well the model can advise decision makers in terrorism risk management decisions. We develop that into two conditions: Validation must be based on cues available in the observable world; and it must focus on what can be done to affect that observable world, i.e. risk management. That in turn leads to two foci: 1.) the risk generating process, 2.) best use of available data. Based on our experience with nine WMD terrorism risk assessment models, we then describe three best use of available data pitfalls: SME confidence bias, lack of SME cross-referencing, and problematic initiation rates. Those two foci and three pitfalls provide a basis from which we define validation in this context in terms of four tests -- Does the model: … capture initiation? … capture the sequence of events by which attack scenarios unfold? … consider unanticipated scenarios? … consider alternative causal chains? Finally, we corroborate our approach against three key validation tests from the DOD literature: Is the model a correct representation of the simuland? To what degree are the model results comparable to the real world? Over what range of inputs are the model results useful?

  4. Risk contracting and operational capabilities in large medical groups during national healthcare reform.

    PubMed

    Mechanic, Robert E; Zinner, Darren

    2016-06-01

    Little is known about the scope of alternative payment models outside of Medicare. This study measures the full complement of public and private payment arrangements in large, multi-specialty group practices as a barometer of payment reform among advanced organizations. We collected information from 33 large, multi-specialty group practices about the proportion of their total revenue in 7 payment models, physician compensation strategies, and the implementation of selected performance management initiatives. We grouped respondents into 3 categories based on the proportion of their revenue in risk arrangements: risk-based (45%-100%), mixed (10%-35%), and fee-for-service (FFS) (0%-10%). We analyzed changes in contracting and operating characteristics between 2011 and 2013. In 2013, 68% of groups' total patient revenue was from FFS payments and 32% was from risk arrangements (unweighted average). Risk-based groups had 26% FFS revenue, whereas mixed-payment and FFS groups had 75% and 98%, respectively. Between 2011 and 2013, 9 groups increased risk contract revenue by about 15 percentage points and 22 reported few changes. Risk-based groups reported more advanced implementation of performance management strategies and were more likely to have physician financial incentives for quality and patient experience. The groups in this study are well positioned to manage risk-based contracts successfully, but less than one-third receive a majority of their revenue from risk arrangements. The experience of these relatively advanced groups suggests that expanding risk-based arrangements across the US health system will likely be slower and more challenging than many people assume.

  5. Risk model for estimating the 1-year risk of deferred lesion intervention following deferred revascularization after fractional flow reserve assessment.

    PubMed

    Depta, Jeremiah P; Patel, Jayendrakumar S; Novak, Eric; Gage, Brian F; Masrani, Shriti K; Raymer, David; Facey, Gabrielle; Patel, Yogesh; Zajarias, Alan; Lasala, John M; Amin, Amit P; Kurz, Howard I; Singh, Jasvindar; Bach, Richard G

    2015-02-21

    Although lesions deferred revascularization following fractional flow reserve (FFR) assessment have a low risk of adverse cardiac events, variability in risk for deferred lesion intervention (DLI) has not been previously evaluated. The aim of this study was to develop a prediction model to estimate 1-year risk of DLI for coronary lesions where revascularization was not performed following FFR assessment. A prediction model for DLI was developed from a cohort of 721 patients with 882 coronary lesions where revascularization was deferred based on FFR between 10/2002 and 7/2010. Deferred lesion intervention was defined as any revascularization of a lesion previously deferred following FFR. The final DLI model was developed using stepwise Cox regression and validated using bootstrapping techniques. An algorithm was constructed to predict the 1-year risk of DLI. During a mean (±SD) follow-up period of 4.0 ± 2.3 years, 18% of lesions deferred after FFR underwent DLI; the 1-year incidence of DLI was 5.3%, while the predicted risk of DLI varied from 1 to 40%. The final Cox model included the FFR value, age, current or former smoking, history of coronary artery disease (CAD) or prior percutaneous coronary intervention, multi-vessel CAD, and serum creatinine. The c statistic for the DLI prediction model was 0.66 (95% confidence interval, CI: 0.61-0.70). Patients deferred revascularization based on FFR have variation in their risk for DLI. A clinical prediction model consisting of five clinical variables and the FFR value can help predict the risk of DLI in the first year following FFR assessment. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2014. For permissions please email: journals.permissions@oup.com.
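The c statistic reported above (0.66) is a concordance index: the probability that a lesion that underwent DLI was assigned a higher predicted risk than one that did not. A minimal sketch for a binary outcome, with made-up risk scores rather than the study's model outputs:

```python
def c_index(risk_scores, events):
    """Fraction of (event, non-event) pairs in which the event case
    received the higher predicted risk; ties count as 0.5."""
    pairs = 0
    concordant = 0.0
    for s_i, e_i in zip(risk_scores, events):
        for s_j, e_j in zip(risk_scores, events):
            if e_i == 1 and e_j == 0:
                pairs += 1
                if s_i > s_j:
                    concordant += 1.0
                elif s_i == s_j:
                    concordant += 0.5
    return concordant / pairs

# Hypothetical 1-year predicted DLI risks and observed DLI (1 = DLI occurred)
scores = [0.40, 0.30, 0.20, 0.10]
events = [1, 0, 1, 0]
print(c_index(scores, events))  # 0.75
```

A c-index of 0.5 is chance-level discrimination and 1.0 is perfect; the study's 0.66 sits between the two.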

  6. Development of a diagnosis- and procedure-based risk model for 30-day outcome after pediatric cardiac surgery.

    PubMed

    Crowe, Sonya; Brown, Kate L; Pagel, Christina; Muthialu, Nagarajan; Cunningham, David; Gibbs, John; Bull, Catherine; Franklin, Rodney; Utley, Martin; Tsang, Victor T

    2013-05-01

    The study objective was to develop a risk model incorporating diagnostic information to adjust for case-mix severity during routine monitoring of outcomes for pediatric cardiac surgery. Data from the Central Cardiac Audit Database for all pediatric cardiac surgery procedures performed in the United Kingdom between 2000 and 2010 were included: 70% for model development and 30% for validation. Units of analysis were 30-day episodes after the first surgical procedure. We used logistic regression for 30-day mortality. Risk factors considered included procedural information based on Central Cardiac Audit Database "specific procedures," diagnostic information defined by 24 "primary" cardiac diagnoses and "univentricular" status, and other patient characteristics. Of the 27,140 30-day episodes in the development set, 25,613 were survivals, 834 were deaths, and 693 were of unknown status (mortality, 3.2%). The risk model includes procedure, cardiac diagnosis, univentricular status, age band (neonate, infant, child), continuous age, continuous weight, presence of non-Down syndrome comorbidity, bypass, and year of operation 2007 or later (because of decreasing mortality). A risk score was calculated for 95% of cases in the validation set (weight missing in 5%). The model discriminated well; the C-index for validation set was 0.77 (0.81 for post-2007 data). Removal of all but procedural information gave a reduced C-index of 0.72. The model performed well across the spectrum of predicted risk, but there was evidence of underestimation of mortality risk in neonates undergoing operation from 2007. The risk model performs well. Diagnostic information added useful discriminatory power. A future application is risk adjustment during routine monitoring of outcomes in the United Kingdom to assist quality assurance. Copyright © 2013 The American Association for Thoracic Surgery. Published by Mosby, Inc. All rights reserved.

  7. Improved performance of epidemiologic and genetic risk models for rheumatoid arthritis serologic phenotypes using family history.

    PubMed

    Sparks, Jeffrey A; Chen, Chia-Yen; Jiang, Xia; Askling, Johan; Hiraki, Linda T; Malspeis, Susan; Klareskog, Lars; Alfredsson, Lars; Costenbader, Karen H; Karlson, Elizabeth W

    2015-08-01

    To develop and validate rheumatoid arthritis (RA) risk models based on family history, epidemiologic factors and known genetic risk factors. We developed and validated models for RA based on known RA risk factors, among women in two cohorts: the Nurses' Health Study (NHS, 381 RA cases and 410 controls) and the Epidemiological Investigation of RA (EIRA, 1244 RA cases and 971 controls). Model discrimination was evaluated using the area under the receiver operating characteristic curve (AUC) in logistic regression models for the study population and for those with positive family history. The joint effect of family history with genetics, smoking and body mass index (BMI) was evaluated using logistic regression models to estimate ORs for RA. The complete model including family history, epidemiologic risk factors and genetics demonstrated AUCs of 0.74 for seropositive RA in NHS and 0.77 for anti-citrullinated protein antibody (ACPA)-positive RA in EIRA. Among women with positive family history, discrimination was excellent for complete models for seropositive RA in NHS (AUC 0.82) and ACPA-positive RA in EIRA (AUC 0.83). Positive family history, high genetic susceptibility, smoking and increased BMI had an OR of 21.73 for ACPA-positive RA. We developed models for seropositive and seronegative RA phenotypes based on family history, epidemiological and genetic factors. Among those with positive family history, models using epidemiologic and genetic factors were highly discriminatory for seropositive and seronegative RA. Assessing epidemiological and genetic factors among those with positive family history may identify individuals suitable for RA prevention strategies. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  8. Risk Decision Making Model for Reservoir Floodwater resources Utilization

    NASA Astrophysics Data System (ADS)

    Huang, X.

    2017-12-01

    Floodwater resources utilization (FRU) can alleviate the shortage of water resources, but it carries risks. In order to utilize floodwater resources safely and efficiently, it is necessary to study the risk of reservoir FRU. In this paper, the risk rate of exceeding the design flood water level and the risk rate of exceeding the safety discharge are estimated. Based on the principle of minimum risk and maximum benefit of FRU, a multi-objective risk decision making model for FRU is constructed. Probability theory and mathematical statistics are used to calculate the risk rate; the C-D production function method and emergy analysis are used to calculate the risk benefit; the risk loss is related to the flood inundation area and the loss per unit area; the multi-objective decision making problem of the model is solved by the constraint method. Taking the Shilianghe reservoir in Jiangsu Province as an example, the optimal equilibrium solution for FRU of the Shilianghe reservoir is found using the risk decision making model, and the validity and applicability of the model are verified.
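A risk rate of the kind estimated above can be computed as an empirical exceedance probability over simulated water levels. A minimal sketch, with a hypothetical design level and a normally distributed stand-in for the simulated annual-maximum levels (none of these values are Shilianghe data):

```python
import random

random.seed(1)
DESIGN_LEVEL = 25.0  # hypothetical design flood water level (m)
# Hypothetical annual-maximum water levels under a raised operating level
levels = [random.gauss(22.0, 2.0) for _ in range(100_000)]
# Risk rate = empirical P(level > design flood water level)
risk_rate = sum(1 for x in levels if x > DESIGN_LEVEL) / len(levels)
print(round(risk_rate, 3))
```

The same construction, applied to simulated discharges against the safety discharge, gives the second risk rate in the model.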

  9. 12 CFR Appendix C to Part 325 - Risk-Based Capital for State Non-Member Banks: Market Risk

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... commodity prices. (2) Specific risk means changes in the market value of specific positions due to factors... its risk measurement and risk management systems at least annually. (c) Market risk factors. The bank's internal model must use risk factors sufficient to measure the market risk inherent in all covered...

  10. Residual Risk Assessments

    EPA Science Inventory

    Each source category previously subjected to a technology-based standard will be examined to determine if health or ecological risks are significant enough to warrant further regulation. These assessments utilize existing models and databases to examine the multi-media and multi-...

  11. Validation of a model for ranking aquaculture facilities for risk-based disease surveillance.

    PubMed

    Diserens, Nicolas; Falzon, Laura Cristina; von Siebenthal, Beat; Schüpbach-Regula, Gertraud; Wahli, Thomas

    2017-09-15

    A semi-quantitative model for risk ranking of aquaculture facilities in Switzerland with regard to the introduction and spread of Viral Haemorrhagic Septicaemia (VHS) and Infectious Haematopoietic Necrosis (IHN) was developed in a previous study (Diserens et al., 2013). The objective of the present study was to validate this model using data collected during field visits to aquaculture sites in four Swiss cantons, compared to data collected through a questionnaire in the previous study. A discrepancy between the values obtained with the two methods was found in 32.8% of the parameters, resulting in a significant difference (p<0.001) in the risk classification of the facilities. Because data gathered exclusively by means of a questionnaire are not of sufficient quality to support risk-based surveillance of aquaculture facilities, a combination of questionnaires and farm inspections is proposed. A web-based reporting system could be advantageous for the factors identified as more likely to vary over time, in particular those concerning fish movements, which showed a marginally significant difference in their risk scores (p≥0.1) within a six-month period. Nevertheless, the model proved stable over the period considered, as no substantial fluctuations in the risk categorisation were observed (kappa agreement of 0.77). Finally, the model proved suitable to deliver a reliable risk ranking of Swiss aquaculture facilities according to their risk of becoming infected with or spreading VHS and IHN, as the five facilities that tested positive for these diseases in the last ten years were ranked as medium or high risk. Moreover, because the seven fish farms that were infected with Infectious Pancreatic Necrosis (IPN) during the same period also belonged to the medium- and high-risk categories, the classification appeared to correlate with the occurrence of this third viral fish disease. Copyright © 2017 Elsevier B.V. All rights reserved.
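The stability figure above (kappa agreement of 0.77) is Cohen's kappa, which corrects raw agreement between two categorisations for agreement expected by chance. A minimal sketch with illustrative risk categories, not the Swiss data:

```python
def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two categorical ratings."""
    n = len(rater_a)
    observed = sum(1 for a, b in zip(rater_a, rater_b) if a == b) / n
    categories = set(rater_a) | set(rater_b)
    # Expected agreement from each rater's marginal category frequencies
    expected = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                   for c in categories)
    return (observed - expected) / (1 - expected)

# Hypothetical risk categories from questionnaire vs. farm inspection
questionnaire = ["low", "low", "medium", "medium", "high", "high"]
inspection    = ["low", "low", "medium", "high",   "high", "high"]
print(round(cohens_kappa(questionnaire, inspection), 2))  # 0.75
```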

  12. Finding Groups Using Model-Based Cluster Analysis: Heterogeneous Emotional Self-Regulatory Processes and Heavy Alcohol Use Risk

    ERIC Educational Resources Information Center

    Mun, Eun Young; von Eye, Alexander; Bates, Marsha E.; Vaschillo, Evgeny G.

    2008-01-01

    Model-based cluster analysis is a new clustering procedure to investigate population heterogeneity utilizing finite mixture multivariate normal densities. It is an inferentially based, statistically principled procedure that allows comparison of nonnested models using the Bayesian information criterion to compare multiple models and identify the…

  13. Mitigating circumstances: A model-based analysis of associations between risk environment and infrequent condom use among Chinese street-based sex workers.

    PubMed

    Chang, Ruth C; Hail-Jares, Katie; Zheng, Huang; He, Na; Bouey, Jennifer Z H

    2018-01-01

    Little is known about how freelance street-based sex workers navigate condom use while soliciting. Traditional behavioural models may fail to account for the complex risk environment within which most street-based sex workers operate. We first examined the association between self-efficacy and infrequent condom use, then investigated how clients and the venues frequented affect this association. Using a purposive chain-referral sampling method, we surveyed 248 street-based sex workers in Shanghai. The survey focused on sex workers' HIV risk factors, sex work patterns, HIV knowledge, and related HIV self-efficacy. Client types and behaviours, and characteristics of the venues frequented by these commercial sex workers, were also collected. We conducted a series of multiple logistic regression models to explore how the association between a sex worker's self-efficacy and infrequent condom use changed as client and venue characteristics were added to the models. Within the basic model, low self-efficacy was marginally associated with infrequent condom use (54.9% vs. 45.1%, AOR = 1.70, 95% CI = 0.95-3.03). As client and venue characteristics were added, the association between self-efficacy and condom use strengthened (AOR = 2.10, 95% CI = 1.12-3.91 and AOR = 2.54, 95% CI = 1.24-5.19, respectively). Sex workers reporting middle-tier income were more likely to report infrequent condom use than their high-income peers (AOR = 3.92, 95% CI = 1.32-11.70), whereas no such difference was found between low-income and high-income sex workers. Visiting multiple venues and having migrant workers as clients were also associated with infrequent condom use. Our findings suggest that a sex worker's self-efficacy matters for HIV risk behaviour only once environmental characteristics are adjusted for. Risk environments for street-based sex workers are complex. Programming addressing behavioural change among female sex workers should adopt holistic, multilevel models that take the risk environment into account.

  14. Multilevel joint competing risk models

    NASA Astrophysics Data System (ADS)

    Karunarathna, G. H. S.; Sooriyarachchi, M. R.

    2017-09-01

    Joint modeling approaches are often encountered in biomedical and epidemiological studies with competing-risk time-to-event and count outcomes in the presence of cluster effects. Hospital length of stay (LOS) is a widely used outcome measure of hospital utilization because it captures multiple terminations such as discharge, transfer, death, and patients who had not completed the event of interest by the end of follow-up (censored) during hospitalization. Competing risk models provide a method of addressing such multiple destinations, since classical time-to-event models yield biased results when there are multiple events. In this study, the concept of joint modeling has been applied to dengue epidemiology in Sri Lanka (2006-2008) to assess the relationship between the different outcomes of LOS and the platelet count of dengue patients, with a district-level cluster effect. Two key approaches were applied to build the joint scenario. In the first approach, each competing risk is modeled separately using a binary logistic model, treating all other events as censored, under a multilevel discrete time-to-event model, while platelet counts are assumed to follow a lognormal regression model. The second approach is based on the endogeneity effect in the multilevel competing risks and count model. Model parameters were estimated using maximum likelihood based on the Laplace approximation. Moreover, the study reveals that the joint modeling approach yields more precise results than fitting two separate univariate models, in terms of AIC (Akaike Information Criterion).

  15. Long-Term Post-CABG Survival: Performance of Clinical Risk Models Versus Actuarial Predictions.

    PubMed

    Carr, Brendan M; Romeiser, Jamie; Ruan, Joyce; Gupta, Sandeep; Seifert, Frank C; Zhu, Wei; Shroyer, A Laurie

    2016-01-01

    Clinical risk models are commonly used to predict short-term coronary artery bypass grafting (CABG) mortality but are less commonly used to predict long-term mortality. The added value of long-term mortality clinical risk models over traditional actuarial models has not been evaluated. To address this, the predictive performance of a long-term clinical risk model was compared with that of an actuarial model to identify the clinical variable(s) most responsible for any differences observed. Long-term mortality for 1028 CABG patients was estimated using the Hannan New York State clinical risk model and an actuarial model (based on age, gender, and race/ethnicity). Vital status was assessed using the Social Security Death Index. Observed/expected (O/E) ratios were calculated, and the models' predictive performances were compared using a nested c-index approach. Linear regression analyses identified the subgroup of risk factors driving the differences observed. Mortality rates were 3%, 9%, and 17% at one-, three-, and five years, respectively (median follow-up: five years). The clinical risk model provided more accurate predictions. Greater divergence between model estimates occurred with increasing long-term mortality risk, with baseline renal dysfunction identified as a particularly important driver of these differences. Long-term mortality clinical risk models provide enhanced predictive power compared to actuarial models. Using the Hannan risk model, a patient's long-term mortality risk can be accurately assessed and subgroups of higher-risk patients can be identified for enhanced follow-up care. More research appears warranted to refine long-term CABG clinical risk models. © 2015 The Authors. Journal of Cardiac Surgery Published by Wiley Periodicals, Inc.

  17. Development and applications of the Veterans Health Administration's Stratification Tool for Opioid Risk Mitigation (STORM) to improve opioid safety and prevent overdose and suicide.

    PubMed

    Oliva, Elizabeth M; Bowe, Thomas; Tavakoli, Sara; Martins, Susana; Lewis, Eleanor T; Paik, Meenah; Wiechers, Ilse; Henderson, Patricia; Harvey, Michael; Avoundjian, Tigran; Medhanie, Amanuel; Trafton, Jodie A

    2017-02-01

    Concerns about opioid-related adverse events, including overdose, prompted the Veterans Health Administration (VHA) to launch an Opioid Safety Initiative and Overdose Education and Naloxone Distribution program. To mitigate risks associated with opioid prescribing, a holistic approach that takes into consideration both risk factors (e.g., dose, substance use disorders) and risk mitigation interventions (e.g., urine drug screening, psychosocial treatment) is needed. This article describes the Stratification Tool for Opioid Risk Mitigation (STORM), a tool developed in VHA that reflects this holistic approach and facilitates patient identification and monitoring. STORM prioritizes patients for review and intervention according to their modeled risk for overdose/suicide-related events and displays risk factors and risk mitigation interventions obtained from VHA electronic medical record (EMR)-data extracts. Patients' estimated risk is based on a predictive risk model developed using fiscal year 2010 (FY2010: 10/1/2009-9/30/2010) EMR-data extracts and mortality data among 1,135,601 VHA patients prescribed opioid analgesics to predict risk for an overdose/suicide-related event in FY2011 (2.1% experienced an event). Cross-validation was used to validate the model, with receiver operating characteristic curves for the training and test data sets performing well (>.80 area under the curve). The predictive risk model distinguished patients based on risk for overdose/suicide-related adverse events, allowing for identification of high-risk patients and enrichment of target populations of patients with greater safety concerns for proactive monitoring and application of risk mitigation interventions. Results suggest that clinical informatics can leverage EMR-extracted data to identify patients at risk for overdose/suicide-related events and provide clinicians with actionable information to mitigate risk. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  18. Modeling and managing risk early in software development

    NASA Technical Reports Server (NTRS)

    Briand, Lionel C.; Thomas, William M.; Hetmanski, Christopher J.

    1993-01-01

    In order to improve the quality of the software development process, we need to be able to build empirical multivariate models based on data collectable early in the software process. These models need to be both useful for prediction and easy to interpret, so that remedial actions may be taken in order to control and optimize the development process. We present an automated modeling technique which can be used as an alternative to regression techniques. We show how it can be used to facilitate the identification and aid the interpretation of the significant trends which characterize 'high risk' components in several Ada systems. Finally, we evaluate the effectiveness of our technique based on a comparison with logistic regression based models.

  19. USE OF BIOLOGICALLY BASED COMPUTATIONAL MODELING IN MODE OF ACTION-BASED RISK ASSESSMENT – AN EXAMPLE OF CHLOROFORM

    EPA Science Inventory

    The objective of current work is to develop a new cancer dose-response assessment for chloroform using a physiologically based pharmacokinetic/pharmacodynamic (PBPK/PD) model. The PBPK/PD model is based on a mode of action in which the cytolethality of chloroform occurs when the ...

  20. Process-based Cost Estimation for Ramjet/Scramjet Engines

    NASA Technical Reports Server (NTRS)

    Singh, Brijendra; Torres, Felix; Nesman, Miles; Reynolds, John

    2003-01-01

    Process-based cost estimation plays a key role in effecting cultural change that integrates distributed science, technology and engineering teams to rapidly create innovative and affordable products. Working together, NASA Glenn Research Center and Boeing Canoga Park have developed a methodology of process-based cost estimation bridging the methodologies of high-level parametric models and detailed bottom-up estimation. The NASA GRC/Boeing CP process-based cost model provides a probabilistic structure of layered cost drivers. High-level inputs characterize mission requirements, system performance, and relevant economic factors. Design alternatives are extracted from a standard, product-specific work breakdown structure to pre-load lower-level cost driver inputs and generate the cost-risk analysis. As product design progresses and matures, the lower-level, more detailed cost drivers can be re-accessed and the projected variation of input values narrowed, thereby generating a progressively more accurate estimate of cost-risk. Incorporated into the process-based cost model are techniques for decision analysis, specifically the analytic hierarchy process (AHP) and functional utility analysis. Design alternatives may then be evaluated not just on cost-risk, but also on user-defined performance and schedule criteria. This implementation of full trade-study support contributes significantly to the realization of the integrated development environment. The process-based cost estimation model generates development and manufacturing cost estimates. The development team plans to expand the manufacturing process base from approximately 80 manufacturing processes to over 250 processes. Operation and support cost modeling is also envisioned. Process-based estimation considers the materials, resources, and processes in establishing cost-risk; rather than depending on weight as an input, it actually estimates weight along with cost and schedule.

  1. Research on Liquidity Risk Evaluation of Chinese A-Shares Market Based on Extension Theory

    NASA Astrophysics Data System (ADS)

    Bai-Qing, Sun; Peng-Xiang, Liu; Lin, Zhang; Yan-Ge, Li

    This research defines the liquidity risk of the stock market in terms of matter-element and affair-element theory, establishes an indicator system for forewarning of liquidity risk, and designs an early-warning model and process using the extension set method, the extension dependent function, and a comprehensive evaluation model. The paper then studies the A-shares market empirically using data for index 1A0001, which shows that the model can describe the liquidity risk of China's A-share market well. Finally, it gives corresponding policy recommendations.

  2. IT Operational Risk Measurement Model Based on Internal Loss Data of Banks

    NASA Astrophysics Data System (ADS)

    Hao, Xiaoling

    Business operation of banks relies increasingly on information technology (IT), and the most important role of IT is to guarantee the operational continuity of business processes. Therefore, IT risk management efforts need to be seen from the perspective of operational continuity. Traditional IT risk studies focused on IT asset-based risk analysis and risk-matrix based qualitative risk evaluation. In practice, IT risk management in the banking industry is still limited to the IT department and is not integrated into business risk management, which causes the two departments to work in isolation. This paper presents an improved methodology for dealing with IT operational risk. It adopts a quantitative measurement method, based on internal business loss data about IT events, and uses Monte Carlo simulation to predict the potential losses. We establish the correlation between IT resources and business processes to ensure that IT and business risk management can work synergistically.
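A Monte Carlo loss simulation of the kind described is commonly built as a frequency-severity model: draw the number of IT incidents per year, then a loss amount for each incident, and summarize the distribution of annual totals. A minimal sketch; the Poisson and lognormal parameters below are illustrative stand-ins for values that would be fitted to a bank's internal loss data:

```python
import math
import random

random.seed(42)

def poisson(lam):
    """Poisson draw via Knuth's multiplication method."""
    k, p, threshold = 0, 1.0, math.exp(-lam)
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def simulate_annual_losses(lam=12, mu=8.0, sigma=1.2, n_years=5000):
    """Yearly totals: Poisson incident counts, lognormal loss per incident."""
    totals = []
    for _ in range(n_years):
        n_events = poisson(lam)
        totals.append(sum(random.lognormvariate(mu, sigma)
                          for _ in range(n_events)))
    return sorted(totals)

totals = simulate_annual_losses()
expected_loss = sum(totals) / len(totals)   # mean annual IT loss
var_99 = totals[int(0.99 * len(totals))]    # 99th-percentile annual loss
print(round(expected_loss), round(var_99))
```

The tail quantile (here the 99th percentile) is what links the simulation back to capital and continuity planning, since it bounds the rare, severe years rather than the average year.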

  3. Subgroup identification of early preterm birth (ePTB): informing a future prospective enrichment clinical trial design.

    PubMed

    Zhang, Chuanwu; Garrard, Lili; Keighley, John; Carlson, Susan; Gajewski, Byron

    2017-01-10

    Despite the widely recognized association between the severity of early preterm birth (ePTB) and its related severe diseases, little is known about the potential risk factors of ePTB and the sub-populations at high risk of ePTB. Moreover, motivated by a future confirmatory clinical trial to determine whether supplementing pregnant women with docosahexaenoic acid (DHA) has a different effect on high-risk subgroups in terms of ePTB prevalence, this study aims to identify potential risk subgroups and risk factors for ePTB, defined as babies born at less than 34 weeks of gestation. The analysis data (N = 3,994,872) were obtained from the CDC and NCHS 2014 Natality public data file. The sample was split into independent training and validation cohorts for model generation and model assessment, respectively. Logistic regression and CART models were used to examine potential ePTB risk predictors and their interactions, including mother's age, nativity, race, Hispanic origin, marital status, education, pre-pregnancy smoking status, pre-pregnancy BMI, pre-pregnancy diabetes status, pre-pregnancy hypertension status, previous preterm birth status, infertility treatment usage, fertility-enhancing drug usage, and delivery payment source. The logistic regression models with either 14 or 10 ePTB risk factors produced the same C-index (0.646) on the training cohort. The C-index of the logistic regression model based on 10 predictors was 0.645 for the validation cohort. Both C-indexes indicated good discrimination and acceptable model fit. The CART model identified preterm birth history and race as the most important risk factors, and revealed that the subgroup with a preterm birth history and a race designation of Black had the highest risk for ePTB. The C-index and misclassification rate were 0.579 and 0.034 for the training cohort, and 0.578 and 0.034 for the validation cohort, respectively. 
This study revealed 14 maternal characteristic variables that reliably identified risk for ePTB through either the logistic regression model or the CART model. Moreover, both models efficiently identify risk subgroups for further enrichment clinical trial design.
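
    The C-index figures above (0.646 and 0.645) measure rank discrimination: the probability that a randomly chosen case receives a higher predicted risk than a randomly chosen non-case, with ties counted half. A minimal sketch of the computation, using toy risks and outcomes rather than the study's data:

```python
from itertools import combinations

def c_index(risks, outcomes):
    """Concordance index: fraction of case/non-case pairs in which the
    case received the higher predicted risk (ties count half)."""
    concordant = tied = pairs = 0
    for (r1, y1), (r2, y2) in combinations(zip(risks, outcomes), 2):
        if y1 == y2:
            continue  # only pairs with different outcomes are informative
        pairs += 1
        hi = r1 if y1 == 1 else r2  # risk assigned to the case
        lo = r2 if y1 == 1 else r1  # risk assigned to the non-case
        if hi > lo:
            concordant += 1
        elif hi == lo:
            tied += 1
    return (concordant + 0.5 * tied) / pairs

# toy check: a model that ranks the single case above all non-cases
print(c_index([0.9, 0.2, 0.1], [1, 0, 0]))  # → 1.0
```

    A C-index of 0.5 corresponds to random ranking, which is why values in the mid-0.6 range are read as "good discrimination" above.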

  4. Human Health Risk Assessment Simulations in a Distributed Environment for Shuttle Launch

    NASA Technical Reports Server (NTRS)

    Thirumalainambi, Rajkumar; Bardina, Jorge

    2004-01-01

    During the launch of a rocket under prevailing weather conditions, commanders at Cape Canaveral Air Force Station evaluate whether wind-blown toxic emissions might reach civilian and military personnel in the nearby area. In our model, we focused mainly on hydrogen chloride (HCl), nitrogen oxides (NOx) and nitric acid (HNO3), which are non-carcinogenic chemicals per the United States Environmental Protection Agency (USEPA) classification. We used the hazard quotient model to estimate the number of people at risk; it is based on the number of people with exposure above a reference exposure level that is unlikely to cause adverse health effects. The risk to the exposed population is calculated by multiplying the individual risk by the number of people in the exposed population. The risk values are compared against the acceptable risk values, and a GO or NO-GO decision for the Shuttle launch is made based on them. The entire model is simulated over the web, and different scenarios can be generated, which allows management to choose an optimum decision.
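
    The hazard-quotient and population-risk arithmetic described above can be sketched as follows; the exposure, reference level, and population figures are hypothetical, not values from the NASA model:

```python
def hazard_quotient(exposure, reference_level):
    """HQ > 1 flags exposure above the reference level considered
    unlikely to cause adverse effects (non-carcinogenic endpoint)."""
    return exposure / reference_level

def population_risk(individual_risk, n_exposed):
    """Expected number of affected people, per the abstract's
    'individual risk x exposed population' formulation."""
    return individual_risk * n_exposed

# hypothetical HCl plume numbers, for illustration only
hq = hazard_quotient(exposure=3.0, reference_level=2.0)   # mg/m3
at_risk = population_risk(individual_risk=0.001, n_exposed=50_000)
print(hq, at_risk)        # 1.5 50.0
launch_go = hq <= 1.0     # simplistic GO / NO-GO gate
```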

  5. Projected Risk of Flooding Disaster over China in 21st Century Based on CMIP5 Models

    NASA Astrophysics Data System (ADS)

    Li, Rouke; Xu, Ying

    2016-04-01

    Based on simulations from CMIP5 models, using climate indices that correlate strongly with historical disaster data, and combining terrain elevation data with socio-economic data, we project the flooding hazard, the vulnerability of the hazard-affected body, and the resulting flooding disaster risk for the near term (2015-2039), medium term (2045-2069) and long term (2075-2099) under RCP8.5. Following the IPCC AR5 WGII, we used the disaster risk evaluation model R = H * E * V, where R is the disaster risk index and H, E and V represent hazard, exposure and vulnerability, respectively. The results show that the extreme flooding disaster risk will gradually increase over the course of the century, and that regions with a high risk level of flooding hazard are likely to be concentrated in southeastern and eastern China. Under the RCP8.5 greenhouse gas emissions scenario, high flooding disaster risk in the future is projected mainly for the eastern part of Sichuan, most of North China, and much of East China. Compared with the baseline period, although the area affected by floods changes little over the 21st century, the regional risk intensity will increase toward the end of the century. Due to the coarse resolution of climate models and the methodology for determining weight coefficients, large uncertainty remains in the projection of flooding disaster risk.
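
    The R = H * E * V index used above is straightforward to compute once the three factors are normalized; a minimal sketch with fabricated grid-cell values:

```python
def disaster_risk_index(hazard, exposure, vulnerability):
    """IPCC AR5-style multiplicative risk index R = H * E * V,
    with each factor normalized to [0, 1]."""
    for v in (hazard, exposure, vulnerability):
        if not 0.0 <= v <= 1.0:
            raise ValueError("factors must be normalized to [0, 1]")
    return hazard * exposure * vulnerability

# hypothetical regions with (H, E, V) tuples, illustration only
cells = {"SE China": (0.9, 0.8, 0.7), "NW plateau": (0.3, 0.1, 0.4)}
ranked = sorted(cells, key=lambda c: disaster_risk_index(*cells[c]),
                reverse=True)
print(ranked)  # ['SE China', 'NW plateau']
```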

  6. Improving measurement of injection drug risk behavior using item response theory.

    PubMed

    Janulis, Patrick

    2014-03-01

    Recent research highlights the multiple steps in preparing and injecting drugs and the resultant viral threats faced by drug users. This research suggests that more sensitive measurement of injection drug HIV risk behavior is required. In addition, growing evidence suggests there are gender differences in injection risk behavior. However, the potential for differential item functioning between genders has not been explored. The aims were to explore item response theory as an improved measurement modeling technique that provides empirically justified scaling of injection risk behavior, and to examine potential gender-based differential item functioning. Data are drawn from three studies in the National Institute on Drug Abuse's Criminal Justice Drug Abuse Treatment Studies. A two-parameter item response theory model was used to scale injection risk behavior, and logistic regression was used to examine differential item functioning. Item fit statistics suggest that item response theory can be used to scale injection risk behavior and that these models can provide more sensitive estimates of risk behavior. Additionally, gender-based differential item functioning is present in the current data. Improved measurement of injection risk behavior using item response theory should be encouraged, as these models provide increased congruence between construct measurement and the complexity of injection-related HIV risk. Suggestions are made to further improve injection risk behavior measurement. Furthermore, results suggest direct comparisons of composite scores between males and females may be misleading, and future work should account for differential item functioning before comparing levels of injection risk behavior.
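
    The two-parameter (2PL) item response model referenced above gives each item a discrimination parameter and a difficulty parameter; a minimal sketch, with hypothetical injection-risk items and parameter values (not estimates from the study):

```python
import math

def two_pl(theta, a, b):
    """2PL IRT: probability of endorsing an injection-risk item,
    given latent risk level theta, discrimination a, difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# hypothetical items: sharing a syringe is 'harder' (rarer) than
# reusing one's own, so it gets a higher difficulty b
items = {"reuse own syringe": (1.2, -0.5), "share syringe": (1.8, 1.0)}
for name, (a, b) in items.items():
    print(name, round(two_pl(0.0, a, b), 3))
```

    Differential item functioning would show up as different (a, b) estimates for the same item across gender groups, which is why composite-score comparisons can mislead.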

  7. Integrating pixel- and polygon-based approaches to wildfire risk assessment: Application to a high-value watershed on the Pike and San Isabel National Forests, Colorado, USA

    Treesearch

    Matthew P. Thompson; Julie W. Gilbertson-Day; Joe H. Scott

    2015-01-01

    We develop a novel risk assessment approach that integrates complementary, yet distinct, spatial modeling approaches currently used in wildfire risk assessment. Motivation for this work stems largely from limitations of existing stochastic wildfire simulation systems, which can generate pixel-based outputs of fire behavior as well as polygon-based outputs of simulated...

  8. An extended reinforcement learning model of basal ganglia to understand the contributions of serotonin and dopamine in risk-based decision making, reward prediction, and punishment learning

    PubMed Central

    Balasubramani, Pragathi P.; Chakravarthy, V. Srinivasa; Ravindran, Balaraman; Moustafa, Ahmed A.

    2014-01-01

    Although empirical and neural studies show that serotonin (5HT) plays many functional roles in the brain, prior computational models mostly focus on its role in behavioral inhibition. In this study, we present a model of risk-based decision making in a modified Reinforcement Learning (RL) framework. The model depicts the roles of dopamine (DA) and serotonin (5HT) in the Basal Ganglia (BG). In this model, the DA signal is represented by the temporal difference error (δ), while the 5HT signal is represented by a parameter (α) that controls risk prediction error. This formulation, which accommodates both 5HT and DA, reconciles some of the diverse roles of 5HT, particularly in connection with the BG system. We apply the model to different experimental paradigms used to study the role of 5HT: (1) risk-sensitive decision making, where 5HT controls risk assessment; (2) temporal reward prediction, where 5HT controls the time scale of reward prediction; and (3) reward/punishment sensitivity, in which the punishment prediction error depends on 5HT levels. Thus, the proposed integrated RL model reconciles several existing theories of 5HT and DA in the BG. PMID:24795614
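
    One common way to formalize an α-weighted risk term of this kind is to learn a running outcome-variance estimate alongside the value estimate and penalize its square root. The sketch below is illustrative under that assumption, with made-up names and constants; it is not the paper's exact equations:

```python
def update(Q, h, reward, alpha=0.3, lr=0.1):
    """One risk-sensitive learning step: delta plays the DA role,
    alpha (the 5HT analogue) trades value against outcome risk."""
    delta = reward - Q                 # reward prediction (TD) error
    Q += lr * delta                    # value update driven by delta
    h += lr * (delta ** 2 - h)         # running risk (variance) estimate
    utility = Q - alpha * h ** 0.5     # risk-averse action utility
    return Q, h, utility

Q, h = 0.0, 0.0
for r in [1.0, -1.0, 1.0, -1.0]:       # risky, zero-mean option
    Q, h, u = update(Q, h, r)
print(round(Q, 3), round(u, 3))        # utility < Q once risk accrues
```

    With alpha = 0, the update reduces to ordinary TD value learning; increasing alpha makes high-variance options progressively less attractive, mirroring the risk-sensitive paradigm in the abstract.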

  9. Consideration of VT5 etch-based OPC modeling

    NASA Astrophysics Data System (ADS)

    Lim, ChinTeong; Temchenko, Vlad; Kaiser, Dieter; Meusel, Ingo; Schmidt, Sebastian; Schneider, Jens; Niehoff, Martin

    2008-03-01

    Including etch-based empirical data during OPC model calibration is a desirable yet controversial choice for OPC modeling, especially for processes with a large litho-to-etch bias. While many OPC software tools now provide this functionality, few such models have been implemented in manufacturing because of various risk considerations, such as compromised prediction of resist and optical effects, etch model accuracy, and even runtime concerns. The conventional method of applying rule-based etch correction alongside a resist model is popular, but it requires lengthy code generation to yield a lean OPC input. This work discusses the risk factors and their considerations, and introduces techniques used within Mentor Calibre VT5 etch-based modeling at sub-90nm technology nodes. Various strategies are discussed with the aim of better handling a large etch bias offset without adding complexity to the final OPC package. Finally, results are presented to assess the advantages and limitations of the method chosen.

  10. Claims-based risk model for first severe COPD exacerbation.

    PubMed

    Stanford, Richard H; Nag, Arpita; Mapel, Douglas W; Lee, Todd A; Rosiello, Richard; Schatz, Michael; Vekeman, Francis; Gauthier-Loiselle, Marjolaine; Merrigan, J F Philip; Duh, Mei Sheng

    2018-02-01

    To develop and validate a predictive model for first severe chronic obstructive pulmonary disease (COPD) exacerbation using health insurance claims data, and to validate the risk measure of the controller medication to total COPD treatment (controller plus rescue) ratio (CTR). A predictive model was developed and validated in 2 managed care databases: the Truven Health MarketScan database and the Reliant Medical Group database. This secondary analysis assessed risk factors, including CTR, during the baseline period (Year 1) to predict the risk of severe exacerbation in the at-risk period (Year 2). Patients with COPD who were 40 years or older and who had at least 1 COPD medication dispensed during the year following COPD diagnosis were included. Subjects with severe exacerbations in the baseline year were excluded. Risk factors in the baseline period were included as potential predictors in multivariate analysis. Performance was evaluated using C-statistics. The analysis included 223,824 patients. The greatest risk factors for first severe exacerbation were advanced age, chronic oxygen therapy usage, COPD diagnosis type, dispensing of 4 or more canisters of rescue medication, and having 2 or more moderate exacerbations. A CTR of 0.3 or greater was associated with a 14% lower risk of severe exacerbation. The model performed well, with C-statistics ranging from 0.711 to 0.714. This claims-based risk model can predict the likelihood of first severe COPD exacerbation. The CTR could also potentially be used to target populations at greatest risk for severe exacerbations. This could be relevant for providers and payers in approaches to prevent severe exacerbations and reduce costs.
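
    The CTR itself is a simple ratio of dispensings; a minimal sketch (the 0.3 threshold comes from the abstract, while the fill counts are hypothetical):

```python
def controller_to_total_ratio(controller_fills, rescue_fills):
    """CTR = controller / (controller + rescue) dispensings over the
    baseline year; the abstract links CTR >= 0.3 to a 14% lower risk
    of first severe exacerbation."""
    total = controller_fills + rescue_fills
    if total == 0:
        return None  # no COPD medication dispensed; CTR undefined
    return controller_fills / total

ctr = controller_to_total_ratio(controller_fills=3, rescue_fills=5)
print(round(ctr, 3), ctr >= 0.3)  # 0.375 True
```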

  11. Risk-Informed Safety Assurance and Probabilistic Assessment of Mission-Critical Software-Intensive Systems

    NASA Technical Reports Server (NTRS)

    Guarro, Sergio B.

    2010-01-01

    This report validates and documents the detailed features and practical application of the framework for software-intensive digital systems risk assessment and risk-informed safety assurance presented in the NASA PRA Procedures Guide for Managers and Practitioners. This framework, called herein the "Context-based Software Risk Model" (CSRM), enables the assessment of the contribution of software and software-intensive digital systems to overall system risk, in a manner that is entirely compatible with and integrated into the format of a "standard" Probabilistic Risk Assessment (PRA), as currently documented and applied for NASA missions and applications. The CSRM also provides a risk-informed path and criteria for conducting organized and systematic digital system and software testing so that, within this risk-informed paradigm, the achievement of a quantitatively defined level of safety and mission success assurance may be targeted and demonstrated. The framework is based on the concept of context-dependent software risk scenarios and on the modeling of such scenarios via the use of traditional PRA techniques - i.e., event trees and fault trees - in combination with more advanced modeling devices such as the Dynamic Flowgraph Methodology (DFM) or other dynamic logic-modeling representations. The scenarios can be synthesized and quantified in a conditional logic and probabilistic formulation. The application of the CSRM method documented in this report refers to the MiniAERCam system designed and developed by the NASA Johnson Space Center.

  12. A stable systemic risk ranking in China's banking sector: Based on principal component analysis

    NASA Astrophysics Data System (ADS)

    Fang, Libing; Xiao, Binqing; Yu, Honghai; You, Qixing

    2018-02-01

    In this paper, we compare five popular systemic risk rankings and apply a principal component analysis (PCA) model to provide a stable systemic risk ranking for the Chinese banking sector. Our empirical results indicate that the five methods suggest vastly different systemic risk rankings for the same bank, while the combined systemic risk measure based on PCA provides a reliable ranking. Furthermore, according to the factor loadings of the first component, the PCA-combined ranking is mainly based on fundamentals rather than market price data. We find that price-based rankings are not as practical as fundamentals-based ones. The PCA-combined ranking directly shows the systemic risk contribution of each bank for banking supervision purposes and helps banks prepare for and cope with financial crises in advance.
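
    A PCA-combined ranking of this kind can be sketched by projecting each bank's centered risk measures onto the first principal component, here estimated with power iteration; the bank names and measure values below are fabricated for illustration:

```python
def first_pc(X, iters=200):
    """First principal component of centered data X (n rows x p cols),
    via power iteration on the sample covariance matrix."""
    n, p = len(X), len(X[0])
    cov = [[sum(X[i][a] * X[i][b] for i in range(n)) / n
            for b in range(p)] for a in range(p)]
    w = [1.0] * p
    for _ in range(iters):
        w = [sum(cov[a][b] * w[b] for b in range(p)) for a in range(p)]
        norm = sum(x * x for x in w) ** 0.5
        w = [x / norm for x in w]
    return w

banks = ["A", "B", "C"]
# rows: banks; cols: three hypothetical risk measures (already centered)
X = [[2.0, 1.5, 1.0], [-1.0, -0.5, 0.0], [-1.0, -1.0, -1.0]]
w = first_pc(X)
scores = [sum(x * wj for x, wj in zip(row, w)) for row in X]
ranking = [b for _, b in sorted(zip(scores, banks), reverse=True)]
print(ranking)  # bank with uniformly high measures ranks first
```

    Inspecting the loadings w is what lets the authors say the combined ranking is driven by fundamentals: the measures with the largest loadings dominate the score.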

  13. Development and validation of a generic finite element vehicle buck model for the analysis of driver rib fractures in real life nearside oblique frontal crashes.

    PubMed

    Iraeus, Johan; Lindquist, Mats

    2016-10-01

    Frontal crashes still account for approximately half of all fatalities in passenger cars, despite several decades of crash-related research. For serious injuries in this crash mode, several authors have listed the thorax as the most important. Computer simulation provides an effective tool to study crashes and evaluate injury mechanisms, and using stochastic input data, whole populations of crashes can be studied. The aim of this study was to develop a generic buck model and to validate this model on a population of real-life frontal crashes in terms of the risk of rib fracture. The study was conducted in four phases. In the first phase, real-life validation data were derived by analyzing NASS/CDS data to find the relationship between injury risk and crash parameters. In addition, available statistical distributions for the parameters were collected. In the second phase, a generic parameterized finite element (FE) model of a vehicle interior was developed based on laser scans from the A2MAC1 database. In the third phase, model parameters that could not be found in the literature were estimated using reverse engineering based on NCAP tests. Finally, in the fourth phase, the stochastic FE model was used to simulate a population of real-life crashes, and the result was compared to the validation data from phase one. The stochastic FE simulation model overestimates the risk of rib fracture, more for young occupants and less for senior occupants. However, if the effect of underestimation of rib fractures in the NASS/CDS material is accounted for using statistical simulations, the risk of rib fracture based on the stochastic FE model matches the risk based on the NASS/CDS data for senior occupants. The current version of the stochastic model can be used to evaluate new safety measures using a population of frontal crashes for senior occupants. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Parameters for Pesticide QSAR and PBPK/PD Models to inform Human Risk Assessments

    EPA Science Inventory

    Physiologically-based pharmacokinetic and pharmacodynamic (PBPK/PD) modeling has emerged as an important computational approach supporting quantitative risk assessment of agrochemicals. However, before complete regulatory acceptance of this tool, an assessment of assets and liabi...

  15. PHOTOTOXIC POLYCYCLIC AROMATIC HYDROCARBONS IN SEDIMENTS: A MODEL-BASED APPROACH FOR ASSESSING RISK

    EPA Science Inventory

    Over the past five years we have developed a number of models which will be combined in an integrated framework with chemical-monitoring information to assess the potential for widespread risk of phototoxic PAHs in sediments.

  16. A TEST OF WATERSHED CLASSIFICATION SYSTEMS FOR ECOLOGICAL RISK ASSESSMENT

    EPA Science Inventory

    To facilitate extrapolation among watersheds, ecological risk assessments should be based on a model of underlying factors influencing watershed response, particularly vulnerability. We propose a conceptual model of landscape vulnerability to serve as a basis for watershed classi...

  17. Using integrated environmental modeling to automate a process-based Quantitative Microbial Risk Assessment

    EPA Science Inventory

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, an...

  18. Using Integrated Environmental Modeling to Automate a Process-Based Quantitative Microbial Risk Assessment (presentation)

    EPA Science Inventory

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, and...

  19. A CONSISTENT APPROACH FOR THE APPLICATION OF PHARMACOKINETIC MODELING IN CANCER RISK ASSESSMENT

    EPA Science Inventory

    Physiologically based pharmacokinetic (PBPK) modeling provides important capabilities for improving the reliability of the extrapolations across dose, species, and exposure route that are generally required in chemical risk assessment regardless of the toxic endpoint being consid...

  20. Parameters for Pyrethroid Insecticide QSAR and PBPK/PD Models for Human Risk Assessment

    EPA Science Inventory

    This pyrethroid insecticide parameter review is an extension of our interest in developing quantitative structure–activity relationship–physiologically based pharmacokinetic/pharmacodynamic (QSAR-PBPK/PD) models for assessing health risks, which interest started with the organoph...

  1. Using probabilistic terrorism risk modeling for regulatory benefit-cost analysis: application to the Western hemisphere travel initiative in the land environment.

    PubMed

    Willis, Henry H; LaTourrette, Tom

    2008-04-01

    This article presents a framework for using probabilistic terrorism risk modeling in regulatory analysis. We demonstrate the framework with an example application involving a regulation under consideration, the Western Hemisphere Travel Initiative for the Land Environment (WHTI-L). First, we estimate annualized loss from terrorist attacks with the Risk Management Solutions (RMS) Probabilistic Terrorism Model. We then estimate the critical risk reduction, which is the risk-reducing effectiveness of WHTI-L needed for its benefit, in terms of reduced terrorism loss in the United States, to exceed its cost. Our analysis indicates that the critical risk reduction depends strongly not only on uncertainties in the terrorism risk level, but also on uncertainty in the cost of regulation and how casualties are monetized. For a terrorism risk level based on the RMS standard risk estimate, the baseline regulatory cost estimate for WHTI-L, and a range of casualty cost estimates based on the willingness-to-pay approach, our estimate for the expected annualized loss from terrorism ranges from $2.7 billion to $5.2 billion. For this range in annualized loss, the critical risk reduction for WHTI-L ranges from 7% to 13%. Basing results on a lower risk level that halves the annualized terrorism loss would double the critical risk reduction (14-26%), and basing them on a higher risk level that doubles the annualized terrorism loss would cut the critical risk reduction in half (3.5-6.6%). Ideally, decisions about terrorism security regulations and policies would be informed by true benefit-cost analyses in which the estimated benefits are compared to costs. Such analyses for terrorism security efforts face substantial impediments stemming from the great uncertainty in the terrorist threat and the very low recurrence interval for large attacks. 
Several approaches can be used to estimate how a terrorism security program or regulation reduces the distribution of risks it is intended to manage, but continued research to develop additional tools and data is necessary to support the application of these approaches. These include refinement of models and simulations, engagement of subject matter experts, implementation of program evaluation, and estimation of the costs of casualties from terrorism events.
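
    At its simplest, the critical risk reduction is the ratio of annualized regulatory cost to annualized terrorism loss. The sketch below uses a hypothetical cost of about $0.36B/yr, chosen only because it reproduces the abstract's 7%-13% range over the $2.7B-$5.2B loss span; it is not a figure from the article:

```python
def critical_risk_reduction(annual_cost, annualized_loss):
    """Fraction of annualized terrorism loss a regulation must avert
    for its benefit to equal its cost."""
    return annual_cost / annualized_loss

# hypothetical ~$0.36B/yr cost against the abstract's loss range
for loss in (2.7e9, 5.2e9):
    print(round(critical_risk_reduction(0.36e9, loss), 3))  # 0.133 then 0.069
```

    The inverse relationship is also visible here: halving the assumed loss doubles the critical risk reduction, exactly as the abstract reports.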

  2. CANCER RISK ASSESSMENTS (RA.D.1D)

    EPA Science Inventory

    Risk assessments are based on questions that the assessor asks about scientific information that is relevant to human and/or environmental risk. The risk characterization also provides an evaluation of the assumptions, uncertainties, and selection of studies and models used in th...

  3. [Study on building index system of risk assessment of post-marketing Chinese patent medicine based on AHP-fuzzy neural network].

    PubMed

    Li, Yuanyuan; Xie, Yanming; Fu, Yingkun

    2011-10-01

    Extensive research has examined the safety, efficacy, and economics of post-marketing Chinese patent medicine (CPM), but a comprehensive interpretation is still lacking. Establishing a risk evaluation index system and risk assessment model for CPM is key to solving drug safety problems and protecting people's health. The clinical risk factors of CPM share similarities with those of Western medicine, so foreign experience can be drawn upon, but they also have their own multi-factor, multivariate, multi-level complex features. Given the uncertainty and complexity of drug safety risk assessment, using the analytic hierarchy process (AHP) to assign the index weights and an AHP-based fuzzy neural network to build the post-marketing CPM risk evaluation index system and risk assessment model, while continually incorporating the distinctive characteristics of traditional Chinese medicine, is a feasible and beneficial line of exploration.
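
    AHP derives index weights from a reciprocal pairwise-comparison matrix via its principal eigenvector. A minimal power-iteration sketch follows; the 3x3 judgment matrix (safety vs. efficacy vs. economy) is illustrative only, not the paper's actual criteria or judgments:

```python
def ahp_weights(M, iters=100):
    """Principal-eigenvector weights of a reciprocal AHP judgment
    matrix M, estimated by power iteration and normalized to sum 1."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w

# reciprocal judgments: safety 3x as important as efficacy,
# 5x as important as economy; efficacy 2x as important as economy
M = [[1.0,     3.0, 5.0],
     [1 / 3.0, 1.0, 2.0],
     [1 / 5.0, 0.5, 1.0]]
w = ahp_weights(M)
print([round(x, 3) for x in w])  # weights sum to 1; safety largest
```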

  4. A probabilistic topic model for clinical risk stratification from electronic health records.

    PubMed

    Huang, Zhengxing; Dong, Wei; Duan, Huilong

    2015-12-01

    Risk stratification aims to provide physicians with the accurate assessment of a patient's clinical risk such that an individualized prevention or management strategy can be developed and delivered. Existing risk stratification techniques mainly focus on predicting the overall risk of an individual patient in a supervised manner, and, at the cohort level, often offer little insight beyond a flat score-based segmentation from the labeled clinical dataset. To this end, in this paper, we propose a new approach for risk stratification by exploring a large volume of electronic health records (EHRs) in an unsupervised fashion. Along this line, this paper proposes a novel probabilistic topic modeling framework called probabilistic risk stratification model (PRSM) based on Latent Dirichlet Allocation (LDA). The proposed PRSM recognizes a patient clinical state as a probabilistic combination of latent sub-profiles, and generates sub-profile-specific risk tiers of patients from their EHRs in a fully unsupervised fashion. The achieved stratification results can be easily recognized as high-, medium- and low-risk, respectively. In addition, we present an extension of PRSM, called weakly supervised PRSM (WS-PRSM) by incorporating minimum prior information into the model, in order to improve the risk stratification accuracy, and to make our models highly portable to risk stratification tasks of various diseases. We verify the effectiveness of the proposed approach on a clinical dataset containing 3463 coronary heart disease (CHD) patient instances. Both PRSM and WS-PRSM were compared with two established supervised risk stratification algorithms, i.e., logistic regression and support vector machine, and showed the effectiveness of our models in risk stratification of CHD in terms of the Area Under the receiver operating characteristic Curve (AUC) analysis. 
Furthermore, compared with PRSM, WS-PRSM achieved an over 2% performance gain on the experimental dataset, demonstrating that incorporating risk-scoring knowledge as prior information can improve risk stratification performance. Experimental results reveal that our models achieve competitive performance in risk stratification in comparison with existing supervised approaches. In addition, the unsupervised nature of our models makes them highly portable to the risk stratification tasks of various diseases. Moreover, the patient sub-profiles and sub-profile-specific risk tiers generated by our models are coherent and informative, and offer significant potential for further tasks such as patient cohort analysis. We hypothesize that the proposed framework can readily meet the demand for risk stratification from a large volume of EHRs in an open-ended fashion. Copyright © 2015 Elsevier Inc. All rights reserved.
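
    The core stratification idea (a patient as a probabilistic mixture over latent sub-profiles, each carrying a risk tier) can be illustrated without any LDA inference machinery; the sub-profiles, tiers, and mixture weights below are fabricated:

```python
# risk tier attached to each latent sub-profile (illustrative only)
SUBPROFILE_RISK = {"sp0": "low", "sp1": "medium", "sp2": "high"}

def stratify(theta):
    """Assign a tier from the patient's dominant sub-profile.
    theta: mixture weights over sub-profiles (sums to 1)."""
    dominant = max(theta, key=theta.get)
    return SUBPROFILE_RISK[dominant]

patients = {
    "p1": {"sp0": 0.7, "sp1": 0.2, "sp2": 0.1},
    "p2": {"sp0": 0.1, "sp1": 0.2, "sp2": 0.7},
}
tiers = {p: stratify(t) for p, t in patients.items()}
print(tiers)  # {'p1': 'low', 'p2': 'high'}
```

    In the actual PRSM, the mixture weights come from LDA-style posterior inference over EHR features rather than being given directly.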

  5. Comparing motor-vehicle crash risk of EU and US vehicles.

    PubMed

    Flannagan, Carol A C; Bálint, András; Klinich, Kathleen D; Sander, Ulrich; Manary, Miriam A; Cuny, Sophie; McCarthy, Michael; Phan, Vuthy; Wallbank, Caroline; Green, Paul E; Sui, Bo; Forsman, Åsa; Fagerlind, Helen

    2018-08-01

    This study examined the hypotheses that passenger vehicles meeting European Union (EU) safety standards have crashworthiness in the US driving environment similar to that of United States (US)-regulated vehicles, and vice versa. The first step involved identifying appropriate databases of US and EU crashes that include in-depth crash information, such as estimation of crash severity using Delta-V and injury outcome based on medical records. The next step was to harmonize variable definitions and sampling criteria so that the EU data could be combined and compared to the US data using the same or equivalent parameters. Logistic regression models of the risk of a maximum injury on the Abbreviated Injury Scale of 3 or greater, or fatality (MAIS3+F), in EU-regulated and US-regulated vehicles were constructed. The injury risk predictions of the EU model and the US model were each applied to both the US and EU standard crash populations. Frontal, near-side, and far-side crashes were analyzed together (termed "front/side crashes"), and a separate model was developed for rollover crashes. For the front/side model applied to the US standard population, the mean estimated risk for the US-vehicle model is 0.035 (sd = 0.012), and the mean estimated risk for the EU-vehicle model is 0.023 (sd = 0.016). When applied to the EU front/side population, the US model predicted a 0.065 risk (sd = 0.027), and the EU model predicted a 0.052 risk (sd = 0.025). For the rollover model applied to the US standard population, the US model predicted a risk of 0.071 (sd = 0.024), and the EU model predicted a 0.128 risk (sd = 0.057). When applied to the EU rollover standard population, the US model predicted a 0.067 risk (sd = 0.024), and the EU model predicted a 0.103 risk (sd = 0.040). The results based on these methods indicate that EU vehicles most likely have a lower risk of MAIS3+F injury in front/side impacts, while US vehicles most likely have a lower risk of MAIS3+F injury in rollovers. 
These results should be interpreted with an understanding of the uncertainty of the estimates, the study limitations, and our recommendations for further study detailed in the report. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  6. Lymphatic filariasis transmission risk map of India, based on a geo-environmental risk model.

    PubMed

    Sabesan, Shanmugavelu; Raju, Konuganti Hari Kishan; Subramanian, Swaminathan; Srivastava, Pradeep Kumar; Jambulingam, Purushothaman

    2013-09-01

    The strategy adopted by a global program to interrupt transmission of lymphatic filariasis (LF) is mass drug administration (MDA) using chemotherapy. India also followed this strategy by introducing MDA in the historically known endemic areas. All other areas, which remained unsurveyed, were presumed to be nonendemic and left without any intervention. Therefore, identification of LF transmission risk areas in the entire country has become essential so that they can be targeted for intervention. A geo-environmental risk model (GERM) developed earlier was used to create a filariasis transmission risk map for India. In this model, a Standardized Filariasis Transmission Risk Index (SFTRI, based on geo-environmental risk variables) was used as a predictor of transmission risk. The relationship between SFTRI and endemicity (historically known) of an area was quantified by logistic regression analysis. The quantified relationship was validated by assessing the filarial antigenemia status of children living in the unsurveyed areas through a ground truth study. A significant positive relationship was observed between SFTRI and the endemicity of an area. Overall, the model prediction of filarial endemic status of districts was found to be correct in 92.8% of the total observations. Thus, among the 190 districts hitherto unsurveyed, as many as 113 districts were predicted to be at risk, and the remaining at no risk. The GERM developed on geographic information system (GIS) platform is useful for LF spatial delimitation on a macrogeographic/regional scale. Furthermore, the risk map developed will be useful for the national LF elimination program by identifying areas at risk for intervention and for undertaking surveillance in no-risk areas.

  7. Validation of the 2014 European Society of Cardiology guidelines risk prediction model for the primary prevention of sudden cardiac death in hypertrophic cardiomyopathy.

    PubMed

    Vriesendorp, Pieter A; Schinkel, Arend F L; Liebregts, Max; Theuns, Dominic A M J; van Cleemput, Johan; Ten Cate, Folkert J; Willems, Rik; Michels, Michelle

    2015-08-01

    The recently released 2014 European Society of Cardiology guidelines of hypertrophic cardiomyopathy (HCM) use a new clinical risk prediction model for sudden cardiac death (SCD), based on the HCM Risk-SCD study. Our study is the first external and independent validation of this new risk prediction model. The study population consisted of a consecutive cohort of 706 patients with HCM without prior SCD event, from 2 tertiary referral centers. The primary end point was a composite of SCD and appropriate implantable cardioverter-defibrillator therapy, identical to the HCM Risk-SCD end point. The 5-year SCD risk was calculated using the HCM Risk-SCD formula. Receiver operating characteristic curves and C-statistics were calculated for the 2014 European Society of Cardiology guidelines, and risk stratification methods of the 2003 American College of Cardiology/European Society of Cardiology guidelines and 2011 American College of Cardiology Foundation/American Heart Association guidelines. During follow-up of 7.7±5.3 years, SCD occurred in 42 (5.9%) of 706 patients (ages 49±16 years; 34% women). The C-statistic of the new model was 0.69 (95% CI, 0.57-0.82; P=0.008), which performed significantly better than the conventional risk factor models based on the 2003 guidelines (C-statistic of 0.55: 95% CI, 0.47-0.63; P=0.3), and 2011 guidelines (C-statistic of 0.60: 95% CI, 0.50-0.70; P=0.07). The HCM Risk-SCD model improves the risk stratification of patients with HCM for primary prevention of SCD, and calculating an individual risk estimate contributes to the clinical decision-making process. Improved risk stratification is important for the decision making before implantable cardioverter-defibrillator implantation for the primary prevention of SCD. © 2015 American Heart Association, Inc.

  8. Teleassessment: A Model for Team Developmental Assessment of High-Risk Infants Using a Televideo Network.

    ERIC Educational Resources Information Center

    Smith, Douglas L.

    1997-01-01

    Describes a model for team developmental assessment of high-risk infants using a fiber-optic "distance learning" televideo network in south-central New York. An arena-style transdisciplinary play-based assessment model was adapted for use across the televideo connection, and close simulation of conventional assessment procedures was…

  9. Estimating effectiveness in HIV prevention trials with a Bayesian hierarchical compound Poisson frailty model

    PubMed Central

    Coley, Rebecca Yates; Brown, Elizabeth R.

    2016-01-01

    Inconsistent results in recent HIV prevention trials of pre-exposure prophylactic interventions may be due to heterogeneity in risk among study participants. Intervention effectiveness is most commonly estimated with the Cox model, which compares event times between populations. When heterogeneity is present, this population-level measure underestimates intervention effectiveness for individuals who are at risk. We propose a likelihood-based Bayesian hierarchical model that estimates the individual-level effectiveness of candidate interventions by accounting for heterogeneity in risk with a compound Poisson-distributed frailty term. This model reflects the mechanisms of HIV risk and allows that some participants are not exposed to HIV and, therefore, have no risk of seroconversion during the study. We assess model performance via simulation and apply the model to data from an HIV prevention trial. PMID:26869051
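    As a hedged illustration of the compound Poisson frailty mechanism this abstract describes, the sketch below simulates a population in which a point mass of zero frailty leaves some participants with no risk of seroconversion. All parameter values are invented for illustration and are not taken from any trial data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters, chosen only to illustrate the mechanism.
n = 20_000                 # trial participants
mu = 1.0                   # Poisson rate for the number of gamma jumps
shape, scale = 2.0, 0.5    # gamma jump-size parameters
base_hazard = 0.05         # baseline seroconversion hazard per unit time

# Compound Poisson frailty: Z_i is a sum of N_i gamma jumps, N_i ~ Poisson(mu).
# The point mass P(Z_i = 0) = exp(-mu) represents participants who are never
# exposed and therefore carry no seroconversion risk during the study.
n_jumps = rng.poisson(mu, size=n)
frailty = np.array([rng.gamma(shape, scale, size=k).sum() for k in n_jumps])

# Event times are exponential with individual rate Z_i * base_hazard;
# zero-frailty participants never experience the event.
event_time = np.full(n, np.inf)
at_risk = frailty > 0
event_time[at_risk] = rng.exponential(1.0 / (frailty[at_risk] * base_hazard))

print(f"never at risk: {np.mean(frailty == 0):.3f} (theory {np.exp(-mu):.3f})")
```

    A population-level Cox fit to such data would dilute the effect of any intervention across the never-at-risk subgroup, which is the underestimation the authors address.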

  10. Modelling the impact of new patient visits on risk adjusted access at 2 clinics.

    PubMed

    Kolber, Michael A; Rueda, Germán; Sory, John B

    2018-06-01

    To evaluate the effect that new outpatient clinic visits have on the availability of follow-up visits for established patients when patient visit frequency is risk adjusted. Diagnosis codes for patients from 2 Internal Medicine clinics were extracted through billing data. The HHS-HCC risk adjusted scores for each clinic were determined based upon the average of all clinic practitioners' profiles. These scores were then used to project encounter frequencies for established patients, and for new patients entering the clinic, based on risk and time of entry into the clinics. A distinct mean risk frequency distribution for physicians in each clinic could be defined, providing model parameters. Within the model, follow-up visit utilization at the highest risk adjusted visit frequencies would require more follow-up slots than currently available when new patient no-show rates and annual patient loss are included. Patients seen at an intermediate or lower risk adjusted visit frequency could be accommodated when new patient no-show rates and annual patient clinic loss are considered. Value-based care is driven by control of cost while maintaining quality of care. In order to control cost, there has been a drive to increase visit frequency in primary care for those patients at increased risk. Adding new patients to primary care clinics limits the availability of follow-up slots that accrue over time for those at highest risk, thereby limiting disease and, potentially, cost control. If the frequency of established-care visits can be reduced by improved disease control, closing the practice to new patients, hiring health care extenders, or providing non-face-to-face care models, then quality and cost of care may be improved. © 2018 John Wiley & Sons, Ltd.

  11. Comparison of traditional diabetes risk scores and HbA1c to predict type 2 diabetes mellitus in a population based cohort study.

    PubMed

    Krabbe, Christine Emma Maria; Schipf, Sabine; Ittermann, Till; Dörr, Marcus; Nauck, Matthias; Chenot, Jean-François; Markus, Marcello Ricardo Paulista; Völzke, Henry

    2017-11-01

    Compare performances of diabetes risk scores and glycated hemoglobin (HbA1c) to estimate the risk of incident type 2 diabetes mellitus (T2DM) in Northeast Germany. We studied 2916 subjects (20 to 81 years) from the Study of Health in Pomerania (SHIP) in a 5-year follow-up period. Diabetes risk scores included the Cooperative Health Research in the Region of Augsburg (KORA) base model, the Danish diabetes risk score and the Data from the Epidemiological Study on the Insulin Resistance syndrome (D.E.S.I.R.) clinical risk score. We assessed the performance of each of the diabetes risk scores and the HbA1c for 5-year risk of T2DM by the area under the receiver-operating characteristic curve (AUC) and calibration plots. In SHIP, the incidence of T2DM was 5.4% (n=157) in the 5-year follow-up period. Diabetes risk scores and HbA1c achieved AUCs ranging from 0.76 for the D.E.S.I.R. clinical risk score to 0.82 for the KORA base model. For diabetes risk scores, the discriminative ability was lower for the age group 55 to 74 years. For HbA1c, the discriminative ability also decreased for the group 55 to 74 years while it was stable in the age group 30 to 64 years old. All diabetes risk scores and the HbA1c showed a good prediction for the risk of T2DM in SHIP. Which model or biomarker should be used is driven by its context of use, e.g. the practicability, implementation of interventions and availability of measurement. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Effects of a Risk-based Online Mammography Intervention on Accuracy of Perceived Risk and Mammography Intentions

    PubMed Central

    Seitz, Holli H.; Gibson, Laura; Skubisz, Christine; Forquer, Heather; Mello, Susan; Schapira, Marilyn M.; Armstrong, Katrina; Cappella, Joseph N.

    2016-01-01

    Objective This experiment tested the effects of an individualized risk-based online mammography decision intervention. The intervention employs exemplification theory and the Elaboration Likelihood Model of persuasion to improve the match between breast cancer risk and mammography intentions. Methods 2,918 women ages 35-49 were stratified into two levels of 10-year breast cancer risk (< 1.5%; ≥ 1.5%) then randomly assigned to one of eight conditions: two comparison conditions and six risk-based intervention conditions that varied according to a 2 (amount of content: brief vs. extended) × 3 (format: expository vs. untailored exemplar [example case] vs. tailored exemplar) design. Outcomes included mammography intentions and accuracy of perceived breast cancer risk. Results Risk-based intervention conditions improved the match between objective risk estimates and perceived risk, especially for high-numeracy women with a 10-year breast cancer risk <1.5%. For women with a risk < 1.5%, exemplars improved accuracy of perceived risk and all risk-based interventions increased intentions to wait until age 50 to screen. Conclusion A risk-based mammography intervention improved accuracy of perceived risk and the match between objective risk estimates and mammography intentions. Practice Implications Interventions could be applied in online or clinical settings to help women understand risk and make mammography decisions. PMID:27178707

  13. Risk prediction model: Statistical and artificial neural network approach

    NASA Astrophysics Data System (ADS)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly popular and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand suitable approaches to, and the development and validation process of, risk prediction models. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was carried out. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, artificial neural network approaches to developing prediction models were more accurate than statistical approaches. However, only limited published literature currently discusses which approach is more accurate for risk prediction model development.

  14. Dynamic modeling of environmental risk associated with drilling discharges to marine sediments.

    PubMed

    Durgut, İsmail; Rye, Henrik; Reed, Mark; Smit, Mathijs G D; Ditlevsen, May Kristin

    2015-10-15

    Drilling discharges are complex mixtures of base-fluids, chemicals and particulates, and may, after discharge to the marine environment, result in adverse effects on benthic communities. A numerical model was developed to estimate the fate of drilling discharges in the marine environment, and associated environmental risks. Environmental risk from deposited drilling waste in marine sediments is generally caused by four types of stressors: oxygen depletion, toxicity, burial and change of grain size. In order to properly model these stressors, natural burial, biodegradation and bioturbation processes were also included. Diagenetic equations provide the basis for quantifying environmental risk. These equations are solved numerically by an implicit-central differencing scheme. The sediment model described here is, together with a fate and risk model focusing on the water column, implemented in the DREAM and OSCAR models, both available within the Marine Environmental Modeling Workbench (MEMW) at SINTEF in Trondheim, Norway. Copyright © 2015 Elsevier Ltd. All rights reserved.
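    As a hedged sketch of the implicit central-differencing idea mentioned above, reduced to a bare diffusion-decay equation for a single dissolved species in sediment pore water (all coefficients invented; the actual DREAM/OSCAR sediment model solves much richer diagenetic equations with burial, biodegradation and bioturbation):

```python
import numpy as np

# Invented coefficients for one species: dC/dt = D * d2C/dz2 - k * C
nz, dz, dt = 50, 0.01, 100.0     # grid cells, cell size (m), time step (s)
D, k = 1e-9, 1e-7                # diffusion (m2/s) and first-order decay (1/s)

# Backward Euler with central differences in space: M @ C_new = C_old,
# where M = I - dt*L and L applies D*d2/dz2 - k on the grid.
r = D * dt / dz**2
main = np.full(nz, 1.0 + 2.0 * r + k * dt)
off = np.full(nz - 1, -r)
M = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
# Zero-flux boundaries: fold the missing ghost neighbour into the diagonal.
M[0, 0] -= r
M[-1, -1] -= r

C = np.zeros(nz)
C[0] = 1.0                       # deposit initially concentrated at the surface
for _ in range(1000):
    C = np.linalg.solve(M, C)    # unconditionally stable implicit step
```

    The implicit step stays stable for any time step, which matters when burial and biodegradation time scales differ by orders of magnitude.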

  15. Capability maturity models for offshore organisational management.

    PubMed

    Strutt, J E; Sharp, J V; Terry, E; Miles, R

    2006-12-01

    The goal-setting regime imposed by the UK safety regulator has important implications for an organisation's ability to manage health and safety related risks. Existing approaches to safety assurance based on risk analysis and formal safety assessments are increasingly considered unlikely to create the step-change improvement in safety to which the offshore industry aspires, and alternative approaches are being considered. One approach, which addresses the important issue of organisational behaviour and which can be applied at a very early stage of design, is the capability maturity model (CMM). The paper describes the development of a design safety capability maturity model, outlining the key processes considered necessary to safety achievement, the definition of maturity levels and the scoring methods. The paper discusses how CMM is related to regulatory mechanisms and risk-based decision making, together with the potential of CMM for environmental risk management.

  16. Modeling of Mean-VaR portfolio optimization by risk tolerance when the utility function is quadratic

    NASA Astrophysics Data System (ADS)

    Sukono, Sidi, Pramono; Bon, Abdul Talib bin; Supian, Sudradjat

    2017-03-01

    The problem of investing in financial assets is to choose a portfolio weighting that maximizes expected return while minimizing risk. This paper discusses the modeling of Mean-VaR portfolio optimization with risk tolerance, when the utility function is quadratic. It is assumed that asset returns follow a certain distribution and that portfolio risk is measured by Value-at-Risk (VaR). The optimization of the Mean-VaR portfolio model is carried out using a matrix-algebra approach, the Lagrange multiplier method and the Kuhn-Tucker conditions. The resulting optimal weighting vector depends on the mean return vector of the assets, the identity vector, the covariance matrix between asset returns and the risk-tolerance factor. As a numerical illustration, five stocks traded on the Indonesian stock market are analyzed. From the return data of these five stocks, the weight composition vectors and the efficient surface of the portfolio are obtained. The weight compositions and efficient-surface charts can serve as a guide for investors in making investment decisions.
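    The matrix-algebra weighting the abstract describes can be sketched under the full-investment constraint; the sketch below uses the standard closed form with a risk-tolerance parameter and normal-returns VaR. The asset data are invented (the paper's five Indonesian stocks are not reproduced here), and this is only the constrained mean-variance special case, not the authors' full derivation.

```python
import numpy as np

# Hypothetical mean returns and a constant-correlation covariance for 5 assets.
mu = np.array([0.010, 0.012, 0.008, 0.015, 0.011])
sig = np.array([0.040, 0.050, 0.030, 0.060, 0.045])
corr = np.full((5, 5), 0.3)
np.fill_diagonal(corr, 1.0)
cov = corr * np.outer(sig, sig)

ones = np.ones(5)
inv = np.linalg.inv(cov)

def weights(tau):
    """Efficient weights under the full-investment constraint 1'w = 1,
    parametrized by a risk-tolerance factor tau (tau = 0 gives the
    global minimum-variance portfolio; larger tau tilts toward return)."""
    w_mv = inv @ ones / (ones @ inv @ ones)
    tilt = inv @ mu - (ones @ inv @ mu) / (ones @ inv @ ones) * (inv @ ones)
    return w_mv + tau * tilt

def var95(w):
    """Portfolio 95% VaR assuming normally distributed returns."""
    z95 = 1.645  # standard-normal 95% quantile
    return z95 * np.sqrt(w @ cov @ w) - w @ mu

w0, w1 = weights(0.0), weights(0.1)
```

    Sweeping tau and plotting (VaR, expected return) pairs traces out the efficient surface the abstract refers to.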

  17. Chemical-specific screening criteria for interpretation of biomonitoring data for volatile organic compounds (VOCs)--application of steady-state PBPK model solutions.

    PubMed

    Aylward, Lesa L; Kirman, Chris R; Blount, Ben C; Hays, Sean M

    2010-10-01

    The National Health and Nutrition Examination Survey (NHANES) generates population-representative biomonitoring data for many chemicals including volatile organic compounds (VOCs) in blood. However, no health or risk-based screening values are available to evaluate these data from a health safety perspective or to use in prioritizing among chemicals for possible risk management actions. We gathered existing risk assessment-based chronic exposure reference values such as reference doses (RfDs), reference concentrations (RfCs), tolerable daily intakes (TDIs), cancer slope factors, etc. and key pharmacokinetic model parameters for 47 VOCs. Using steady-state solutions to a generic physiologically-based pharmacokinetic (PBPK) model structure, we estimated chemical-specific steady-state venous blood concentrations across chemicals associated with unit oral and inhalation exposure rates and with chronic exposure at the identified exposure reference values. The geometric means of the slopes relating modeled steady-state blood concentrations to steady-state exposure to a unit oral dose or unit inhalation concentration among 38 compounds with available pharmacokinetic parameters were 12.0 microg/L per mg/kg-d (geometric standard deviation [GSD] of 3.2) and 3.2 microg/L per mg/m(3) (GSD=1.7), respectively. Chemical-specific blood concentration screening values based on non-cancer reference values for both oral and inhalation exposure range from 0.0005 to 100 microg/L; blood concentrations associated with cancer risk-specific doses at the 1E-05 risk level ranged from 5E-06 to 6E-02 microg/L. The distribution of modeled steady-state blood concentrations associated with unit exposure levels across VOCs may provide a basis for estimating blood concentration screening values for VOCs that lack chemical-specific pharmacokinetic data. 
The screening blood concentrations presented here provide a tool for risk assessment-based evaluation of population biomonitoring data for VOCs and are most appropriately applied to central tendency estimates for such datasets. Copyright (c) 2010 Elsevier Inc. All rights reserved.
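    Using the geometric-mean slopes reported in the abstract, translating a chronic reference value into a blood-concentration screening estimate is a single multiplication. A minimal sketch follows; the function name and the example RfD are hypothetical, and chemical-specific slopes should be preferred where pharmacokinetic data exist.

```python
# Geometric-mean steady-state slopes reported in the abstract (across 38 VOCs).
ORAL_SLOPE = 12.0    # ug/L blood per mg/kg-d of chronic oral dose
INHAL_SLOPE = 3.2    # ug/L blood per mg/m3 of continuous inhalation exposure

def blood_screening_value(reference_value, route):
    """Map an exposure reference value (RfD in mg/kg-d, or RfC in mg/m3)
    to a steady-state venous blood concentration in ug/L."""
    slope = {"oral": ORAL_SLOPE, "inhalation": INHAL_SLOPE}[route]
    return reference_value * slope

# A hypothetical RfD of 0.1 mg/kg-d corresponds to roughly 1.2 ug/L in blood.
print(blood_screening_value(0.1, "oral"))
```

    As the abstract notes, such values are most appropriately compared against central-tendency estimates of biomonitoring data, not individual results.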

  18. Job stress models for predicting burnout syndrome: a review.

    PubMed

    Chirico, Francesco

    2016-01-01

    In Europe, Council Directive 89/391 on the improvement of workers' safety and health has emphasized the importance of addressing all occupational risk factors, hence also psychosocial and organizational risk factors. Nevertheless, the construct of "work-related stress" elaborated by EU-OSHA does not fully correspond to "psychosocial" risk, which is a broader category comprising various different psychosocial risk factors. The term "burnout", which has no binding definition, tries to integrate symptoms as well as causes of the burnout process. In Europe, the most important methods developed for work-related stress risk assessment are based on Cox's transactional model of job stress. Nevertheless, there are more specific models for predicting burnout syndrome. This literature review provides an overview of job burnout, highlighting the most important models of job burnout, such as the Job Strain, Effort/Reward Imbalance and Job Demands-Resources models. The difference between these models and Cox's model of job stress is explored.

  19. Meteorological risks as drivers of innovation for agroecosystem management

    NASA Astrophysics Data System (ADS)

    Gobin, Anne; Van de Vyver, Hans; Zamani, Sepideh; Curnel, Yannick; Planchon, Viviane; Verspecht, Ann; Van Huylenbroeck, Guido

    2015-04-01

    Devastating weather-related events recorded in recent years have captured the interest of the general public in Belgium. The MERINOVA project research hypothesis is that meteorological risks act as drivers of environmental innovation in agro-ecosystem management, and this is being tested using a "chain of risk" approach. The major objectives are to (1) assess the probability of extreme meteorological events by means of probability density functions; (2) analyse the impact of extreme events on agro-ecosystems using process-based bio-physical modelling methods; (3) identify the most vulnerable agro-ecosystems using fuzzy multi-criteria and spatial analysis; (4) uncover innovative risk management and adaptation options using actor-network theory and economic modelling; and (5) communicate to research, policy and practitioner communities using web-based techniques. Generalized Extreme Value (GEV) theory was used to model annual rainfall maxima based on location, scale and shape parameters that determine the centre of the distribution, the spread around the location parameter and the upper-tail decay, respectively. Likewise, the distributions of consecutive rainy days, rainfall deficits and extreme 24-hour rainfall were modelled. Spatial interpolation of GEV-derived return levels resulted in maps of extreme precipitation, precipitation deficits and wet periods. The degree of temporal overlap between extreme weather conditions and sensitive periods in the agro-ecosystem was determined using a bio-physically based modelling framework that couples phenological models, a soil water balance, crop growth and environmental models. 20-year return values were derived for frost, heat stress, drought, waterlogging and field access during different sensitive stages for different arable crops.
Extreme yield values were detected from detrended long term arable yields and relationships were found with soil moisture conditions, heat stress or other meteorological variables during the season. A methodology for identifying agro-ecosystem vulnerability was developed using spatially explicit information and was tested for arable crop production in Belgium. The different components of vulnerability for a region include spatial information on meteorology, soil available water content, soil erosion, the degree of waterlogging, crop share and the diversity of potato varieties. The level of vulnerability and resilience of an agro-ecosystem is also determined by risk management. The types of agricultural risk and their relative importance differ across sectors and farm types. Risk types are further distinguished according to production, market, institutional, financial and liability risks. Strategies are often combined in the risk management strategy of a farmer and include reduction and prevention, mitigation, coping and impact reduction. Based on an extensive literature review, a portfolio of potential strategies was identified at farm, market and policy level. Research hypotheses were tested using an on-line questionnaire on knowledge of agricultural risk, measuring the general risk aversion of the farmer and risk management strategies. The "chain of risk" approach adopted as a research methodology allows for investigating the hypothesis that meteorological risks act as drivers for agricultural innovation. Risks related to extreme weather events in Belgium are mainly caused by heat, frost, excess rainfall, drought and storms, and their impact is predominantly felt by arable, horticultural and extensive dairy farmers. Quantification of the risk is evaluated in terms of probability of occurrence, magnitude, frequency and extent of impact on several agro-ecosystems services. 
    The spatial extent of vulnerability is mapped by integrating different layers of geo-information, while risk management is analysed using questionnaires and economic modelling methods. Future work will concentrate on the further development and testing of the currently developed modelling methodologies (https://merinova.vito.be). The research is funded by the Belgian Science Policy Organisation (Belspo) under contract nr SD/RI/03A.
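    The GEV return levels central to the analysis above have a closed form in the location, scale and shape parameters. A minimal sketch follows, with invented rainfall parameters, since the Belgian fits are not given in the abstract:

```python
import numpy as np

def gev_return_level(loc, scale, shape, T):
    """T-year return level for annual maxima distributed GEV(loc, scale, shape):
    the level exceeded on average once every T years."""
    y = -np.log(1.0 - 1.0 / T)          # reduced Gumbel variate
    if abs(shape) < 1e-12:              # Gumbel limit as shape -> 0
        return loc - scale * np.log(y)
    return loc + (scale / shape) * (y ** (-shape) - 1.0)

# Hypothetical GEV fit to annual-maximum daily rainfall (mm):
z20 = gev_return_level(loc=45.0, scale=12.0, shape=0.1, T=20)
z100 = gev_return_level(loc=45.0, scale=12.0, shape=0.1, T=100)
```

    Evaluating this at each interpolation node is what turns fitted parameter surfaces into the return-level maps described above.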

  20. A time series modeling approach in risk appraisal of violent and sexual recidivism.

    PubMed

    Bani-Yaghoub, Majid; Fedoroff, J Paul; Curry, Susan; Amundsen, David E

    2010-10-01

    For over half a century, various clinical and actuarial methods have been employed to assess the likelihood of violent recidivism. Yet there is a need for new methods that can improve the accuracy of recidivism predictions. This study proposes a new time series modeling approach that generates high levels of predictive accuracy over short and long periods of time. The proposed approach outperformed two widely used actuarial instruments (i.e., the Violence Risk Appraisal Guide and the Sex Offender Risk Appraisal Guide). Furthermore, analysis of temporal risk variations based on specific time series models can add valuable information into risk assessment and management of violent offenders.

  1. A methodology for modeling regional terrorism risk.

    PubMed

    Chatterjee, Samrat; Abkowitz, Mark D

    2011-07-01

    Over the past decade, terrorism risk has become a prominent consideration in protecting the well-being of individuals and organizations. More recently, there has been interest in not only quantifying terrorism risk, but also placing it in the context of an all-hazards environment in which consideration is given to accidents and natural hazards, as well as intentional acts. This article discusses the development of a regional terrorism risk assessment model designed for this purpose. The approach taken is to model terrorism risk as a dependent variable, expressed in expected annual monetary terms, as a function of attributes of population concentration and critical infrastructure. This allows for an assessment of regional terrorism risk in and of itself, as well as in relation to man-made accident and natural hazard risks, so that mitigation resources can be allocated in an effective manner. The adopted methodology incorporates elements of two terrorism risk modeling approaches (event-based models and risk indicators), producing results that can be utilized at various jurisdictional levels. The validity, strengths, and limitations of the model are discussed in the context of a case study application within the United States. © 2011 Society for Risk Analysis.
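    The dependent variable described above, risk expressed in expected annual monetary terms, reduces to a probability-times-consequence sum over attack scenarios. The sketch below uses invented figures; in the article these quantities are derived from population-concentration and critical-infrastructure attributes.

```python
# Invented annual attack probabilities and monetary consequences for a region.
scenarios = {
    "chemical_plant": (1e-4, 2.0e9),   # (annual probability, loss in USD)
    "transit_hub":    (5e-4, 4.0e8),
    "stadium":        (2e-4, 6.0e8),
}

def expected_annual_loss(scenarios):
    """Regional terrorism risk as expected annual monetary loss:
    the sum of probability x consequence over attack scenarios."""
    return sum(p * loss for p, loss in scenarios.values())

eal = expected_annual_loss(scenarios)
```

    Expressing accident and natural-hazard risks in the same expected annual monetary units is what makes the all-hazards comparison, and hence mitigation budgeting, possible.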

  2. Lessons learnt from tropical cyclone losses

    NASA Astrophysics Data System (ADS)

    Honegger, Caspar; Wüest, Marc; Zimmerli, Peter; Schoeck, Konrad

    2016-04-01

    Swiss Re has a long history of developing natural catastrophe loss models. The tropical cyclone USA and China models are examples of second-generation event-based models. Both are based on basin-wide probabilistic track sets and explicitly calculate the losses from the sub-perils wind and storm surge in an insurance portfolio. Based on these models, we present two case studies. China: a view on recent typhoon loss history. Over the last 20 years only very few major tropical cyclones have caused severe insurance losses in the Pearl River Delta region and Shanghai, the two main exposure clusters along China's southeast coast. Several storms have made landfall in China every year, but most struck areas with relatively low insured values. With this study, we make the point that typhoon landfalls in China have a strong hit-or-miss character and that the available insured loss experience is too short to form a representative view of risk. Historical storm tracks and a simple loss model applied to a market portfolio - all from publicly available data - are sufficient to illustrate this. An event-based probabilistic model is necessary for a reliable judgement of the typhoon risk in China. New York: current and future tropical cyclone risk. In the aftermath of Hurricane Sandy in 2012, Swiss Re supported the City of New York in identifying ways to significantly improve its resilience to severe weather and climate change. Swiss Re provided a quantitative assessment of potential climate-related risks facing the city as well as measures that could reduce those impacts.

  3. Population heterogeneity in the salience of multiple risk factors for adolescent delinquency.

    PubMed

    Lanza, Stephanie T; Cooper, Brittany R; Bray, Bethany C

    2014-03-01

    To present mixture regression analysis as an alternative to more standard regression analysis for predicting adolescent delinquency. We demonstrate how mixture regression analysis allows for the identification of population subgroups defined by the salience of multiple risk factors. We identified population subgroups (i.e., latent classes) of individuals based on their coefficients in a regression model predicting adolescent delinquency from eight previously established risk indices drawn from the community, school, family, peer, and individual levels. The study included N = 37,763 10th-grade adolescents who participated in the Communities That Care Youth Survey. Standard, zero-inflated, and mixture Poisson and negative binomial regression models were considered. Standard and mixture negative binomial regression models were selected as optimal. The five-class regression model was interpreted based on the class-specific regression coefficients, indicating that risk factors had varying salience across classes of adolescents. Standard regression showed that all risk factors were significantly associated with delinquency. Mixture regression provided more nuanced information, suggesting a unique set of risk factors that were salient for different subgroups of adolescents. Implications for the design of subgroup-specific interventions are discussed. Copyright © 2014 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.

  4. Modeling a theory-based approach to examine the influence of neurocognitive impairment on HIV risk reduction behaviors among drug users in treatment

    PubMed Central

    Huedo-Medina, Tania B.; Shrestha, Roman; Copenhaver, Michael

    2016-01-01

    Although it is well established that people who use drugs (PWUDs) are characterized by significant neurocognitive impairment (NCI), there has been no examination of how NCI may impede one’s ability to accrue the expected HIV prevention benefits stemming from an otherwise efficacious intervention. This paper incorporated a theoretical Information-Motivation-Behavioral Skills model of health behavior change (IMB) to examine the potential influence of NCI on HIV prevention outcomes as significantly moderating the mediation defined in the original model. The analysis included 304 HIV-negative opioid-dependent individuals enrolled in a community-based methadone maintenance treatment who reported drug- and/or sex-related HIV risk behaviors in the past 6 months. Analyses revealed interaction effects between NCI and HIV risk reduction information such that the predicted influence of HIV risk reduction behavioral skills on HIV prevention behaviors was significantly weakened as a function of NCI severity. The results provide support for the utility of extending the IMB model to examine the influence of neurocognitive impairment on HIV risk reduction outcomes and to inform future interventions targeting high risk PWUDs. PMID:27052845

  5. Modeling a Theory-Based Approach to Examine the Influence of Neurocognitive Impairment on HIV Risk Reduction Behaviors Among Drug Users in Treatment.

    PubMed

    Huedo-Medina, Tania B; Shrestha, Roman; Copenhaver, Michael

    2016-08-01

    Although it is well established that people who use drugs (PWUDs) are characterized by significant neurocognitive impairment (NCI), there has been no examination of how NCI may impede one's ability to accrue the expected HIV prevention benefits stemming from an otherwise efficacious intervention. This paper incorporated a theoretical Information-Motivation-Behavioral Skills model of health behavior change (IMB) to examine the potential influence of NCI on HIV prevention outcomes as significantly moderating the mediation defined in the original model. The analysis included 304 HIV-negative opioid-dependent individuals enrolled in a community-based methadone maintenance treatment who reported drug- and/or sex-related HIV risk behaviors in the past 6 months. Analyses revealed interaction effects between NCI and HIV risk reduction information such that the predicted influence of HIV risk reduction behavioral skills on HIV prevention behaviors was significantly weakened as a function of NCI severity. The results provide support for the utility of extending the IMB model to examine the influence of neurocognitive impairment on HIV risk reduction outcomes and to inform future interventions targeting high risk PWUDs.

  6. Simulating Runoff from a Grid Based Mercury Model: Flow Comparisons

    EPA Science Inventory

    Several mercury cycling models, including general mass balance approaches, mixed-batch reactors in streams or lakes, or regional process-based models, exist to assess the ecological exposure risks associated with anthropogenically increased atmospheric mercury (Hg) deposition, so...

  7. Effects of a risk-based online mammography intervention on accuracy of perceived risk and mammography intentions.

    PubMed

    Seitz, Holli H; Gibson, Laura; Skubisz, Christine; Forquer, Heather; Mello, Susan; Schapira, Marilyn M; Armstrong, Katrina; Cappella, Joseph N

    2016-10-01

    This experiment tested the effects of an individualized risk-based online mammography decision intervention. The intervention employs exemplification theory and the Elaboration Likelihood Model of persuasion to improve the match between breast cancer risk and mammography intentions. 2918 women ages 35-49 were stratified into two levels of 10-year breast cancer risk (<1.5%; ≥1.5%) then randomly assigned to one of eight conditions: two comparison conditions and six risk-based intervention conditions that varied according to a 2 (amount of content: brief vs. extended) x 3 (format: expository vs. untailored exemplar [example case] vs. tailored exemplar) design. Outcomes included mammography intentions and accuracy of perceived breast cancer risk. Risk-based intervention conditions improved the match between objective risk estimates and perceived risk, especially for high-numeracy women with a 10-year breast cancer risk <1.5%. For women with a risk <1.5%, exemplars improved accuracy of perceived risk and all risk-based interventions increased intentions to wait until age 50 to screen. A risk-based mammography intervention improved accuracy of perceived risk and the match between objective risk estimates and mammography intentions. Interventions could be applied in online or clinical settings to help women understand risk and make mammography decisions. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  8. Osteoporosis risk prediction using machine learning and conventional methods.

    PubMed

    Kim, Sung Kean; Yoo, Tae Keun; Oh, Ein; Kim, Deok Won

    2013-01-01

    A number of clinical decision tools for osteoporosis risk assessment have been developed to select postmenopausal women for the measurement of bone mineral density. We developed and validated machine learning models with the aim of more accurately identifying the risk of osteoporosis in postmenopausal women, and compared them with a conventional clinical decision tool, the osteoporosis self-assessment tool (OST). We collected medical records from Korean postmenopausal women based on the Korea National Health and Nutrition Surveys (KNHANES V-1). The training data set was used to construct models based on popular machine learning algorithms such as support vector machines (SVM), random forests (RF), artificial neural networks (ANN), and logistic regression (LR), using various predictors associated with low bone density. The learning models were compared with the OST. SVM had a significantly better area under the receiver operating characteristic curve (AUC) than ANN, LR, and OST. Validation on the test set showed that SVM predicted osteoporosis risk with an AUC of 0.827, accuracy of 76.7%, sensitivity of 77.8%, and specificity of 76.0%. We were the first to compare the performance of machine learning and conventional methods for osteoporosis prediction using population-based epidemiological data. The machine learning methods may be effective tools for identifying postmenopausal women at high risk for osteoporosis.
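    The AUC comparisons at the centre of this study reduce to the Mann-Whitney statistic: the probability that a randomly chosen case outscores a randomly chosen control. A self-contained sketch, with invented risk scores:

```python
import numpy as np

def roc_auc(case_scores, control_scores):
    """AUC as the Mann-Whitney probability that a randomly chosen case
    receives a higher predicted risk than a randomly chosen control
    (ties count half)."""
    cases = np.asarray(case_scores, dtype=float)[:, None]
    controls = np.asarray(control_scores, dtype=float)[None, :]
    wins = (cases > controls).sum() + 0.5 * (cases == controls).sum()
    return wins / (cases.size * controls.size)

# Invented predicted risks for four osteoporosis cases and four controls:
auc = roc_auc([0.9, 0.8, 0.7, 0.4], [0.6, 0.3, 0.2, 0.1])   # 15 of 16 pairs won
```

    This rank-based reading of the AUC is why it compares classifiers on discrimination alone, independent of any decision threshold.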

  9. Environmental Risk Assessment Strategy for Nanomaterials.

    PubMed

    Scott-Fordsmand, Janeck J; Peijnenburg, Willie J G M; Semenzin, Elena; Nowack, Bernd; Hunt, Neil; Hristozov, Danail; Marcomini, Antonio; Irfan, Muhammad-Adeel; Jiménez, Araceli Sánchez; Landsiedel, Robert; Tran, Lang; Oomen, Agnes G; Bos, Peter M J; Hund-Rinke, Kerstin

    2017-10-19

    An Environmental Risk Assessment (ERA) for nanomaterials (NMs) is outlined in this paper. Contrary to other recent papers on the subject, the main data requirements, models and advancements within each of the four risk assessment domains are described, i.e., in the: (i) materials, (ii) release, fate and exposure, (iii) hazard and (iv) risk characterisation domains. The material, which is obviously the foundation for any risk assessment, should be described according to the legislatively required characterisation data. Characterisation data will also be used at various levels within the ERA, e.g., in exposure modelling. The release, fate and exposure data and models cover the input for environmental distribution models in order to identify the potential exposure scenarios (PES) and relevant exposure scenarios (RES) and, subsequently, the possible release routes, both with regard to the compartment(s) in which NMs are distributed and the factors determining their fate within each environmental compartment. The initial outcome in the risk characterisation will be a generic Predicted Environmental Concentration (PEC), but a refined PEC can be obtained by applying specific exposure models for relevant media. The hazard information covers a variety of representative, relevant and reliable organisms and/or functions, relevant for the RES and enabling a hazard characterisation. The initial outcome will be a hazard characterisation in test systems allowing estimation of a Predicted No-Effect Concentration (PNEC), either based on uncertainty factors or on an NM-adapted version of the Species Sensitivity Distributions approach. The risk characterisation will either be based on a deterministic risk ratio approach (i.e., PEC/PNEC) or on an overlay of probability distributions, i.e., exposure and hazard distributions, using the nano-relevant models.

  10. Environmental Risk Assessment Strategy for Nanomaterials

    PubMed Central

    Scott-Fordsmand, Janeck J.; Nowack, Bernd; Hunt, Neil; Hristozov, Danail; Marcomini, Antonio; Irfan, Muhammad-Adeel; Jiménez, Araceli Sánchez; Landsiedel, Robert; Tran, Lang; Oomen, Agnes G.; Bos, Peter M. J.

    2017-01-01

    An Environmental Risk Assessment (ERA) for nanomaterials (NMs) is outlined in this paper. Contrary to other recent papers on the subject, the main data requirements, models and advancements within each of the four risk assessment domains are described, i.e., in the: (i) materials, (ii) release, fate and exposure, (iii) hazard and (iv) risk characterisation domains. The material, which is obviously the foundation for any risk assessment, should be described according to the legislatively required characterisation data. Characterisation data will also be used at various levels within the ERA, e.g., in exposure modelling. The release, fate and exposure data and models cover the input for environmental distribution models in order to identify the potential exposure scenarios (PES) and relevant exposure scenarios (RES) and, subsequently, the possible release routes, both with regard to the compartment(s) in which NMs are distributed and the factors determining their fate within each environmental compartment. The initial outcome in the risk characterisation will be a generic Predicted Environmental Concentration (PEC), but a refined PEC can be obtained by applying specific exposure models for relevant media. The hazard information covers a variety of representative, relevant and reliable organisms and/or functions, relevant for the RES and enabling a hazard characterisation. The initial outcome will be a hazard characterisation in test systems allowing estimation of a Predicted No-Effect Concentration (PNEC), either based on uncertainty factors or on an NM-adapted version of the Species Sensitivity Distributions approach. The risk characterisation will either be based on a deterministic risk ratio approach (i.e., PEC/PNEC) or on an overlay of probability distributions, i.e., exposure and hazard distributions, using the nano-relevant models. PMID:29048395
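    The deterministic risk ratio and the probabilistic overlay described above can be sketched as follows; all concentrations and distribution parameters are invented for illustration, not measured NM values:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Deterministic risk ratio: RQ = PEC / PNEC, risk flagged when RQ >= 1.
    pec, pnec = 0.8, 2.0          # ug/L, illustrative placeholders
    rq = pec / pnec

    # Probabilistic overlay: treat exposure and hazard as lognormal
    # distributions and estimate P(exposure exceeds hazard threshold).
    exposure = rng.lognormal(mean=np.log(0.8), sigma=0.6, size=100_000)
    hazard = rng.lognormal(mean=np.log(2.0), sigma=0.4, size=100_000)
    p_exceed = np.mean(exposure > hazard)
    ```

    The overlay makes visible what the point ratio hides: even with RQ well below 1, the tails of the two distributions can overlap with non-trivial probability.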

  11. A PHYSIOLOGICALLY BASED PHARMACOKINETIC/PHARMACODYNAMIC (PBPK/PD) MODEL FOR ESTIMATION OF CUMULATIVE RISK FROM EXPOSURE TO THREE N-METHYL CARBAMATES: CARBARYL, ALDICARB, AND CARBOFURAN

    EPA Science Inventory

    A physiologically-based pharmacokinetic (PBPK) model for a mixture of N-methyl carbamate pesticides was developed based on single chemical models. The model was used to compare urinary metabolite concentrations to levels from National Health and Nutrition Examination Survey (NHA...

  12. GEO Collisional Risk Assessment Based on Analysis of NASA-WISE Data and Modeling

    DTIC Science & Technology

    2015-10-18

    GEO Collisional Risk Assessment Based on Analysis of NASA-WISE Data and Modeling Jeremy Murray Krezan1, Samantha Howard1, Phan D. Dao1, Derek...Surka2 1AFRL Space Vehicles Directorate, 2Applied Technology Associates Incorporated From December 2009 through 2011 the NASA Wide-Field Infrared...of known debris. The NASA-WISE GEO belt debris population adds potentially thousands of previously uncataloged objects. This paper describes

  13. Case-based Influence in Conflict Management

    DTIC Science & Technology

    2014-10-31

    AFRL-OSR-VA-TR-2014-0337 CASE-BASED INFLUENCE IN CONFLICT MANAGEMENT Robert Axelrod ARTIS RESEARCH & RISK MODELING Final Report 10/31/2014...FA9550-10-1-0373 Dr. Robert Axelrod - PI Dr. Richard Davis- PD ARTIS Research & Risk Modeling ARTIS 5741 Canyon Ridge North Cave Creek, AZ 85331-9318...analysis of the timing of cyber conflict that quickly received attention from over 30 countries. 3 1 Axelrod , Final Report and Publications Final

  14. Empirically Based Composite Fracture Prediction Model From the Global Longitudinal Study of Osteoporosis in Postmenopausal Women (GLOW)

    PubMed Central

    Compston, Juliet E.; Chapurlat, Roland D.; Pfeilschifter, Johannes; Cooper, Cyrus; Hosmer, David W.; Adachi, Jonathan D.; Anderson, Frederick A.; Díez-Pérez, Adolfo; Greenspan, Susan L.; Netelenbos, J. Coen; Nieves, Jeri W.; Rossini, Maurizio; Watts, Nelson B.; Hooven, Frederick H.; LaCroix, Andrea Z.; March, Lyn; Roux, Christian; Saag, Kenneth G.; Siris, Ethel S.; Silverman, Stuart; Gehlbach, Stephen H.

    2014-01-01

    Context: Several fracture prediction models that combine fractures at different sites into a composite outcome are in current use. However, to the extent that individual fracture sites have differing risk factor profiles, model discrimination is impaired. Objective: The objective of the study was to improve model discrimination by developing a 5-year composite fracture prediction model for fracture sites that display similar risk profiles. Design: This was a prospective, observational cohort study. Setting: The study was conducted at primary care practices in 10 countries. Patients: Women aged 55 years or older participated in the study. Intervention: Self-administered questionnaires collected data on patient characteristics, fracture risk factors, and previous fractures. Main Outcome Measure: The main outcome is time to first clinical fracture of hip, pelvis, upper leg, clavicle, or spine, each of which exhibits a strong association with advanced age. Results: Of four composite fracture models considered, model discrimination (c index) is highest for an age-related fracture model (c index of 0.75, 47 066 women), and lowest for Fracture Risk Assessment Tool (FRAX) major fracture and a 10-site model (c indices of 0.67 and 0.65). The unadjusted increase in fracture risk for an additional 10 years of age ranges from 80% to 180% for the individual bones in the age-associated model. Five other fracture sites not considered for the age-associated model (upper arm/shoulder, rib, wrist, lower leg, and ankle) have age associations for an additional 10 years of age ranging from a 10% decrease to a 60% increase. Conclusions: After examining results for 10 different bone fracture sites, advanced age appeared to be the single best possibility for uniting several different sites, resulting in an empirically based composite fracture risk model. PMID:24423345
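    Model discrimination of the kind reported above can be illustrated with a toy survival cohort (not GLOW data); Harrell's concordance index is one common c-index estimator, sketched here with invented hazard and censoring assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def c_index(time, event, risk):
        """Harrell's c: among usable pairs (the earlier time is an observed
        event), the fraction where the earlier-failing subject has the
        higher predicted risk; ties in risk count half."""
        conc = ties = usable = 0
        n = len(time)
        for i in range(n):
            for j in range(n):
                if event[i] and time[i] < time[j]:
                    usable += 1
                    if risk[i] > risk[j]:
                        conc += 1
                    elif risk[i] == risk[j]:
                        ties += 1
        return (conc + 0.5 * ties) / usable

    # Toy cohort: fracture hazard increases with age (exponential times);
    # censoring is marked at random, a simplification for the sketch.
    n = 300
    age = rng.uniform(55, 90, size=n)
    time = rng.exponential(scale=1.0 / np.exp(0.05 * (age - 55)), size=n)
    event = rng.random(n) < 0.7          # ~30% censored
    risk = 0.05 * age                    # age-only risk score

    c = c_index(time, event, risk)
    ```

    With an age-only score and an age-driven hazard, c lands well above 0.5 but below perfect discrimination, mirroring the 0.65-0.75 range in the abstract.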

  15. Dynamic building risk assessment theoretic model for rainstorm-flood utilizing ABM and ABS

    NASA Astrophysics Data System (ADS)

    Lai, Wenze; Li, Wenbo; Wang, Hailei; Huang, Yingliang; Wu, Xuelian; Sun, Bingyun

    2015-12-01

    Flood is one of the natural disasters causing the worst losses in the world. Flood disaster risk must be assessed so that those losses can be reduced. Practical disaster management work needs dynamic building-level risk results. The rainstorm flood disaster system is a typical complex system. From the viewpoint of complex system theory, flood disaster risk is the result of interactions among hazard-affected objects, rainstorm flood hazard factors, and hazard environments. Agent-based modeling (ABM) is an important tool for complex system modeling. A rainstorm-flood building risk dynamic assessment method (RFBRDAM) is proposed using ABM in this paper. The interior structures and procedures of the different agents in the proposed method were designed. On the NetLogo platform, the proposed method was implemented to assess building risk changes during the rainstorm flood disaster in the Huaihe River Basin using agent-based simulation (ABS). The results indicated that the proposed method can dynamically assess building risk over the whole process of a rainstorm flood disaster. The results of this paper can provide a new approach for dynamic assessment of building risk in flood disasters and for flood disaster management.
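    A minimal sketch of the ABM idea (hypothetical building and hazard agents with invented parameters, far simpler than RFBRDAM) might look like:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Building agents carry an elevation and a vulnerability; a hazard
    # "agent" (the water level) evolves over time; building risk is
    # re-assessed at every simulation step.
    n_buildings = 50
    elevation = rng.uniform(0.0, 3.0, size=n_buildings)      # m above datum
    vulnerability = rng.uniform(0.3, 1.0, size=n_buildings)  # damage per m depth

    water_levels = np.concatenate([np.linspace(0.0, 2.5, 10),   # rising limb
                                   np.linspace(2.5, 0.5, 10)])  # recession

    risk_history = []
    for level in water_levels:
        depth = np.clip(level - elevation, 0.0, None)  # inundation per building
        risk = vulnerability * depth                   # dynamic building risk
        risk_history.append(risk.mean())

    peak_step = int(np.argmax(risk_history))
    ```

    The point of the sketch is the dynamic part: risk is a trajectory over the whole flood process, peaking with the hydrograph, rather than a single static value.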

  16. Beyond stereotypes of adolescent risk taking: Placing the adolescent brain in developmental context☆

    PubMed Central

    Romer, Daniel; Reyna, Valerie F.; Satterthwaite, Theodore D.

    2017-01-01

    Recent neuroscience models of adolescent brain development attribute the morbidity and mortality of this period to structural and functional imbalances between more fully developed limbic regions that subserve reward and emotion as opposed to those that enable cognitive control. We challenge this interpretation of adolescent development by distinguishing risk-taking that peaks during adolescence (sensation seeking and impulsive action) from risk taking that declines monotonically from childhood to adulthood (impulsive choice and other decisions under known risk). Sensation seeking is primarily motivated by exploration of the environment under ambiguous risk contexts, while impulsive action, which is likely to be maladaptive, is more characteristic of a subset of youth with weak control over limbic motivation. Risk taking that declines monotonically from childhood to adulthood occurs primarily under conditions of known risks and reflects increases in executive function as well as aversion to risk based on increases in gist-based reasoning. We propose an alternative Lifespan Wisdom Model that highlights the importance of experience gained through exploration during adolescence. We propose, therefore, that brain models that recognize the adaptive roles that cognition and experience play during adolescence provide a more complete and helpful picture of this period of development. PMID:28777995

  17. Surface knowledge and risks to landing and roving - The scale problem

    NASA Technical Reports Server (NTRS)

    Bourke, Roger D.

    1991-01-01

    The role of surface information in the performance of surface exploration missions is discussed. Accurate surface models based on direct measurements or inference are considered to be an important component in mission risk management. These models can be obtained using high resolution orbital photography or a combination of laser profiling, thermal inertia measurements, and/or radar. It is concluded that strategies for Martian exploration should use high confidence models to achieve maximum performance and low risk.

  18. A framework for quantifying net benefits of alternative prognostic models

    PubMed Central

    Rapsomaniki, Eleni; White, Ian R; Wood, Angela M; Thompson, Simon G

    2012-01-01

    New prognostic models are traditionally evaluated using measures of discrimination and risk reclassification, but these do not take full account of the clinical and health economic context. We propose a framework for comparing prognostic models by quantifying the public health impact (net benefit) of the treatment decisions they support, assuming a set of predetermined clinical treatment guidelines. The change in net benefit is more clinically interpretable than changes in traditional measures and can be used in full health economic evaluations of prognostic models used for screening and allocating risk reduction interventions. We extend previous work in this area by quantifying net benefits in life years, thus linking prognostic performance to health economic measures; by taking full account of the occurrence of events over time; and by considering estimation and cross-validation in a multiple-study setting. The method is illustrated in the context of cardiovascular disease risk prediction using an individual participant data meta-analysis. We estimate the number of cardiovascular-disease-free life years gained when statin treatment is allocated based on a risk prediction model with five established risk factors instead of a model with just age, gender and region. We explore methodological issues associated with the multistudy design and show that cost-effectiveness comparisons based on the proposed methodology are robust against a range of modelling assumptions, including adjusting for competing risks. Copyright © 2011 John Wiley & Sons, Ltd. PMID:21905066

  19. Combining operational models and data into a dynamic vessel risk assessment tool for coastal regions

    NASA Astrophysics Data System (ADS)

    Fernandes, R.; Braunschweig, F.; Lourenço, F.; Neves, R.

    2015-07-01

    The technological evolution in computational capacity, data acquisition systems, numerical modelling and operational oceanography is providing opportunities for designing and building holistic approaches and complex tools for newer and more efficient management (planning, prevention and response) of coastal water pollution risk events. A combined methodology to dynamically estimate time- and space-variable shoreline risk levels from ships has been developed, integrating numerical metocean forecasts and oil spill simulations with vessel tracking from automatic identification systems (AIS). The risk rating combines the likelihood of an oil spill occurring from a vessel navigating in a study area - the Portuguese continental shelf - with the assessed consequences to the shoreline. The spill likelihood is based on dynamic marine weather conditions and statistical information from previous accidents. The shoreline consequences reflect the virtual spilled oil amount reaching the shoreline and its environmental and socio-economic vulnerabilities. The oil reaching the shoreline is quantified with an oil spill fate and behaviour model running multiple virtual spills from vessels over time. Shoreline risks can be computed in real time or from previously obtained data. Results show the ability of the proposed methodology to produce risk estimates that are properly sensitive to dynamic metocean conditions and to oil transport behaviour. The integration of meteo-oceanic and oil spill models with coastal vulnerability and AIS data in the quantification of risk enhances maritime situational awareness and decision support, providing a more realistic approach to the assessment of shoreline impacts.
The risk assessment from historical data can help identify typical risk patterns and "hot spots" or support sensitivity analyses for specific conditions, whereas real-time risk levels can be used in the prioritization of individual ships and geographical areas, strategic tug positioning and the implementation of dynamic risk-based vessel traffic monitoring.
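    The likelihood-consequence combination described above might be sketched as follows; all rates, oil amounts and vulnerability weights are invented for illustration:

    ```python
    import numpy as np

    # Per-vessel shoreline risk = spill likelihood x shoreline consequence,
    # both time-varying: the likelihood scales a base accident rate by
    # weather severity, and the consequence weights the virtual spilled
    # oil reaching shore by shoreline vulnerability.
    base_rate = np.array([1e-5, 5e-6, 2e-5])     # per-vessel spill likelihood
    weather_factor = np.array([1.0, 3.0, 3.0])   # storm multiplies likelihood
    oil_ashore = np.array([120.0, 800.0, 50.0])  # tonnes reaching shoreline
    vulnerability = np.array([0.9, 0.7, 0.2])    # shoreline sensitivity, 0-1

    risk = base_rate * weather_factor * oil_ashore * vulnerability
    priority = np.argsort(risk)[::-1]            # vessels to monitor first
    ```

    Re-evaluating this product as the weather and AIS tracks update is what makes the rating dynamic; the ranking, not the absolute value, drives prioritization.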

  20. An Integrated Scenario Ensemble-Based Framework for Hurricane Evacuation Modeling: Part 2-Hazard Modeling.

    PubMed

    Blanton, Brian; Dresback, Kendra; Colle, Brian; Kolar, Randy; Vergara, Humberto; Hong, Yang; Leonardo, Nicholas; Davidson, Rachel; Nozick, Linda; Wachtendorf, Tricia

    2018-04-25

    Hurricane track and intensity can change rapidly in unexpected ways, thus making predictions of hurricanes and related hazards uncertain. This inherent uncertainty often translates into suboptimal decision-making outcomes, such as unnecessary evacuation. Representing this uncertainty is thus critical in evacuation planning and related activities. We describe a physics-based hazard modeling approach that (1) dynamically accounts for the physical interactions among hazard components and (2) captures hurricane evolution uncertainty using an ensemble method. This loosely coupled model system provides a framework for probabilistic water inundation and wind speed levels for a new, risk-based approach to evacuation modeling, described in a companion article in this issue. It combines the Weather Research and Forecasting (WRF) meteorological model, the Coupled Routing and Excess STorage (CREST) hydrologic model, and the ADvanced CIRCulation (ADCIRC) storm surge, tide, and wind-wave model to compute inundation levels and wind speeds for an ensemble of hurricane predictions. Perturbations to WRF's initial and boundary conditions and different model physics/parameterizations generate an ensemble of storm solutions, which are then used to drive the coupled hydrologic + hydrodynamic models. Hurricane Isabel (2003) is used as a case study to illustrate the ensemble-based approach. The inundation, river runoff, and wind hazard results are strongly dependent on the accuracy of the mesoscale meteorological simulations, which improves with decreasing lead time to hurricane landfall. The ensemble envelope brackets the observed behavior while providing "best-case" and "worst-case" scenarios for the subsequent risk-based evacuation model. © 2018 Society for Risk Analysis.
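    The ensemble-envelope idea can be sketched with synthetic inundation series (invented surge shape and perturbation factors, not WRF/ADCIRC output):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Each ensemble member gives a time series of inundation depth at one
    # location; the envelope is the member-wise min/max across the
    # ensemble, bracketing best-case and worst-case scenarios.
    n_members, n_times = 20, 48
    base = 1.5 * np.exp(-0.5 * ((np.arange(n_times) - 24) / 6.0) ** 2)  # surge pulse
    members = (base * rng.uniform(0.6, 1.6, size=(n_members, 1))
               + rng.normal(scale=0.05, size=(n_members, n_times)))

    best_case = members.min(axis=0)    # lower envelope
    worst_case = members.max(axis=0)   # upper envelope
    ```

    A downstream evacuation model can then be driven by the envelope bounds rather than by a single deterministic forecast.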

  1. Recent advances in mathematical modeling of developmental abnormalities using mechanistic information.

    PubMed

    Kavlock, R J

    1997-01-01

    During the last several years, significant changes in the risk assessment process for developmental toxicity of environmental contaminants have begun to emerge. The first of these changes is the development and beginning use of statistically based dose-response models [the benchmark dose (BMD) approach] that better utilize data derived from existing testing approaches. Accompanying this change is the greater emphasis placed on understanding and using mechanistic information to yield more accurate, reliable, and less uncertain risk assessments. The next stage in the evolution of risk assessment will be the use of biologically based dose-response (BBDR) models that begin to build into the statistically based models factors related to the underlying kinetic, biochemical, and/or physiologic processes perturbed by a toxicant. Such models are now emerging from several research laboratories. The introduction of quantitative models and the incorporation of biologic information into them has pointed to the need for even more sophisticated modifications for which we offer the term embryologically based dose-response (EBDR) models. Because these models would be based upon the understanding of normal morphogenesis, they represent a quantum leap in our thinking, but their complexity presents daunting challenges both to the developmental biologist and the developmental toxicologist. Implementation of these models will require extensive communication between developmental toxicologists, molecular embryologists, and biomathematicians. The remarkable progress in the understanding of mammalian embryonic development at the molecular level that has occurred over the last decade combined with advances in computing power and computational models should eventually enable these as yet hypothetical models to be brought into use.

  2. Venous thromboembolism prevention guidelines for medical inpatients: mind the (implementation) gap.

    PubMed

    Maynard, Greg; Jenkins, Ian H; Merli, Geno J

    2013-10-01

    Hospital-associated nonsurgical venous thromboembolism (VTE) is an important problem addressed by new guidelines from the American College of Physicians (ACP) and American College of Chest Physicians (AT9). Narrative review and critique. Both guidelines discount asymptomatic VTE outcomes and caution against overprophylaxis, but have different methodologies and estimates of risk/benefit. Guideline complexity and lack of consensus on VTE risk assessment contribute to an implementation gap. Methods to estimate prophylaxis benefit have significant limitations because major trials included mostly screening-detected events. AT9 relies on a single Italian cohort study to conclude that those with a Padua score ≥4 have a very high VTE risk, whereas patients with a score <4 (60% of patients) have a very small risk. However, the cohort population has less comorbidity than US inpatients, and over 1% of patients with a score of 3 suffered pulmonary emboli. The ACP guideline does not endorse any risk-assessment model. AT9 includes the Padua model and Caprini point-based system for nonsurgical inpatients and surgical inpatients, respectively, but there is no evidence they are more effective than simpler risk-assessment models. New VTE prevention guidelines provide varied guidance on important issues including risk assessment. If Padua is used, a threshold of 3, as well as 4, should be considered. Simpler VTE risk-assessment models may be superior to complicated point-based models in environments without sophisticated clinical decision support. © 2013 Society of Hospital Medicine.

  3. Using integrated environmental modeling to automate a process-based Quantitative Microbial Risk Assessment

    USDA-ARS?s Scientific Manuscript database

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, and human health effect...

  4. Fish Consumption Advisories: Toward a Unified, Scientifically Credible Approach

    EPA Science Inventory

    A model is proposed for fish consumption advisories based on consensus-derived risk assessment values for common contaminants in fish and the latest risk assessment methods. The model accounts in part for the expected toxicity of mixtures of chemicals, the underlying uncertainties...

  5. Theoretical framework to study exercise motivation for breast cancer risk reduction.

    PubMed

    Wood, Maureen E

    2008-01-01

    To identify an appropriate theoretical framework to study exercise motivation for breast cancer risk reduction among high-risk women. An extensive review of the literature was conducted to gather relevant information pertaining to the Health Promotion Model, self-determination theory, social cognitive theory, Health Belief Model, Transtheoretical Model, theory of planned behavior, and protection motivation theory. An iterative approach was used to summarize the literature related to exercise motivation within each theoretical framework. Protection motivation theory could be used to examine the effects of perceived risk and self-efficacy in motivating women to exercise to facilitate health-related behavioral change. Evidence-based research within a chosen theoretical model can aid practitioners when making practical recommendations to reduce breast cancer risk.

  6. Economic assessment of home-based COPD management programs.

    PubMed

    Liu, Sheena Xin; Lee, Michael C; Atakhorrami, Maryam; Tatousek, Jan; McCormack, Meredith; Yung, Rex; Hart, Nicholas; White, David P

    2013-12-01

    Home-based exacerbation management programs have been proposed as an approach to reducing the clinical and financial burden of COPD. We demonstrate a framework to evaluate such programs in order to guide program design and performance decisions towards optimizing cost and clinical outcomes. This study models the impact of hypothetical exacerbation management programs through probabilistic Markov simulations. Patients were stratified by risk using exacerbation rates from the ECLIPSE study and expert opinion. Three scenarios were modeled, using base, worst and best case parameters to suggest potential telehealth program performance. In these scenarios, acute exacerbations could be detected early, with sensitivity and specificity ranging from 60-90%. Detected acute exacerbations could be diverted to either a sub-acute pathway (12.5-50% probability), thus entirely avoiding hospitalization, or a lower cost pathway through length-of-stay reduction (14-28% reduction). For a cohort of patients without prior hospitalization, the base case telehealth scenario results in a cumulative per-patient lifetime savings of $2.9K over ≈ 12 years. For a higher-risk cohort of patients with a prior admission and 1 to 2 acute exacerbations per year, a cumulative $16K per patient was saved during the remaining ≈ 3 life-years. Acceptable prices for home-based exacerbation detection testing were highly dependent on patient risk and scenario, but ranged from $290-$1263 per month for the highest risk groups. These results suggest the economic viability of exacerbation management programs and highlight the importance of risk stratification in such programs. The presented model can further be adapted to model specific programs as trial data becomes available.
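    A probabilistic Markov cohort model of the kind described can be sketched as follows; the states, transition probabilities and costs are invented for illustration and much coarser than the study's model:

    ```python
    import numpy as np

    # Three-state Markov cohort (stable, exacerbation, dead); one row of
    # transition probabilities per current state, dead is absorbing.
    P = np.array([[0.90, 0.08, 0.02],   # from stable
                  [0.60, 0.30, 0.10],   # from exacerbation
                  [0.00, 0.00, 1.00]])  # dead
    cost = np.array([100.0, 5000.0, 0.0])   # per-cycle cost by state

    # Hypothetical telehealth arm: some exacerbations are diverted to a
    # cheaper sub-acute pathway (lower exacerbation cost and persistence),
    # at the price of a per-cycle monitoring cost while stable.
    P_tele = np.array([[0.90, 0.08, 0.02],
                       [0.65, 0.26, 0.09],
                       [0.00, 0.00, 1.00]])
    cost_tele = np.array([120.0, 3500.0, 0.0])

    def cohort_cost(P, cost, cycles=40):
        state = np.array([1.0, 0.0, 0.0])   # whole cohort starts stable
        total = 0.0
        for _ in range(cycles):
            total += state @ cost           # expected cost this cycle
            state = state @ P               # advance the cohort
        return total

    savings = cohort_cost(P, cost) - cohort_cost(P_tele, cost_tele)
    ```

    Sweeping the diversion probabilities and monitoring cost over ranges, as the study does across scenarios, turns this into a break-even analysis for acceptable testing prices.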

  7. Technical Evaluation of the NASA Model for Cancer Risk to Astronauts Due to Space Radiation

    NASA Technical Reports Server (NTRS)

    2012-01-01

    At the request of NASA, the National Research Council's (NRC's) Committee for Evaluation of Space Radiation Cancer Risk Model reviewed a number of changes that NASA proposes to make to its model for estimating the risk of radiation-induced cancer in astronauts. The NASA model in current use was last updated in 2005, and the proposed model would incorporate recent research directed at improving the quantification and understanding of the health risks posed by the space radiation environment. NASA's proposed model is defined by the 2011 NASA report Space Radiation Cancer Risk Projections and Uncertainties--2010. The committee's evaluation is based primarily on this source, which is referred to hereafter as the 2011 NASA report, with mention of specific sections or tables. The overall process for estimating cancer risks due to low linear energy transfer (LET) radiation exposure has been fully described in reports by a number of organizations. The approaches described in the reports from all of these expert groups are quite similar. NASA's proposed space radiation cancer risk assessment model calculates, as its main output, age- and gender-specific risk of exposure-induced death (REID) for use in the estimation of mission and astronaut-specific cancer risk. The model also calculates the associated uncertainties in REID. The general approach for estimating risk and uncertainty in the proposed model is broadly similar to that used for the current (2005) NASA model and is based on recommendations by the National Council on Radiation Protection and Measurements. 
However, NASA's proposed model has significant changes with respect to the following: the integration of new findings and methods into its components by taking into account newer epidemiological data and analyses, new radiobiological data indicating that quality factors differ for leukemia and solid cancers, an improved method for specifying quality factors in terms of radiation track structure concepts as opposed to the previous approach based on linear energy transfer, the development of a new solar particle event (SPE) model, and the updates to galactic cosmic ray (GCR) and shielding transport models. The newer epidemiological information includes updates to the cancer incidence rates from the life span study (LSS) of the Japanese atomic bomb survivors, transferred to the U.S. population and converted to cancer mortality rates from U.S. population statistics. In addition, the proposed model provides an alternative analysis applicable to lifetime never-smokers (NSs). Details of the uncertainty analysis in the model have also been updated and revised. NASA's proposed model and associated uncertainties are complex in their formulation and as such require a very clear and precise set of descriptions. The committee found the 2011 NASA report challenging to review largely because of the lack of clarity in the model descriptions and derivation of the various parameters used. The committee requested some clarifications from NASA throughout its review and was able to resolve many, but not all, of the ambiguities in the written description.

  8. Value-at-risk estimation with wavelet-based extreme value theory: Evidence from emerging markets

    NASA Astrophysics Data System (ADS)

    Cifter, Atilla

    2011-06-01

    This paper introduces wavelet-based extreme value theory (EVT) for univariate value-at-risk estimation. Wavelets and EVT are combined for volatility forecasting to estimate a hybrid model. In the first stage, wavelets are used as a threshold in the generalized Pareto distribution, and in the second stage, EVT is applied with a wavelet-based threshold. This new model is applied to two major emerging stock markets: the Istanbul Stock Exchange (ISE) and the Budapest Stock Exchange (BUX). The relative performance of wavelet-based EVT is benchmarked against the Riskmetrics-EWMA, ARMA-GARCH, generalized Pareto distribution, and conditional generalized Pareto distribution models. The empirical results show that wavelet-based extreme value theory increases the predictive performance of financial forecasting according to the number of violations and tail-loss tests. The superior forecasting performance of the wavelet-based EVT model is also consistent with Basel II requirements, and this new model can be used by financial institutions as well.
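    A peaks-over-threshold VaR in this spirit can be sketched with an empirical-quantile threshold (not the paper's wavelet-based threshold) and a method-of-moments generalized Pareto fit, on simulated heavy-tailed losses:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Simulated daily losses with a heavy tail (Student-t draws).
    losses = rng.standard_t(df=6, size=20_000)

    u = np.quantile(losses, 0.95)              # threshold (empirical quantile)
    excess = losses[losses > u] - u
    m, v = excess.mean(), excess.var()

    # GPD method-of-moments fit: m = beta/(1-xi), v = beta^2/((1-xi)^2 (1-2 xi))
    xi = 0.5 * (1.0 - m**2 / v)                # shape
    beta = 0.5 * m * (m**2 / v + 1.0)          # scale

    # Tail quantile (VaR at level p) from the fitted GPD tail.
    p = 0.99
    n, n_u = len(losses), len(excess)
    var_p = u + (beta / xi) * (((n / n_u) * (1.0 - p)) ** (-xi) - 1.0)
    coverage = np.mean(losses < var_p)         # should sit near p
    ```

    The number-of-violations backtest mentioned in the abstract is exactly this coverage check applied out of sample: the fraction of losses below the VaR estimate should match the nominal level.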

  9. Risk assessment of consuming agricultural products irrigated with reclaimed wastewater: An exposure model

    NASA Astrophysics Data System (ADS)

    van Ginneken, Meike; Oron, Gideon

    2000-09-01

    This study assesses health risks to consumers due to the use of agricultural products irrigated with reclaimed wastewater. The analysis is based on a definition of an exposure model which takes into account several parameters: (1) the quality of the applied wastewater, (2) the irrigation method, (3) the elapsed times between irrigation, harvest, and product consumption, and (4) the consumers' habits. The exposure model is used for numerical simulation of human consumers' risks using the Monte Carlo simulation method. The results of the numerical simulation show large deviations, probably caused by uncertainty (imprecision in the quality of input data) and variability due to diversity among populations. There is a 10-order-of-magnitude difference in the risk of infection between the different exposure scenarios with the same water quality. This variation indicates the need for setting risk-based criteria for wastewater reclamation rather than single water quality guidelines. Extra data are required to decrease uncertainty in the risk assessment. Future research needs to include definition of acceptable risk criteria, more accurate dose-response modeling, information regarding pathogen survival in treated wastewater, additional data related to the passage of pathogens into and in the plants during irrigation, and information regarding the behavior patterns of the community of human consumers.
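    The Monte Carlo exposure chain can be sketched as follows; every parameter (water quality, water-to-crop transfer, die-off rate, dose-response coefficient) is an invented placeholder, not a value from the study:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Pathogens deposited on the crop at irrigation decay until
    # consumption; an exponential dose-response model converts the
    # ingested dose into a per-serving infection probability.
    n_sim = 100_000
    c_water = rng.lognormal(mean=np.log(10.0), sigma=1.0, size=n_sim)  # org/L
    transfer = rng.uniform(1e-4, 1e-3, size=n_sim)  # L water retained per g crop
    days = rng.uniform(1.0, 14.0, size=n_sim)       # irrigation to consumption
    decay = 0.5                                     # per-day die-off rate
    serving = 100.0                                 # g consumed

    dose = c_water * transfer * serving * np.exp(-decay * days)
    r = 0.02                                        # dose-response parameter
    p_inf = 1.0 - np.exp(-r * dose)                 # per-serving infection risk

    median_risk = np.median(p_inf)
    p99_risk = np.quantile(p_inf, 0.99)
    ```

    Even this crude chain reproduces the qualitative finding: the spread between typical and extreme scenarios spans orders of magnitude, driven mostly by the elapsed-time and water-quality inputs.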

  10. Multiple imputation for handling missing outcome data when estimating the relative risk.

    PubMed

    Sullivan, Thomas R; Lee, Katherine J; Ryan, Philip; Salter, Amy B

    2017-09-06

    Multiple imputation is a popular approach to handling missing data in medical research, yet little is known about its applicability for estimating the relative risk. Standard methods for imputing incomplete binary outcomes involve logistic regression or an assumption of multivariate normality, whereas relative risks are typically estimated using log binomial models. It is unclear whether misspecification of the imputation model in this setting could lead to biased parameter estimates. Using simulated data, we evaluated the performance of multiple imputation for handling missing data prior to estimating adjusted relative risks from a correctly specified multivariable log binomial model. We considered an arbitrary pattern of missing data in both outcome and exposure variables, with missing data induced under missing at random mechanisms. Focusing on standard model-based methods of multiple imputation, missing data were imputed using multivariate normal imputation or fully conditional specification with a logistic imputation model for the outcome. Multivariate normal imputation performed poorly in the simulation study, consistently producing estimates of the relative risk that were biased towards the null. Despite outperforming multivariate normal imputation, fully conditional specification also produced somewhat biased estimates, with greater bias observed for higher outcome prevalences and larger relative risks. Deleting imputed outcomes from analysis datasets did not improve the performance of fully conditional specification. Both multivariate normal imputation and fully conditional specification produced biased estimates of the relative risk, presumably since both use a misspecified imputation model. Based on simulation results, we recommend researchers use fully conditional specification rather than multivariate normal imputation and retain imputed outcomes in the analysis when estimating relative risks. 
However, fully conditional specification is not without its shortcomings, and further research is needed to identify optimal approaches for relative risk estimation within the multiple imputation framework.
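Once an estimate has been obtained from each imputed dataset, the per-imputation results are pooled with Rubin's rules, which is the standard combining step in any multiple imputation analysis. A minimal sketch with made-up per-imputation log relative risks and variances:

```python
import numpy as np

# Rubin's rules for pooling an estimate (e.g. a log relative risk) across
# m imputed datasets. The estimates and variances below are illustrative.
log_rr = np.array([0.42, 0.39, 0.47, 0.44, 0.40])    # estimates, m = 5 imputations
var = np.array([0.010, 0.012, 0.011, 0.009, 0.010])  # their squared standard errors
m = len(log_rr)

pooled = log_rr.mean()            # pooled point estimate
w = var.mean()                    # within-imputation variance
b = log_rr.var(ddof=1)           # between-imputation variance
total_var = w + (1 + 1 / m) * b  # Rubin's total variance
se = np.sqrt(total_var)
print(f"pooled log RR = {pooled:.3f} (SE {se:.3f}), RR = {np.exp(pooled):.2f}")
```

The between-imputation component inflates the standard error to reflect missing-data uncertainty, which is why the total variance always exceeds the simple average of the per-imputation variances.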

  11. Review of NASA approach to space radiation risk assessments for Mars exploration.

    PubMed

    Cucinotta, Francis A

    2015-02-01

    Long duration space missions present unique radiation protection challenges due to the complexity of the space radiation environment, which includes high charge and energy particles and other highly ionizing radiation such as neutrons. Based on a recommendation by the National Council on Radiation Protection and Measurements, a 3% lifetime risk of exposure-induced death for cancer has been used as a basis for risk limitation by the National Aeronautics and Space Administration (NASA) for low-Earth orbit missions. NASA has developed a risk-based approach to radiation exposure limits that accounts for individual factors (age, gender, and smoking history) and assesses the uncertainties in risk estimates. New radiation quality factors with associated probability distribution functions to represent the quality factor's uncertainty have been developed based on track structure models and recent radiobiology data for high charge and energy particles. The current radiation dose limits are reviewed for spaceflight and the various qualitative and quantitative uncertainties that impact the risk of exposure-induced death estimates using the NASA Space Cancer Risk (NSCR) model. NSCR estimates of the number of "safe days" in deep space to be within exposure limits and risk estimates for a Mars exploration mission are described.

  12. Assessment of a remote sensing-based model for predicting malaria transmission risk in villages of Chiapas, Mexico

    NASA Technical Reports Server (NTRS)

    Beck, L. R.; Rodriguez, M. H.; Dister, S. W.; Rodriguez, A. D.; Washino, R. K.; Roberts, D. R.; Spanner, M. A.

    1997-01-01

    A blind test of two remote sensing-based models for predicting adult populations of Anopheles albimanus in villages, an indicator of malaria transmission risk, was conducted in southern Chiapas, Mexico. One model was developed using a discriminant analysis approach, while the other was based on regression analysis. The models were developed in 1992 for an area around Tapachula, Chiapas, using Landsat Thematic Mapper (TM) satellite data and geographic information system functions. Using two remotely sensed landscape elements, the discriminant model was able to successfully distinguish between villages with high and low An. albimanus abundance with an overall accuracy of 90%. To test the predictive capability of the models, multitemporal TM data were used to generate a landscape map of the Huixtla area, northwest of Tapachula, where the models were used to predict risk for 40 villages. The resulting predictions were not disclosed until the end of the test. Independently, An. albimanus abundance data were collected in the 40 randomly selected villages for which the predictions had been made. These data were subsequently used to assess the models' accuracies. The discriminant model accurately predicted 79% of the high-abundance villages and 50% of the low-abundance villages, for an overall accuracy of 70%. The regression model correctly identified seven of the 10 villages with the highest mosquito abundance. This test demonstrated that remote sensing-based models generated for one area can be used successfully in another, comparable area.

  13. A Risk-based Model Predictive Control Approach to Adaptive Interventions in Behavioral Health

    PubMed Central

    Zafra-Cabeza, Ascensión; Rivera, Daniel E.; Collins, Linda M.; Ridao, Miguel A.; Camacho, Eduardo F.

    2010-01-01

    This paper examines how control engineering and risk management techniques can be applied in the field of behavioral health through their use in the design and implementation of adaptive behavioral interventions. Adaptive interventions are gaining increasing acceptance as a means to improve prevention and treatment of chronic, relapsing disorders, such as abuse of alcohol, tobacco, and other drugs, mental illness, and obesity. A risk-based Model Predictive Control (MPC) algorithm is developed for a hypothetical intervention inspired by Fast Track, a real-life program whose long-term goal is the prevention of conduct disorders in at-risk children. The MPC-based algorithm decides on the appropriate frequency of counselor home visits, mentoring sessions, and the availability of after-school recreation activities by relying on a model that includes identifiable risks, their costs, and the cost/benefit assessment of mitigating actions. MPC is particularly suited for the problem because of its constraint-handling capabilities, and its ability to scale to interventions involving multiple tailoring variables. By systematically accounting for risks and adapting treatment components over time, an MPC approach as described in this paper can increase intervention effectiveness and adherence while reducing waste, resulting in advantages over conventional fixed treatment. A series of simulations are conducted under varying conditions to demonstrate the effectiveness of the algorithm. PMID:21643450
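The receding-horizon idea behind MPC can be illustrated with a toy scalar model: at each step, enumerate admissible input sequences over a short horizon, apply only the first input of the best one, and repeat. The dynamics, cost weights, and discrete intervention menu below are invented for illustration and are not Fast Track's actual tailoring variables or the paper's risk model:

```python
import itertools

# Toy adaptive-intervention setting: x is a "risk" level to be driven to zero,
# u is an intervention intensity chosen from a small discrete menu
# (e.g. number of home visits per period). All numbers are hypothetical.
a, b = 0.9, -0.2            # risk persists (a) and is reduced by dosage (b)
levels = [0, 1, 2, 3]       # admissible intensities (a hard constraint)
horizon = 4
q_cost, r_cost = 1.0, 0.05  # penalty on risk vs. cost of delivering treatment

def mpc_step(x):
    """Pick the first input of the lowest-cost sequence over the horizon."""
    best_u0, best_cost = 0, float("inf")
    for seq in itertools.product(levels, repeat=horizon):
        xk, cost = x, 0.0
        for u in seq:
            xk = a * xk + b * u
            cost += q_cost * xk**2 + r_cost * u**2
        if cost < best_cost:
            best_cost, best_u0 = cost, seq[0]
    return best_u0

x = 5.0
for t in range(12):
    u = mpc_step(x)
    x = a * x + b * u
print(f"risk after 12 steps: {x:.3f}")
```

Brute-force enumeration only works for tiny discrete menus like this one; the constraint handling (inputs restricted to the menu) is what the abstract highlights as MPC's strength.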

  14. Development and validation of risk models to predict outcomes following in-hospital cardiac arrest attended by a hospital-based resuscitation team.

    PubMed

    Harrison, David A; Patel, Krishna; Nixon, Edel; Soar, Jasmeet; Smith, Gary B; Gwinnutt, Carl; Nolan, Jerry P; Rowan, Kathryn M

    2014-08-01

    The National Cardiac Arrest Audit (NCAA) is the UK national clinical audit for in-hospital cardiac arrest. To make fair comparisons among health care providers, clinical indicators require case mix adjustment using a validated risk model. The aim of this study was to develop and validate risk models to predict outcomes following in-hospital cardiac arrest attended by a hospital-based resuscitation team in UK hospitals. Risk models for two outcomes-return of spontaneous circulation (ROSC) for greater than 20min and survival to hospital discharge-were developed and validated using data for in-hospital cardiac arrests between April 2011 and March 2013. For each outcome, a full model was fitted and then simplified by testing for non-linearity, combining categories and stepwise reduction. Finally, interactions between predictors were considered. Models were assessed for discrimination, calibration and accuracy. 22,479 in-hospital cardiac arrests in 143 hospitals were included (14,688 development, 7791 validation). The final risk model for ROSC>20min included: age (non-linear), sex, prior length of stay in hospital, reason for attendance, location of arrest, presenting rhythm, and interactions between presenting rhythm and location of arrest. The model for hospital survival included the same predictors, excluding sex. Both models had acceptable performance across the range of measures, although discrimination for hospital mortality exceeded that for ROSC>20min (c index 0.81 versus 0.72). Validated risk models for ROSC>20min and hospital survival following in-hospital cardiac arrest have been developed. These models will strengthen comparative reporting in NCAA and support local quality improvement. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  15. Development and validation of risk models to predict outcomes following in-hospital cardiac arrest attended by a hospital-based resuscitation team☆

    PubMed Central

    Harrison, David A.; Patel, Krishna; Nixon, Edel; Soar, Jasmeet; Smith, Gary B.; Gwinnutt, Carl; Nolan, Jerry P.; Rowan, Kathryn M.

    2014-01-01

    Aim The National Cardiac Arrest Audit (NCAA) is the UK national clinical audit for in-hospital cardiac arrest. To make fair comparisons among health care providers, clinical indicators require case mix adjustment using a validated risk model. The aim of this study was to develop and validate risk models to predict outcomes following in-hospital cardiac arrest attended by a hospital-based resuscitation team in UK hospitals. Methods Risk models for two outcomes—return of spontaneous circulation (ROSC) for greater than 20 min and survival to hospital discharge—were developed and validated using data for in-hospital cardiac arrests between April 2011 and March 2013. For each outcome, a full model was fitted and then simplified by testing for non-linearity, combining categories and stepwise reduction. Finally, interactions between predictors were considered. Models were assessed for discrimination, calibration and accuracy. Results 22,479 in-hospital cardiac arrests in 143 hospitals were included (14,688 development, 7791 validation). The final risk model for ROSC > 20 min included: age (non-linear), sex, prior length of stay in hospital, reason for attendance, location of arrest, presenting rhythm, and interactions between presenting rhythm and location of arrest. The model for hospital survival included the same predictors, excluding sex. Both models had acceptable performance across the range of measures, although discrimination for hospital mortality exceeded that for ROSC > 20 min (c index 0.81 versus 0.72). Conclusions Validated risk models for ROSC > 20 min and hospital survival following in-hospital cardiac arrest have been developed. These models will strengthen comparative reporting in NCAA and support local quality improvement. PMID:24830872

  16. Risk Evaluation of Railway Coal Transportation Network Based on Multi Level Grey Evaluation Model

    NASA Astrophysics Data System (ADS)

    Niu, Wei; Wang, Xifu

    2018-01-01

    Rail is currently the most important mode of coal transportation, and China’s railway coal transportation network has become increasingly well developed, but problems remain, including insufficient capacity and lines operating close to saturation. In this paper, risk assessment theory, the analytic hierarchy process, and a multi-level grey evaluation model are applied to the risk evaluation of the coal railway transportation network in China. Based on an example analysis of the Shanxi railway coal transportation network, recommendations are made to improve its internal structure and market competitiveness.
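The analytic hierarchy process step of such a model derives criteria weights from a pairwise comparison matrix via its principal eigenvector, with a consistency check on the judgements. The 3x3 judgement matrix and the criteria it compares below are hypothetical, chosen only to show the mechanics:

```python
import numpy as np

# Hypothetical pairwise comparison of three risk criteria
# (e.g. capacity shortfall, line saturation, safety record).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)        # index of the principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                       # normalized AHP weights

# Consistency ratio: CI = (lambda_max - n)/(n - 1), RI = 0.58 for n = 3.
ci = (eigvals[k].real - len(A)) / (len(A) - 1)
cr = ci / 0.58
print("weights:", np.round(w, 3), f"CR = {cr:.3f}")
```

A consistency ratio below 0.1 is conventionally taken to mean the pairwise judgements are usable; the resulting weights would then feed the grey evaluation of each criterion.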

  17. Default contagion risks in Russian interbank market

    NASA Astrophysics Data System (ADS)

    Leonidov, A. V.; Rumyantsev, E. L.

    2016-06-01

    Systemic risks of default contagion in the Russian interbank market are investigated. The analysis is based on the bow-tie structure of the weighted directed graph describing interbank loans. A probabilistic model of interbank contagion is developed from empirical data; it explicitly takes into account the empirical bow-tie structure reflecting the functionality of nodes (borrowers, lenders, or both simultaneously), as well as the degree distributions and disassortativity of the interbank network under consideration. The characteristics of contagion-related systemic risk calculated with this model are shown to be in agreement with those of explicit stress tests.
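A stripped-down deterministic version of default contagion on a directed, weighted exposure graph conveys the cascade mechanism: a bank fails once its losses on loans to defaulted counterparties exceed its capital, and failures propagate in rounds. The banks, exposures, and capital buffers below are entirely fictitious; the paper's probabilistic bow-tie model is considerably richer:

```python
# edges[(i, j)] = w means bank i has lent w to bank j,
# so i loses w if j defaults. All figures are hypothetical.
edges = {("A", "B"): 40, ("A", "C"): 10, ("B", "C"): 30,
         ("C", "D"): 25, ("B", "D"): 15}
capital = {"A": 45, "B": 35, "C": 20, "D": 10}

def cascade(initial_defaults):
    """Propagate defaults: a bank fails when its losses exceed its capital."""
    defaulted = set(initial_defaults)
    changed = True
    while changed:
        changed = False
        for bank, cap in capital.items():
            if bank in defaulted:
                continue
            loss = sum(w for (lender, borrower), w in edges.items()
                       if lender == bank and borrower in defaulted)
            if loss > cap:
                defaulted.add(bank)
                changed = True
    return defaulted

print(cascade({"D"}))  # which banks fail if D defaults first?
print(cascade({"C"}))  # contagion depends strongly on where the shock starts
```

In this toy network the default of D wipes out every bank through successive rounds, while the default of C stays contained, illustrating how node position (here, the bow-tie role) shapes systemic impact.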

  18. [Physically-based model of pesticide application for risk assessment of agricultural workers].

    PubMed

    Rubino, F M; Mandic-Rajcevic, S; Vianello, G; Brambilla, G; Colosio, C

    2012-01-01

    Due to their unavoidable toxicity to non-target organisms, including man, the use of Plant Protection Products requires a thorough risk assessment to rationally advise safe use procedures and protective equipment for farmers. Most information on active substances and formulations, such as dermal absorption rates and exposure limits, is available in the large body of regulatory data. Physically-based computational models can be used to forecast risk in real-life conditions (preventive assessment by 'exposure profiles'), to drive the cost-effective use of products and equipment, and to understand the sources of unexpected exposure.

  19. XplOit: An Ontology-Based Data Integration Platform Supporting the Development of Predictive Models for Personalized Medicine.

    PubMed

    Weiler, Gabriele; Schwarz, Ulf; Rauch, Jochen; Rohm, Kerstin; Lehr, Thorsten; Theobald, Stefan; Kiefer, Stephan; Götz, Katharina; Och, Katharina; Pfeifer, Nico; Handl, Lisa; Smola, Sigrun; Ihle, Matthias; Turki, Amin T; Beelen, Dietrich W; Rissland, Jürgen; Bittenbring, Jörg; Graf, Norbert

    2018-01-01

    Predictive models can support physicians in tailoring interventions and treatments to their individual patients based on their predicted response and risk of disease, and in this way help put personalized medicine into practice. In allogeneic stem cell transplantation, risk assessment needs to be enhanced in order to respond to emerging viral infections and transplantation reactions. However, to develop predictive models it is necessary to harmonize and integrate large amounts of heterogeneous medical data that are stored in different health information systems. Driven by the demand for predictive instruments in allogeneic stem cell transplantation, we present in this paper an ontology-based platform that supports data owners and model developers in sharing and harmonizing their data for model development while respecting data privacy.

  20. Measuring daily Value-at-Risk of SSEC index: A new approach based on multifractal analysis and extreme value theory

    NASA Astrophysics Data System (ADS)

    Wei, Yu; Chen, Wang; Lin, Yu

    2013-05-01

    Recent studies in the econophysics literature reveal that price variability has fractal and multifractal characteristics not only in developed financial markets, but also in emerging markets. Taking high-frequency intraday quotes of the Shanghai Stock Exchange Component (SSEC) Index as an example, this paper proposes a new method to measure daily Value-at-Risk (VaR) by combining the newly introduced multifractal volatility (MFV) model and the extreme value theory (EVT) method. Two VaR backtesting techniques are then employed to compare the performance of the model with that of a group of linear and nonlinear generalized autoregressive conditional heteroskedasticity (GARCH) models. The empirical results show the multifractal nature of price volatility in the Chinese stock market. VaR measures based on the multifractal volatility model and EVT method outperform many GARCH-type models at high-risk levels.
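A standard violation-count backtest of the kind alluded to here is Kupiec's proportion-of-failures test, which asks whether the observed number of VaR violations is consistent with the nominal coverage. A sketch (the abstract does not name its backtests, and the 250-day/9-violation figures below are illustrative):

```python
import numpy as np
from scipy.stats import chi2

def kupiec_pof(n_obs, n_violations, p):
    """Kupiec proportion-of-failures likelihood-ratio test for VaR backtesting."""
    x, n = n_violations, n_obs
    pi_hat = x / n
    # Binomial log-likelihoods under the nominal rate p and the observed rate.
    ll_null = (n - x) * np.log(1 - p) + x * np.log(p)
    ll_alt = (n - x) * np.log(1 - pi_hat) + x * np.log(pi_hat)
    lr = -2 * (ll_null - ll_alt)
    return lr, chi2.sf(lr, df=1)   # LR statistic and its p-value

# 250 trading days, 9 violations of a 99% VaR (about 2.5 expected):
lr, pval = kupiec_pof(250, 9, 0.01)
print(f"LR = {lr:.2f}, p-value = {pval:.4f}")
```

A p-value this small rejects correct unconditional coverage, i.e. the model understates tail risk; the test as written assumes at least one violation (x > 0).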

  1. Simulation Modeling of Resilience Assessment in Indonesian Fertiliser Industry Supply Networks

    NASA Astrophysics Data System (ADS)

    Utami, I. D.; Holt, R. J.; McKay, A.

    2018-01-01

    Supply network resilience is a significant aspect of the performance of the Indonesian fertiliser industry. Decision makers use risk assessment and port management reports to evaluate the availability of infrastructure. An opportunity was identified to incorporate both types of data into an approach for the measurement of resilience. A framework covering both social and technical factors, based on a synthesis of the literature and interviews with industry practitioners, is introduced. A simulation model was then built to allow managers to explore implications for resilience and predict levels of risk in different scenarios. Results of interviews with respondents from the Indonesian fertiliser industry indicated that the simulation model could be valuable in the assessment. This paper provides details of the simulation model for decision makers to explore levels of risk in supply networks. For practitioners, the model could be used by government to assess the current condition of supply networks in Indonesian industries. For academia, the approach provides a new application of agent-based models in research on supply network resilience and presents a real example of how agent-based modeling could be used to support the assessment approach.

  2. Multivariate logistic regression analysis of postoperative complications and risk model establishment of gastrectomy for gastric cancer: A single-center cohort report.

    PubMed

    Zhou, Jinzhe; Zhou, Yanbing; Cao, Shougen; Li, Shikuan; Wang, Hao; Niu, Zhaojian; Chen, Dong; Wang, Dongsheng; Lv, Liang; Zhang, Jian; Li, Yu; Jiao, Xuelong; Tan, Xiaojie; Zhang, Jianli; Wang, Haibo; Zhang, Bingyuan; Lu, Yun; Sun, Zhenqing

    2016-01-01

    Reporting of surgical complications is common, but few reports provide information about their severity or estimate risk factors for complications, and those that do often lack specificity. We retrospectively analyzed data on 2795 gastric cancer patients who underwent surgery at the Affiliated Hospital of Qingdao University between June 2007 and June 2012, and established a multivariate logistic regression model to identify risk factors for postoperative complications graded according to the Clavien-Dindo classification system. Twenty-four out of 86 variables were statistically significant in univariate logistic regression analysis; the 11 variables significant in multivariate analysis were employed to produce the risk model. Liver cirrhosis, diabetes mellitus, Child classification, invasion of neighboring organs, combined resection, intraoperative transfusion, Billroth II reconstruction, malnutrition, surgical volume of surgeons, operating time and age were independent risk factors for postoperative complications after gastrectomy. Based on the logistic regression equation p = exp(ΣβiXi)/(1 + exp(ΣβiXi)), a multivariate logistic regression model that calculates the risk of postoperative morbidity was developed: p = 1/(1 + e^(4.810 - 1.287X1 - 0.504X2 - 0.500X3 - 0.474X4 - 0.405X5 - 0.318X6 - 0.316X7 - 0.305X8 - 0.278X9 - 0.255X10 - 0.138X11)). The accuracy, sensitivity and specificity of the model in predicting postoperative complications were 86.7%, 76.2% and 88.6%, respectively. This risk model, based on the Clavien-Dindo grading of complication severity and logistic regression analysis, can predict severe morbidity specific to an individual patient's risk factors, estimate patients' risks and benefits of gastric surgery as an accurate decision-making tool, and may serve as a template for the development of risk models for other surgical groups.
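Plugging the coefficients from the abstract's fitted equation into the logistic form gives a usable risk calculator. Note one assumption made here for illustration: every Xi is treated as a 0/1 indicator, since the abstract does not state each variable's coding (age and operating time, for instance, are presumably coded or categorized):

```python
import math

# Intercept and coefficients as they appear in the abstract's equation:
# p = 1 / (1 + e^(4.810 - sum(b_i * X_i))), X1..X11 in the order the
# risk factors are listed (cirrhosis, diabetes, ..., age).
intercept = 4.810
coefs = [1.287, 0.504, 0.500, 0.474, 0.405, 0.318,
         0.316, 0.305, 0.278, 0.255, 0.138]

def complication_risk(x):
    """Predicted probability of postoperative complications for one patient."""
    assert len(x) == len(coefs)
    linpred = sum(b * xi for b, xi in zip(coefs, x))
    return 1.0 / (1.0 + math.exp(intercept - linpred))

low = complication_risk([0] * 11)   # no risk factors present
high = complication_risk([1] * 11)  # all risk factors present
print(f"risk: {low:.3f} (no factors) to {high:.3f} (all factors)")
```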

  3. Construction of a model predicting the risk of tube feeding intolerance after gastrectomy for gastric cancer based on 225 cases from a single Chinese center

    PubMed Central

    Xiaoyong, Wu; Xuzhao, Li; Deliang, Yu; Pengfei, Yu; Zhenning, Hang; Bin, Bai; zhengyan, Li; Fangning, Pang; Shiqi, Wang; Qingchuan, Zhao

    2017-01-01

    Identifying patients at high risk of tube feeding intolerance (TFI) after gastric cancer surgery may prevent the occurrence of TFI; however, a predictive model is lacking. We therefore analyzed the incidence of TFI and its associated risk factors after gastric cancer surgery in 225 gastric cancer patients divided into without-TFI (n = 114) and with-TFI (n = 111) groups. A total of 49.3% of patients experienced TFI after gastric cancer surgery. Multivariate analysis identified a history of functional constipation (FC), a preoperative American Society of Anesthesiologists (ASA) score of III, a high pain score at 6 hours postoperation, and a high white blood cell (WBC) count on the first day after surgery as independent risk factors for TFI. The area under the curve (AUC) was 0.756, with an optimal cut-off value of 0.5410. In order to identify patients at high risk of TFI after gastric cancer surgery, we constructed a predictive nomogram model based on the selected independent risk factors to indicate the probability of developing TFI. When the nomogram is used for screening, a predicted probability > 0.5410 indicates a high-risk patient, with a 70.1% likelihood of developing TFI. These high-risk individuals should take preventive measures against TFI before enteral nutrition feeding. PMID:29245951

  4. An innovative expression model of human health risk based on the quantitative analysis of soil metals sources contribution in different spatial scales.

    PubMed

    Zhang, Yimei; Li, Shuai; Wang, Fei; Chen, Zhuang; Chen, Jie; Wang, Liqun

    2018-09-01

    Toxicity of heavy metals from industrialization poses critical concern, and analysis of sources associated with potential human health risks is of unique significance. Assessing the human health risk of pollution sources (factored health risk) concurrently in the whole region and its sub regions can provide more instructive information to protect specific potential victims. In this research, we establish a new expression model of human health risk based on quantitative analysis of source contributions at different spatial scales. The larger scale grids and their spatial codes are used to initially identify the level of pollution risk, the type of pollution source and the sensitive population at high risk. The smaller scale grids and their spatial codes are used to identify the contribution of various pollution sources to each sub region (larger grid) and to assess the health risks posed by each source for each sub region. The results of the case study show that, for children (a sensitive population, with school and residential areas as their major regions of activity), the major pollution sources are the abandoned lead-acid battery plant (ALP), traffic emissions and agricultural activity. The models and results of this research provide effective spatial information and a useful approach for quantifying the hazards of source categories and human health risks at complex industrial systems in the future. Copyright © 2018 Elsevier Ltd. All rights reserved.

  5. Model-Based Policymaking: A Framework to Promote Ethical "Good Practice" in Mathematical Modeling for Public Health Policymaking.

    PubMed

    Boden, Lisa A; McKendrick, Iain J

    2017-01-01

    Mathematical models are increasingly relied upon as decision support tools, which estimate risks and generate recommendations to underpin public health policies. However, there are no formal agreements about what constitutes professional competencies or duties in mathematical modeling for public health. In this article, we propose a framework to evaluate whether mathematical models that assess human and animal disease risks and control strategies meet standards consistent with ethical "good practice" and are thus "fit for purpose" as evidence in support of policy. This framework is derived from principles of biomedical ethics: independence, transparency (autonomy), beneficence/non-maleficence, and justice. We identify ethical risks associated with model development and implementation and consider the extent to which scientists are accountable for the translation and communication of model results to policymakers so that the strengths and weaknesses of the scientific evidence base and any socioeconomic and ethical impacts of biased or uncertain predictions are clearly understood. We propose principles to operationalize a framework for ethically sound model development and risk communication between scientists and policymakers. These include the creation of science-policy partnerships to mutually define policy questions and communicate results; development of harmonized international standards for model development; and data stewardship and improvement of the traceability and transparency of models via a searchable archive of policy-relevant models. Finally, we suggest that bespoke ethical advisory groups, with relevant expertise and access to these resources, would be beneficial as a bridge between science and policy, advising modelers of potential ethical risks and providing overview of the translation of modeling advice into policy.

  6. Using Evidence-Based Decision Trees Instead of Formulas to Identify At-Risk Readers. REL 2014-036

    ERIC Educational Resources Information Center

    Koon, Sharon; Petscher, Yaacov; Foorman, Barbara R.

    2014-01-01

    This study examines whether the classification and regression tree (CART) model improves the early identification of students at risk for reading comprehension difficulties compared with the more difficult to interpret logistic regression model. CART is a type of predictive modeling that relies on nonparametric techniques. It presents results in…

  7. Predicting the Risk of Attrition for Undergraduate Students with Time Based Modelling

    ERIC Educational Resources Information Center

    Chai, Kevin E. K.; Gibson, David

    2015-01-01

    Improving student retention is an important and challenging problem for universities. This paper reports on the development of a student attrition model for predicting which first year students are most at-risk of leaving at various points in time during their first semester of study. The objective of developing such a model is to assist…

  8. Multi-hazard risk analysis related to hurricanes

    NASA Astrophysics Data System (ADS)

    Lin, Ning

    Hurricanes present major hazards to the United States. Associated with extreme winds, heavy rainfall, and storm surge, landfalling hurricanes often cause enormous structural damage to coastal regions. Hurricane damage risk assessment provides the basis for loss mitigation and related policy-making. Current hurricane risk models, however, often oversimplify the complex processes of hurricane damage. This dissertation aims to improve existing hurricane risk assessment methodology by coherently modeling the spatial-temporal processes of storm landfall, hazards, and damage. Numerical modeling technologies are used to investigate the multiplicity of hazards associated with landfalling hurricanes. The application and effectiveness of current weather forecasting technologies to predict hurricane hazards is investigated. In particular, the Weather Research and Forecasting model (WRF), with Geophysical Fluid Dynamics Laboratory (GFDL)'s hurricane initialization scheme, is applied to the simulation of the wind and rainfall environment during hurricane landfall. The WRF model is further coupled with the ADvanced CIRCulation (ADCIRC) model to simulate storm surge in coastal regions. A case study examines the multiple hazards associated with Hurricane Isabel (2003). Also, a risk assessment methodology is developed to estimate the probability distribution of hurricane storm surge heights along the coast, particularly for data-scarce regions, such as New York City. This methodology makes use of relatively simple models, specifically a statistical/deterministic hurricane model and the Sea, Lake and Overland Surges from Hurricanes (SLOSH) model, to simulate large numbers of synthetic surge events, and conducts statistical analysis. The estimates of hurricane landfall probability and hazards are combined with structural vulnerability models to estimate hurricane damage risk. Wind-induced damage mechanisms are extensively studied. 
An innovative windborne debris risk model is developed based on the theory of Poisson random measure, substantiated by a large amount of empirical data. An advanced vulnerability assessment methodology is then developed, by integrating this debris risk model and a component-based pressure damage model, to predict storm-specific or annual damage to coastal residential neighborhoods. The uniqueness of this vulnerability model lies in its detailed description of the interaction between wind pressure and windborne debris effects over periods of strong winds, which is a major mechanism leading to structural failures during hurricanes.

  9. Relative risk estimation of Chikungunya disease in Malaysia: An analysis based on Poisson-gamma model

    NASA Astrophysics Data System (ADS)

    Samat, N. A.; Ma'arof, S. H. Mohd Imam

    2015-05-01

    Disease mapping is a method to display the geographical distribution of disease occurrence, which generally involves the usage and interpretation of a map to show the incidence of certain diseases. Relative risk (RR) estimation is one of the most important issues in disease mapping. This paper begins by providing a brief overview of Chikungunya disease. This is followed by a review of the classical model used in disease mapping, based on the standardized morbidity ratio (SMR), which we then apply to our Chikungunya data. We then fit an extension of the classical model, which we refer to as a Poisson-Gamma model, in which prior distributions for the relative risks are assumed known. Both sets of results are displayed and compared using maps, and the Poisson-Gamma model reveals a smoother map with fewer extreme values of estimated relative risk. Extensions of this work will consider other methods that are relevant to overcoming the drawbacks of the existing methods, in order to inform and direct government strategy for monitoring and controlling Chikungunya disease.
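The contrast between the raw SMR and the Poisson-Gamma estimate can be shown directly, because the Gamma prior is conjugate to the Poisson likelihood and the posterior mean has a closed form. The district counts, expected cases, and prior parameters below are invented for illustration:

```python
# SMR versus Poisson-Gamma relative-risk estimation for disease mapping.
observed = [12, 0, 3, 25]        # cases per district (hypothetical)
expected = [8.0, 2.0, 4.0, 15.0] # expected cases under the reference rates
a, b = 2.0, 2.0                  # Gamma(a, b) prior for RR, prior mean a/b = 1

smr = [y / e for y, e in zip(observed, expected)]
# Gamma-Poisson conjugacy gives the posterior mean in closed form:
rr = [(y + a) / (e + b) for y, e in zip(observed, expected)]

for i, (s, r) in enumerate(zip(smr, rr)):
    print(f"district {i}: SMR = {s:.2f}, Poisson-Gamma RR = {r:.2f}")
```

The district with zero observed cases gets SMR = 0 but a shrunken RR of 0.5, and the high-count districts are pulled toward the prior mean of 1, which is exactly the smoothing the abstract describes.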

  10. An analysis of security price risk and return among publicly traded pharmacy corporations.

    PubMed

    Gilligan, Adrienne M; Skrepnek, Grant H

    2013-01-01

    Community pharmacies have been subject to intense and increasing competition in the past several decades. To determine the security price risk and rate of return of publicly traded pharmacy corporations present on the major U.S. stock exchanges from 1930 to 2009. The Center for Research in Security Prices (CRSP) database was used to examine monthly security-level stock market prices in this observational retrospective study. The primary outcome of interest was the equity risk premium, with analyses focusing upon financial metrics associated with risk and return based upon modern portfolio theory (MPT), including: abnormal returns (i.e., alpha), volatility (i.e., beta), and percentage of returns explained (i.e., adjusted R(2)). Three equilibrium models were estimated using random-effects generalized least squares (GLS): 1) the Capital Asset Pricing Model (CAPM); 2) the Fama-French Three-Factor Model; and 3) the Carhart Four-Factor Model. Seventy-five companies were examined from 1930 to 2009, with overall adjusted R(2) values ranging from 0.13 with the CAPM to 0.16 with the Four-Factor model. Alpha was not significant within any of the equilibrium models across the entire 80-year time period, though it was found from 1999 to 2009 in the Three- and Four-Factor models to be associated with a large, significant, and negative risk-adjusted abnormal return of -33.84%. Volatility varied across specific time periods based upon the financial model employed. This investigation of risk and return within publicly listed pharmacy corporations from 1930 to 2009 found that substantial losses were incurred particularly from 1999 to 2009, with risk-adjusted security valuations decreasing by one-third. Copyright © 2013 Elsevier Inc. All rights reserved.
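The CAPM time-series regression used as the first equilibrium model can be sketched on simulated monthly data, where the true alpha and beta are chosen by us so the estimates can be sanity-checked. All return parameters below are invented, and plain OLS stands in for the paper's random-effects GLS:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 960  # monthly observations, e.g. 80 years

# Simulated monthly excess returns: market factor plus idiosyncratic noise.
mkt_excess = rng.normal(0.006, 0.045, size=n)
true_alpha, true_beta = -0.002, 1.1
stock_excess = true_alpha + true_beta * mkt_excess + rng.normal(0, 0.05, size=n)

# CAPM time-series regression: R_i - R_f = alpha + beta (R_m - R_f) + eps
X = np.column_stack([np.ones(n), mkt_excess])
(alpha, beta), *_ = np.linalg.lstsq(X, stock_excess, rcond=None)

resid = stock_excess - X @ np.array([alpha, beta])
r2 = 1 - resid.var() / stock_excess.var()
print(f"alpha = {alpha:.4f}, beta = {beta:.2f}, R^2 = {r2:.2f}")
```

A negative fitted alpha is the risk-adjusted abnormal loss the abstract reports for 1999-2009; beta captures the stock's volatility relative to the market.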

  11. Diagnoses-based cost groups in the Dutch risk-equalization model: the effects of including outpatient diagnoses.

    PubMed

    van Kleef, R C; van Vliet, R C J A; van Rooijen, E M

    2014-03-01

    The Dutch basic health-insurance scheme for curative care includes a risk-equalization model (RE-model) to compensate competing health insurers for the predictable high costs of people in poor health. Since 2004, this RE-model has included the so-called Diagnoses-based Cost Groups (DCGs) as a risk adjuster. Until 2013, these DCGs were mainly based on diagnoses from inpatient hospital treatment. This paper examines (1) to what extent the Dutch RE-model can be improved by extending the inpatient DCGs with diagnoses from outpatient hospital treatment and (2) how to treat outpatient diagnoses relative to their corresponding inpatient diagnoses. Based on individual-level administrative cost data, we estimate the Dutch RE-model with three different DCG modalities. Using individual-level survey information from a prior year, we examine the outcomes of these modalities for different groups of people in poor health. We find that extending DCGs with outpatient diagnoses has hardly any effect on the R-squared of the RE-model, but reduces the undercompensation for people with a chronic condition by about 8%. With respect to incentives, it may be preferable to make no distinction between corresponding inpatient and outpatient diagnoses in the DCG classification, although this will be at the expense of the predictive accuracy of the RE-model. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  12. Estimating the Value-at-Risk for some stocks at the capital market in Indonesia based on ARMA-FIGARCH models

    NASA Astrophysics Data System (ADS)

    Sukono; Lesmana, E.; Susanti, D.; Napitupulu, H.; Hidayat, Y.

    2017-11-01

    Value-at-Risk has become a standard measurement that must be carried out by financial institutions for both internal and regulatory purposes. In this paper, the estimation of the Value-at-Risk of some stocks is analyzed using an econometric modeling approach. We assume that stock returns follow a time series model: ARMA models are used to estimate the mean, while FIGARCH models are used to estimate the variance. The mean and variance estimates are then used to estimate the Value-at-Risk. The analysis shows that for the five stocks PRUF, BBRI, MPPA, BMRI, and INDF, the Values-at-Risk obtained are 0.01791, 0.06037, 0.02550, 0.06030, and 0.02585 respectively. Since the Value-at-Risk represents the maximum expected loss of each stock at a 95% confidence level, it can be taken into consideration when determining an investment policy on stocks.
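
    The final step described above, turning a mean forecast and a volatility forecast into a VaR figure, is a one-liner under a normality assumption. The mu and sigma values below are hypothetical forecasts standing in for the paper's ARMA and FIGARCH outputs.

```python
# Parametric 95% VaR from a one-step-ahead mean forecast mu (ARMA) and
# volatility forecast sigma (FIGARCH), assuming normal innovations.
# z = 1.645 is the 95% standard-normal quantile.

def value_at_risk(mu, sigma, z=1.645):
    """95% VaR expressed as a positive loss fraction: -(mu - z * sigma)."""
    return -(mu - z * sigma)

mu, sigma = 0.0005, 0.012          # hypothetical daily forecasts
var_95 = value_at_risk(mu, sigma)  # ~0.0192, i.e. a 1.92% maximum expected loss
```

    The stock-level VaR figures quoted in the abstract are of exactly this form: a positive fraction of portfolio value at the 95% confidence level.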

  13. Systems engineering approach to environmental risk management: A case study of depleted uranium at test area C-64, Eglin Air Force Base, Florida. Master's thesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carter, C.M.; Fortmann, K.M.; Hill, S.W.

    1994-12-01

    Environmental restoration is an area of concern in an environmentally conscious world. Much effort is required to clean up the environment and promote environmentally sound methods for managing current land use. In light of the public consciousness of the latter topic, the United States Air Force must also take an active role in addressing these environmental issues with respect to current and future USAF base land use. This thesis uses the systems engineering technique to assess human health risks and to evaluate risk management options with respect to depleted uranium contamination in the sampled region of Test Area (TA) C-64 at Eglin Air Force Base (AFB). The research combines the disciplines of environmental data collection, DU soil concentration distribution modeling, ground water modeling, particle resuspension modeling, exposure assessment, health hazard assessment, and uncertainty analysis to characterize the test area. These disciplines are required to quantify current and future health risks, as well as to recommend cost effective ways to increase confidence in health risk assessment and remediation options.

  14. Predictive models to assess risk of type 2 diabetes, hypertension and comorbidity: machine-learning algorithms and validation using national health data from Kuwait—a cohort study

    PubMed Central

    Farran, Bassam; Channanath, Arshad Mohamed; Behbehani, Kazem; Thanaraj, Thangavel Alphonse

    2013-01-01

    Objective We build classification models and risk assessment tools for diabetes, hypertension and comorbidity using machine-learning algorithms on data from Kuwait. We model the increased proneness of diabetic patients to develop hypertension and vice versa. We ascertain the importance of ethnicity (natives vs expatriate migrants) and of using regional data in risk assessment. Design Retrospective cohort study. Four machine-learning techniques were used: logistic regression, k-nearest neighbours (k-NN), multifactor dimensionality reduction and support vector machines. The study uses fivefold cross-validation to obtain generalisation accuracies and errors. Setting Kuwait Health Network (KHN), which integrates data from primary health centres and hospitals in Kuwait. Participants 270 172 hospital visitors (of whom 89 858 are diabetic, 58 745 hypertensive and 30 522 comorbid) comprising Kuwaiti natives and Asian and Arab expatriates. Outcome measures Incident type 2 diabetes, hypertension and comorbidity. Results Classification accuracies of >85% (for diabetes) and >90% (for hypertension) are achieved using only simple non-laboratory-based parameters. Risk assessment tools based on k-NN classification models are able to assign ‘high’ risk to 75% of diabetic patients and to 94% of hypertensive patients. Only 5% of diabetic patients are assigned ‘low’ risk. Asian-specific models and assessments perform even better. Pathological conditions of diabetes (in the general or hypertensive population) and of hypertension are modelled. Two-stage aggregate classification models and risk assessment tools, built by combining the component models on diabetes (or on hypertension), perform better than the individual models. Conclusions Data on diabetes, hypertension and comorbidity from the cosmopolitan State of Kuwait are available for the first time. This enabled us to apply four different case–control models to assess risks. These tools aid in the preliminary non-intrusive assessment of the population. Ethnicity is seen to be significant in the predictive models. Risk assessments need to be developed using regional data, as we demonstrate the limited applicability of the American Diabetes Association online calculator to data from Kuwait. PMID:23676796
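
    Of the four techniques, k-NN underlies the risk assessment tools described above. A minimal sketch, assuming hypothetical non-laboratory features (age, BMI) and toy training points rather than the KHN records:

```python
# k-nearest-neighbours classification by majority vote among the k closest
# training points in Euclidean distance. Features and labels are invented.
from math import dist
from collections import Counter

def knn_predict(train, labels, query, k=3):
    """Predict the majority label among the k nearest training points."""
    ranked = sorted(range(len(train)), key=lambda i: dist(train[i], query))
    votes = Counter(labels[i] for i in ranked[:k])
    return votes.most_common(1)[0][0]

train = [(35, 24.0), (62, 31.5), (58, 29.8), (29, 22.1), (66, 33.0)]  # (age, BMI)
labels = ["low", "high", "high", "low", "high"]
print(knn_predict(train, labels, (60, 30.0)))  # "high"
```

    The study's risk tools assign ‘high’/‘low’ categories in essentially this way, though with its own feature set and a choice of k not given in the abstract.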

  15. The pediatric sepsis biomarker risk model: potential implications for sepsis therapy and biology.

    PubMed

    Alder, Matthew N; Lindsell, Christopher J; Wong, Hector R

    2014-07-01

    Sepsis remains a major cause of morbidity and mortality in adult and pediatric intensive care units. Heterogeneity of demographics, comorbidities, biological mechanisms, and severity of illness leads to difficulty in determining which patients are at highest risk of mortality. Determining mortality risk is important for weighing the potential benefits of more aggressive interventions and for deciding whom to enroll in clinical trials. Biomarkers can be used to parse patients into different risk categories and can outperform current methods of patient risk stratification based on physiologic parameters. Here we review the Pediatric Sepsis Biomarker Risk Model that has also been modified and applied to estimate mortality risk in adult patients. We compare the two models and speculate on the biological implications of the biomarkers in patients with sepsis.

  16. Assessment of BTEX-induced health risk under multiple uncertainties at a petroleum-contaminated site: An integrated fuzzy stochastic approach

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaodong; Huang, Guo H.

    2011-12-01

    Groundwater pollution has attracted increasing attention in recent decades. Conducting an assessment of groundwater contamination risk is desired to provide sound bases for supporting risk-based management decisions. Therefore, the objective of this study is to develop an integrated fuzzy stochastic approach to evaluate risks of BTEX-contaminated groundwater under multiple uncertainties. It consists of an integrated interval fuzzy subsurface modeling system (IIFMS) and an integrated fuzzy second-order stochastic risk assessment (IFSOSRA) model. The IIFMS is developed based on factorial design, interval analysis, and a fuzzy sets approach to predict contaminant concentrations under hybrid uncertainties. Two input parameters (longitudinal dispersivity and porosity) are considered to be uncertain with known fuzzy membership functions, and intrinsic permeability is considered to be an interval number with unknown distribution information. A factorial design is conducted to evaluate interactive effects of the three uncertain factors on the modeling outputs through the developed IIFMS. The IFSOSRA model can systematically quantify variability and uncertainty, as well as their hybrids, presented as fuzzy, stochastic and second-order stochastic parameters in health risk assessment. The developed approach has been applied to the management of a real-world petroleum-contaminated site within a western Canada context. The results indicate that multiple uncertainties, under a combination of information with various data-quality levels, can be effectively addressed to provide support in identifying proper remedial efforts. A unique contribution of this research is the development of an integrated fuzzy stochastic approach for handling various forms of uncertainties associated with simulation and risk assessment efforts.
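
    One building block of such a fuzzy approach, representing an uncertain input as a triangular fuzzy number and extracting alpha-cut intervals for the simulation model to run over, can be sketched as follows. The porosity support values are invented for illustration, not taken from the study.

```python
# Triangular fuzzy number (low, mode, high) with membership evaluation and
# alpha-cuts. An alpha-cut yields the interval of values whose membership
# degree is at least alpha, which interval-based models can then propagate.

def triangular_membership(x, low, mode, high):
    """Degree of membership of x in the triangular fuzzy number."""
    if x <= low or x >= high:
        return 0.0
    if x <= mode:
        return (x - low) / (mode - low)
    return (high - x) / (high - mode)

def alpha_cut(alpha, low, mode, high):
    """Interval of values with membership >= alpha."""
    return (low + alpha * (mode - low), high - alpha * (high - mode))

porosity = (0.25, 0.30, 0.40)                  # hypothetical fuzzy porosity
print(triangular_membership(0.30, *porosity))  # 1.0 at the mode
print(alpha_cut(0.5, *porosity))               # narrower interval at higher alpha
```

    Sweeping alpha from 0 to 1 produces the nested intervals over which a subsurface model such as the IIFMS would be evaluated.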

  17. A Matter of Classes: Stratifying Health Care Populations to Produce Better Estimates of Inpatient Costs

    PubMed Central

    Rein, David B

    2005-01-01

    Objective To stratify traditional risk-adjustment models by health severity classes in a way that is empirically based, is accessible to policy makers, and improves predictions of inpatient costs. Data Sources Secondary data created from the administrative claims from all 829,356 children aged 21 years and under enrolled in Georgia Medicaid in 1999. Study Design A finite mixture model was used to assign child Medicaid patients to health severity classes. These class assignments were then used to stratify both portions of a traditional two-part risk-adjustment model predicting inpatient Medicaid expenditures. Traditional model results were compared with the stratified model using actuarial statistics. Principal Findings The finite mixture model identified four classes of children: a majority healthy class and three illness classes with increasing levels of severity. Stratifying the traditional two-part risk-adjustment model by health severity classes improved its R2 from 0.17 to 0.25. The majority of additional predictive power resulted from stratifying the second part of the two-part model. Further, the preference for the stratified model was unaffected by months of patient enrollment time. Conclusions Stratifying health care populations based on measures of health severity is a powerful method to achieve more accurate cost predictions. Insurers who ignore the predictive advances of sample stratification in setting risk-adjusted premiums may create strong financial incentives for adverse selection. Finite mixture models provide an empirically based, replicable methodology for stratification that should be accessible to most health care financial managers. PMID:16033501
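
    The class-assignment step of a finite mixture model can be illustrated compactly. The two components, weights, and log-cost parameters below are invented for illustration; the study fit a four-class mixture to Medicaid claims.

```python
# Assign each observation to the mixture component with the highest
# posterior responsibility, given already-fitted Gaussian components.
from math import exp, pi, sqrt

def normal_pdf(x, mu, sd):
    return exp(-((x - mu) ** 2) / (2 * sd ** 2)) / (sd * sqrt(2 * pi))

def assign_class(x, weights, mus, sds):
    """Index of the component with the largest posterior probability."""
    post = [w * normal_pdf(x, m, s) for w, m, s in zip(weights, mus, sds)]
    return max(range(len(post)), key=lambda k: post[k])

weights = [0.8, 0.2]               # hypothetical: majority-healthy vs high-severity
mus, sds = [4.0, 8.0], [1.0, 1.5]  # annual log-cost scale, illustrative
print(assign_class(3.5, weights, mus, sds))  # 0 (healthy class)
print(assign_class(9.0, weights, mus, sds))  # 1 (high-severity class)
```

    Stratifying the two-part cost model then means fitting it separately within each assigned class, which is where the reported R2 gain (0.17 to 0.25) came from.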

  18. Quantitative assessment of human health risk posed by polycyclic aromatic hydrocarbons in urban road dust.

    PubMed

    Ma, Yukun; Liu, An; Egodawatta, Prasanna; McGree, James; Goonetilleke, Ashantha

    2017-01-01

    Among the numerous pollutants present in urban road dust, polycyclic aromatic hydrocarbons (PAHs) are among the most toxic chemical pollutants and can pose cancer risk to humans. The primary aim of the study was to develop a quantitative model to assess the cancer risk from PAHs in urban road dust based on traffic and land use factors and thereby to characterise the risk posed by PAHs in fine (<150μm) and coarse (>150μm) particles. The risk posed by PAHs was quantified as incremental lifetime cancer risk (ILCR), which was modelled as a function of traffic volume and percentages of different urban land uses. The study outcomes highlighted the fact that cancer risk from PAHs in urban road dust is primarily influenced by PAHs associated with fine solids. Heavy PAHs with 5 to 6 benzene rings, especially dibenzo[a,h]anthracene (D[a]A) and benzo[a]pyrene (B[a]P) in the mixture contribute most to the risk. The quantitative model developed based on traffic and land use factors will contribute to informed decision making in relation to the management of risk posed by PAHs in urban road dust. Copyright © 2016 Elsevier B.V. All rights reserved.
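
    The ILCR quantity itself is conventionally computed from a chronic daily intake and a cancer slope factor. The sketch below shows a generic EPA-style ingestion-route calculation with default exposure factors, not the traffic/land-use regression developed in the paper; the B[a]P concentration is hypothetical, while the 7.3 (mg/kg-day)^-1 oral slope factor is the standard IRIS value for B[a]P.

```python
# Generic ingestion-route ILCR: chronic daily intake (CDI) times the
# chemical's cancer slope factor (SF). Exposure factors are common adult
# defaults (100 mg/day dust ingestion, 350 days/yr, 24 yr, 70 kg, 70-yr
# averaging time in days); 1e-6 converts mg/kg to kg/mg consistency.

def ilcr_ingestion(conc_mg_kg, slope_factor, ing_rate_mg_day=100,
                   ef_days_yr=350, ed_years=24, bw_kg=70, at_days=25550):
    """ILCR = (C * IngR * 1e-6 * EF * ED) / (BW * AT) * SF  (unitless risk)."""
    cdi = (conc_mg_kg * ing_rate_mg_day * 1e-6 * ef_days_yr * ed_years) \
          / (bw_kg * at_days)
    return cdi * slope_factor

# hypothetical B[a]P concentration of 2 mg/kg in road dust
risk = ilcr_ingestion(2.0, 7.3)  # ~6.9e-6, above the common 1e-6 screening level
```

    The paper's contribution is to predict such risk levels from traffic volume and land-use percentages rather than from measured concentrations alone.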

  19. A three-gene expression signature model for risk stratification of patients with neuroblastoma.

    PubMed

    Garcia, Idoia; Mayol, Gemma; Ríos, José; Domenech, Gema; Cheung, Nai-Kong V; Oberthuer, André; Fischer, Matthias; Maris, John M; Brodeur, Garrett M; Hero, Barbara; Rodríguez, Eva; Suñol, Mariona; Galvan, Patricia; de Torres, Carmen; Mora, Jaume; Lavarino, Cinzia

    2012-04-01

    Neuroblastoma is an embryonal tumor with contrasting clinical courses. Despite elaborate stratification strategies, precise clinical risk assessment still remains a challenge. The purpose of this study was to develop a PCR-based predictor model to improve clinical risk assessment of patients with neuroblastoma. The model was developed using real-time PCR gene expression data from 96 samples and tested on separate expression data sets obtained from real-time PCR and microarray studies comprising 362 patients. On the basis of our prior study of differentially expressed genes in favorable and unfavorable neuroblastoma subgroups, we identified three genes, CHD5, PAFAH1B1, and NME1, strongly associated with patient outcome. The expression pattern of these genes was used to develop a PCR-based single-score predictor model. The model discriminated patients into two groups with significantly different clinical outcome [set 1: 5-year overall survival (OS): 0.93 ± 0.03 vs. 0.53 ± 0.06, 5-year event-free survival (EFS): 0.85 ± 0.04 vs. 0.42 ± 0.06, both P < 0.001; set 2 OS: 0.97 ± 0.02 vs. 0.61 ± 0.1, P = 0.005, EFS: 0.91 ± 0.08 vs. 0.56 ± 0.1, P = 0.005; and set 3 OS: 0.99 ± 0.01 vs. 0.56 ± 0.06, EFS: 0.96 ± 0.02 vs. 0.43 ± 0.05, both P < 0.001]. Multivariate analysis showed that the model was an independent marker for survival (P < 0.001 for all). In comparison with accepted risk stratification systems, the model robustly classified patients in the total cohort and in different clinically relevant risk subgroups. We propose, for the first time in neuroblastoma, a technically simple PCR-based predictor model that could help refine current risk stratification systems. ©2012 AACR.

  20. A Three-Gene Expression Signature Model for Risk Stratification of Patients with Neuroblastoma

    PubMed Central

    Garcia, Idoia; Mayol, Gemma; Ríos, José; Domenech, Gema; Cheung, Nai-Kong V.; Oberthuer, André; Fischer, Matthias; Maris, John M.; Brodeur, Garrett M.; Hero, Barbara; Rodríguez, Eva; Suñol, Mariona; Galvan, Patricia; de Torres, Carmen; Mora, Jaume; Lavarino, Cinzia

    2014-01-01

    Purpose Neuroblastoma is an embryonal tumor with contrasting clinical courses. Despite elaborate stratification strategies, precise clinical risk assessment still remains a challenge. The purpose of this study was to develop a PCR-based predictor model to improve clinical risk assessment of patients with neuroblastoma. Experimental Design The model was developed using real-time PCR gene expression data from 96 samples and tested on separate expression data sets obtained from real-time PCR and microarray studies comprising 362 patients. Results On the basis of our prior study of differentially expressed genes in favorable and unfavorable neuroblastoma subgroups, we identified three genes, CHD5, PAFAH1B1, and NME1, strongly associated with patient outcome. The expression pattern of these genes was used to develop a PCR-based single-score predictor model. The model discriminated patients into two groups with significantly different clinical outcome [set 1: 5-year overall survival (OS): 0.93 ± 0.03 vs. 0.53 ± 0.06, 5-year event-free survival (EFS): 0.85 ± 0.04 vs. 0.42 ± 0.06, both P < 0.001; set 2 OS: 0.97 ± 0.02 vs. 0.61 ± 0.1, P = 0.005, EFS: 0.91 ± 0.08 vs. 0.56 ± 0.1, P = 0.005; and set 3 OS: 0.99 ± 0.01 vs. 0.56 ± 0.06, EFS: 0.96 ± 0.02 vs. 0.43 ± 0.05, both P < 0.001]. Multivariate analysis showed that the model was an independent marker for survival (P < 0.001 for all). In comparison with accepted risk stratification systems, the model robustly classified patients in the total cohort and in different clinically relevant risk subgroups. Conclusion We propose, for the first time in neuroblastoma, a technically simple PCR-based predictor model that could help refine current risk stratification systems. PMID:22328561

  1. Benchmark dose analysis via nonparametric regression modeling

    PubMed Central

    Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen

    2013-01-01

    Estimation of benchmark doses (BMDs) in quantitative risk assessment traditionally is based upon parametric dose-response modeling. It is a well-known concern, however, that if the chosen parametric model is uncertain and/or misspecified, inaccurate and possibly unsafe low-dose inferences can result. We describe a nonparametric approach for estimating BMDs with quantal-response data based on an isotonic regression method, and also study use of corresponding, nonparametric, bootstrap-based confidence limits for the BMD. We explore the confidence limits’ small-sample properties via a simulation study, and illustrate the calculations with an example from cancer risk assessment. It is seen that this nonparametric approach can provide a useful alternative for BMD estimation when faced with the problem of parametric model uncertainty. PMID:23683057
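
    The isotonic-regression fit at the core of this approach can be computed with the pool-adjacent-violators algorithm. A minimal unweighted sketch on toy quantal-response proportions (equal dose-group sizes assumed):

```python
# Pool-adjacent-violators (PAV): least-squares fit of a non-decreasing
# sequence to observed response proportions over increasing dose groups.

def pava(y):
    """Non-decreasing isotonic fit; adjacent violators are pooled to their mean."""
    blocks = []  # each block: [mean, count]
    for v in y:
        blocks.append([v, 1])
        # pool while the newest block violates monotonicity with its predecessor
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, n2 = blocks.pop()
            m1, n1 = blocks.pop()
            blocks.append([(m1 * n1 + m2 * n2) / (n1 + n2), n1 + n2])
    out = []
    for m, n in blocks:
        out.extend([m] * n)  # expand block means back to one value per group
    return out

# toy tumour proportions per increasing dose: the dip at the third dose is pooled
fit = pava([0.05, 0.10, 0.08, 0.30, 0.50])
```

    A BMD estimate is then read off this monotone curve at the benchmark response level, with bootstrap resampling (as in the paper) supplying confidence limits.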

  2. Predicting cannabis abuse screening test (CAST) scores: a recursive partitioning analysis using survey data from Czech Republic, Italy, the Netherlands and Sweden.

    PubMed

    Blankers, Matthijs; Frijns, Tom; Belackova, Vendula; Rossi, Carla; Svensson, Bengt; Trautmann, Franz; van Laar, Margriet

    2014-01-01

    Cannabis is Europe's most commonly used illicit drug. Some users do not develop dependence or other problems, whereas others do. Many factors are associated with the occurrence of cannabis-related disorders. This makes it difficult to identify key risk factors and markers to profile at-risk cannabis users using traditional hypothesis-driven approaches. Therefore, the use of a data-mining technique called binary recursive partitioning is demonstrated in this study by creating a classification tree to profile at-risk users. 59 variables on cannabis use and drug market experiences were extracted from an internet-based survey dataset collected in four European countries (Czech Republic, Italy, Netherlands and Sweden), n = 2617. These 59 potential predictors of problematic cannabis use were used to partition individual respondents into subgroups with low and high risk of having a cannabis use disorder, based on their responses on the Cannabis Abuse Screening Test. Both a generic model for the four countries combined and four country-specific models were constructed. Of the 59 variables included in the first analysis step, only three variables were required to construct a generic partitioning model to classify high risk cannabis users with 65-73% accuracy. Based on the generic model for the four countries combined, the highest risk for cannabis use disorder is seen in participants reporting a cannabis use on more than 200 days in the last 12 months. In comparison to the generic model, the country-specific models led to modest, non-significant improvements in classification accuracy, with an exception for Italy (p = 0.01). Using recursive partitioning, it is feasible to construct classification trees based on only a few variables with acceptable performance to classify cannabis users into groups with low or high risk of meeting criteria for cannabis use disorder. The number of cannabis use days in the last 12 months is the most relevant variable. 
The identified variables may be considered for use in future screeners for cannabis use disorders.
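
    A single binary-partitioning step, scanning thresholds on one predictor for the split that minimizes weighted Gini impurity, can be sketched as follows. The toy data are constructed so that the 200-day threshold highlighted in the abstract emerges; they are not the survey data.

```python
# One step of binary recursive partitioning: find the threshold t on a
# numeric predictor that best separates high-risk (1) from low-risk (0)
# respondents by weighted Gini impurity of the split x <= t vs x > t.

def gini(labels):
    """Gini impurity of a binary label set: 2 * p * (1 - p)."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def best_split(x, y):
    """Return (threshold, impurity) of the best split on predictor x."""
    best = (None, float("inf"))
    for t in sorted(set(x))[:-1]:
        left = [yi for xi, yi in zip(x, y) if xi <= t]
        right = [yi for xi, yi in zip(x, y) if xi > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if score < best[1]:
            best = (t, score)
    return best

days = [10, 50, 120, 200, 250, 300, 340, 365]  # toy cannabis use-days
high_risk = [0, 0, 0, 0, 1, 1, 1, 1]           # 1 = positive CAST screen
print(best_split(days, high_risk))  # (200, 0.0): split at more than 200 days
```

    A full tree repeats this step recursively on each resulting subgroup; the abstract's finding is that very few such splits were needed for acceptable accuracy.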

  3. A risk-based coverage model for video surveillance camera control optimization

    NASA Astrophysics Data System (ADS)

    Zhang, Hongzhou; Du, Zhiguo; Zhao, Xingtao; Li, Peiyue; Li, Dehua

    2015-12-01

    Visual surveillance systems for law enforcement or police case investigation differ from traditional applications, as they are designed to monitor pedestrians, vehicles, or potential accidents. In the present work, visual surveillance risk is defined as the uncertainty of the visual information about monitored targets and events, and risk entropy is introduced to model the requirements of police surveillance tasks on the quality and quantity of video information. The proposed coverage model is applied to calculate the preset field-of-view (FoV) positions of PTZ cameras.

  4. Use of Model-Based Design Methods for Enhancing Resiliency Analysis of Unmanned Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Knox, Lenora A.

    The most common traditional non-functional requirement analysis is reliability. With systems becoming more complex, networked, and adaptive to environmental uncertainties, system resiliency has recently become the non-functional requirement analysis of choice. Analysis of system resiliency presents challenges, which include defining resilience for domain areas, identifying resilience metrics, determining resilience modeling strategies, and understanding how best to integrate the concepts of risk and reliability into resiliency. Formal methods that integrate all of these concepts do not currently exist in specific domain areas. Leveraging RAMSoS, a model-based reliability analysis methodology for Systems of Systems (SoS), we propose an extension that accounts for resiliency analysis through evaluation of mission performance, risk, and cost using multi-criteria decision-making (MCDM) modeling and design trade study variability modeling evaluation techniques. This proposed methodology, coined RAMSoS-RESIL, is applied to a case study in the multi-agent unmanned aerial vehicle (UAV) domain to investigate the potential benefits of a mission architecture where functionality to complete a mission is disseminated across multiple UAVs (distributed) as opposed to being contained in a single UAV (monolithic). The case-study-based research demonstrates proof of concept for the proposed model-based technique and provides sufficient preliminary evidence to conclude which architectural design (distributed vs. monolithic) is most resilient, based on insight into mission resilience performance, risk, and cost in addition to the traditional analysis of reliability.

  5. Making Invasion models useful for decision makers; incorporating uncertainty, knowledge gaps, and decision-making preferences

    Treesearch

    Denys Yemshanov; Frank H Koch; Mark Ducey

    2015-01-01

    Uncertainty is inherent in model-based forecasts of ecological invasions. In this chapter, we explore how the perceptions of that uncertainty can be incorporated into the pest risk assessment process. Uncertainty changes a decision maker’s perceptions of risk; therefore, the direct incorporation of uncertainty may provide a more appropriate depiction of risk. Our...

  6. Syndromic surveillance system based on near real-time cattle mortality monitoring.

    PubMed

    Torres, G; Ciaravino, V; Ascaso, S; Flores, V; Romero, L; Simón, F

    2015-05-01

    Early detection of an infectious disease incursion will minimize the impact of outbreaks in livestock. Syndromic surveillance based on the analysis of readily available data can enhance traditional surveillance systems and allow veterinary authorities to react in a timely manner. This study was based on monitoring the number of cattle carcasses sent for rendering in the veterinary unit of Talavera de la Reina (Spain). The aim was to develop a system to detect deviations from expected values which would signal unexpected health events. Historical weekly collected dead cattle (WCDC) time series stabilized by the Box-Cox transformation and adjusted by the minimum least squares method were used to build the univariate cycling regression model based on a Fourier transformation. Three different models, according to type of production system, were built to estimate the baseline expected number of WCDC. Two types of risk signals were generated: point risk signals when the observed value was greater than the upper 95% confidence interval of the expected baseline, and cumulative risk signals, generated by a modified cumulative sum algorithm, when the cumulative sums of reported deaths were above the cumulative sum of expected deaths. Data from 2011 were used to prospectively validate the model generating seven risk signals. None of them were correlated to infectious disease events but some coincided, in time, with very high climatic temperatures recorded in the region. The harvest effect was also observed during the first week of the study year. Establishing appropriate risk signal thresholds is a limiting factor of predictive models; it needs to be adjusted based on experience gained during the use of the models. To increase the sensitivity and specificity of the predictions epidemiological interpretation of non-specific risk signals should be complemented by other sources of information. 
The methodology developed in this study can enhance other existing early detection surveillance systems. Syndromic surveillance based on mortality monitoring can reduce the detection time for certain disease outbreaks associated with mild mortality only detected at regional level. The methodology can be adapted to monitor other parameters routinely collected at farm level which can be influenced by communicable diseases. Copyright © 2015 Elsevier B.V. All rights reserved.
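
    The baseline-plus-signal logic described above can be sketched with a single Fourier harmonic and a Poisson-style upper bound. The coefficients, the bound form, and the 52-week period below are illustrative stand-ins for the study's fitted per-production-system models.

```python
# Seasonal (Fourier-form) baseline of expected weekly deaths, with a point
# risk signal raised when the observed count exceeds the upper 95% bound.
from math import cos, sin, pi, sqrt

def expected_deaths(week, a0=20.0, a1=4.0, b1=2.0, period=52):
    """Cycling-regression baseline: mean level plus one seasonal harmonic."""
    w = 2 * pi * week / period
    return a0 + a1 * cos(w) + b1 * sin(w)

def point_risk_signal(week, observed, z=1.96):
    """Signal if observed > baseline + z * sqrt(baseline) (Poisson-style bound)."""
    mu = expected_deaths(week)
    return observed > mu + z * sqrt(mu)

print(point_risk_signal(0, 25))  # baseline 24.0, bound ~33.6 -> False
print(point_risk_signal(0, 40))  # -> True
```

    The study's cumulative signals add a CUSUM layer on top of this: sums of observed minus expected deaths are tracked and compared rather than single weeks.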

  7. VTE Risk assessment - a prognostic Model: BATER Cohort Study of young women.

    PubMed

    Heinemann, Lothar Aj; Dominh, Thai; Assmann, Anita; Schramm, Wolfgang; Schürmann, Rolf; Hilpert, Jan; Spannagl, Michael

    2005-04-18

    BACKGROUND: Community-based cohort studies are not available that have evaluated the predictive power of both clinical and genetic risk factors for venous thromboembolism (VTE). There is, however, a clinical need to forecast the likelihood of future occurrence of VTE, at least qualitatively, to support decisions about the intensity of diagnostic or preventive measures. MATERIALS AND METHODS: A 10-year observation period of the Bavarian Thromboembolic Risk (BATER) study, a cohort study of 4337 women (18-55 years), was used to develop a predictive model of VTE based on clinical and genetic variables at baseline (1993). The objective was to prepare a probabilistic scheme that discriminates women with virtually no VTE risk from those at higher levels of absolute VTE risk in the foreseeable future. A multivariate analysis determined which variables at baseline were the best predictors of a future VTE event, provided a ranking according to predictive power, and permitted the design of a simple graphic scheme to assess individual VTE risk using five predictor variables. RESULTS: Thirty-four new confirmed VTEs occurred during the observation period of over 32,000 women-years (WYs). A model was developed based mainly on clinical information (personal history of previous VTE and family history of VTE, age, BMI) and one composite genetic risk marker (combining Factor V Leiden and the Prothrombin G20210A mutation). Four levels of increasing VTE risk were arbitrarily defined to map the prevalence in the study population: no/low risk of VTE (61.3%), moderate risk (21.1%), high risk (6.0%), and very high risk of future VTE (0.9%). In 10.6% of the population the risk assessment was not possible owing to a lack of VTE cases. The average incidence rates for VTE in these four levels were 4.1, 12.3, 47.2, and 170.5 per 10(4) WYs for no, moderate, high, and very high risk, respectively.
CONCLUSION: Our prognostic tool - containing clinical information (and if available also genetic data) - seems to be worthwhile testing in medical practice in order to confirm or refute the positive findings of this study. Our cohort study will be continued to include more VTE cases and to increase predictive value of the model.

  8. Managing Disease Risks from Trade: Strategic Behavior with Many Choices and Price Effects.

    PubMed

    Chitchumnong, Piyayut; Horan, Richard D

    2018-03-16

    An individual's infectious disease risks, and hence the individual's incentives for risk mitigation, may be influenced by others' risk management choices. If so, then there will be strategic interactions among individuals, whereby each makes his or her own risk management decisions based, at least in part, on the expected decisions of others. Prior work has shown that multiple equilibria could arise in this setting, with one equilibrium being a coordination failure in which individuals make too few investments in protection. However, these results are largely based on simplified models involving a single management choice and fixed prices that may influence risk management incentives. Relaxing these assumptions, we find strategic interactions influence, and are influenced by, choices involving multiple management options and market price effects. In particular, we find these features can reduce or eliminate concerns about multiple equilibria and coordination failure. This has important policy implications relative to simpler models.

  9. Providing a Theoretical Basis for Nanotoxicity Risk Analysis Departing from Traditional Physiologically-Based Pharmacokinetic (PBPK) Modeling

    DTIC Science & Technology

    2010-09-01

    estimation of total exposure at any toxicological endpoint in the body. This effort is a significant contribution as it highlights future research needs...rigorous modeling of the nanoparticle transport by including physico-chemical properties of engineered particles. Similarly, toxicological dose-response...exposure risks as compared to larger sized particles of the same material. Although the toxicology of a base material may be thoroughly defined, the

  10. Household-level disparities in cancer risks from vehicular air pollution in Miami

    NASA Astrophysics Data System (ADS)

    Collins, Timothy W.; Grineski, Sara E.; Chakraborty, Jayajit

    2015-09-01

    Environmental justice (EJ) research has relied on ecological analyses of socio-demographic data from areal units to determine if particular populations are disproportionately burdened by toxic risks. This article advances quantitative EJ research by (a) examining whether statistical associations found for geographic units translate to relationships at the household level; (b) testing alternative explanations for distributional injustices never before investigated; and (c) applying a novel statistical technique appropriate for geographically-clustered data. Our study makes these advances by using generalized estimating equations to examine distributive environmental inequities in the Miami (Florida) metropolitan area, based on primary household-level survey data and census block-level cancer risk estimates of hazardous air pollutant (HAP) exposure from on-road mobile emission sources. In addition to modeling determinants of on-road HAP cancer risk among all survey participants, two subgroup models are estimated to examine whether determinants of risk differ based on disadvantaged minority (Hispanic and non-Hispanic Black) versus non-Hispanic white racial/ethnic status. Results reveal multiple determinants of risk exposure disparities. In the model including all survey participants, renter-occupancy, Hispanic and non-Hispanic black race/ethnicity, the desire to live close to work/urban services or public transportation, and higher risk perception are associated with greater on-road HAP cancer risk; the desire to live in an amenity-rich environment is associated with less risk. Divergent subgroup model results shed light on the previously unexamined role of racial/ethnic status in shaping determinants of risk exposures. 
While lower socioeconomic status and higher risk perception predict significantly greater on-road HAP cancer risk among disadvantaged minorities, the desire to live near work/urban services or public transport predicts significantly greater risk among non-Hispanic whites. Findings have important implications for EJ research and practice in Miami and elsewhere.

  11. Naloxone Distribution and Training for Patients with High-Risk Opioid Use in a Veterans Affairs Community-Based Primary Care Clinic.

    PubMed

    Raffel, Katie E; Beach, Leila Y; Lin, John; Berchuck, Jacob E; Abram, Shelly; Markle, Elizabeth; Patel, Shalini

    2018-03-30

    Naloxone distribution has historically been implemented in a community-based, expanded public health model; however, there is now a need to further explore primary care clinic-based naloxone delivery to effectively address the nationwide opioid epidemic. To create a general medicine infrastructure to identify patients with high-risk opioid use and provide 25% of this population with a naloxone autoinjector prescription and training within a 6-month period. The quality improvement study was conducted at an outpatient clinic serving 1238 marginally housed veterans with high rates of comorbid substance use and mental health disorders. Patients at high risk of opioid-related adverse events were identified using the Stratification Tool for Opioid Risk Management and were contacted to participate in a one-on-one, 15-minute, hands-on naloxone training led by nursing staff. The number of patients identified at high risk and rates of naloxone training/distribution. There were 67 patients identified as having high-risk opioid use. None of these patients had been prescribed naloxone at baseline. At the end of the intervention, 61 patients (91%) had been trained in the use of naloxone. Naloxone was primarily distributed by licensed vocational nurses (42/61, 69%). This study demonstrates the feasibility of high-risk patient identification and of a primary care-based and nursing-championed naloxone distribution model. This delivery model has the potential to provide access to naloxone to a population of patients with opioid use who may not be engaged in mental health or specialty care.

  12. Naloxone Distribution and Training for Patients with High-Risk Opioid Use in a Veterans Affairs Community-Based Primary Care Clinic

    PubMed Central

    Raffel, Katie E; Beach, Leila Y; Lin, John; Berchuck, Jacob E; Abram, Shelly; Markle, Elizabeth; Patel, Shalini

    2018-01-01

    Context Naloxone distribution has historically been implemented in a community-based, expanded public health model; however, there is now a need to further explore primary care clinic-based naloxone delivery to effectively address the nationwide opioid epidemic. Objective To create a general medicine infrastructure to identify patients with high-risk opioid use and provide 25% of this population with a naloxone autoinjector prescription and training within a 6-month period. Design The quality improvement study was conducted at an outpatient clinic serving 1238 marginally housed veterans with high rates of comorbid substance use and mental health disorders. Patients at high risk of opioid-related adverse events were identified using the Stratification Tool for Opioid Risk Management and were contacted to participate in a one-on-one, 15-minute, hands-on naloxone training led by nursing staff. Main Outcome Measures The number of patients identified at high risk and rates of naloxone training/distribution. Results There were 67 patients identified as having high-risk opioid use. None of these patients had been prescribed naloxone at baseline. At the end of the intervention, 61 patients (91%) had been trained in the use of naloxone. Naloxone was primarily distributed by licensed vocational nurses (42/61, 69%). Conclusion This study demonstrates the feasibility of high-risk patient identification and of a primary care-based and nursing-championed naloxone distribution model. This delivery model has the potential to provide access to naloxone to a population of patients with opioid use who may not be engaged in mental health or specialty care. PMID:29616917

  13. The impact of birth weight on cardiovascular disease risk in the Women's Health Initiative

    PubMed Central

    Smith, CJ; Ryckman, KK; Barnabei, Vanessa M.; Howard, Barbara; Isasi, Carmen R.; Sarto, Gloria; Tom, Sarah E.; Van Horn, Linda; Wallace, Robert; Robinson, Jennifer G

    2016-01-01

    Background and Aims Cardiovascular disease (CVD) is among the leading causes of morbidity and mortality worldwide. Traditional risk factors predict 75-80% of an individual's risk of incident CVD. However, the role of early life experiences in future disease risk is gaining attention. The Barker hypothesis proposes fetal origins of adult disease, with consistent evidence demonstrating the deleterious consequences of birth weight outside the normal range. In this study, we investigate the role of birth weight in CVD risk prediction. Methods and Results The Women's Health Initiative (WHI) represents a large national cohort of post-menopausal women with 63 815 participants included in this analysis. Univariable proportional hazards regression analyses evaluated the association of 4 self-reported birth weight categories against 3 CVD outcome definitions, which included indicators of coronary heart disease, ischemic stroke, coronary revascularization, carotid artery disease and peripheral arterial disease. The role of birth weight was also evaluated for prediction of CVD events in the presence of traditional risk factors using 3 existing CVD risk prediction equations: one body mass index (BMI)-based and two laboratory-based models. Low birth weight (LBW) (< 6 lbs.) was significantly associated with all CVD outcome definitions in univariable analyses (HR=1.086, p=0.009). LBW was a significant covariate in the BMI-based model (HR=1.128, p<0.0001) but not in the lipid-based models. Conclusion LBW (<6 lbs.) is independently associated with CVD outcomes in the WHI cohort. This finding supports the role of the prenatal and postnatal environment in contributing to the development of adult chronic disease. PMID:26708645

  14. Modeling Finite-Time Failure Probabilities in Risk Analysis Applications.

    PubMed

    Dimitrova, Dimitrina S; Kaishev, Vladimir K; Zhao, Shouqi

    2015-10-01

    In this article, we introduce a framework for analyzing the risk of systems failure based on estimating the failure probability. The latter is defined as the probability that a certain risk process, characterizing the operations of a system, reaches a possibly time-dependent critical risk level within a finite-time interval. Under general assumptions, we define two dually connected models for the risk process and derive explicit expressions for the failure probability and also the joint probability of the time of the occurrence of failure and the excess of the risk process over the risk level. We illustrate how these probabilistic models and results can be successfully applied in several important areas of risk analysis, among which are systems reliability, inventory management, flood control via dam management, infectious disease spread, and financial insolvency. Numerical illustrations are also presented. © 2015 Society for Risk Analysis.
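
    The article derives explicit expressions for these probabilities; as a quick intuition check, the same finite-time failure probability can also be approximated by simulation. A minimal sketch, assuming a compound-Poisson risk process (linear growth at rate c, exponential losses at Poisson arrival times) and a constant critical level; all parameters are illustrative, not taken from the paper:

```python
import random

def finite_time_failure_prob(u, c, lam, mean_loss, horizon,
                             level=0.0, n_sims=20000, seed=7):
    """Monte Carlo estimate of P(risk process drops below `level`
    within [0, horizon]).  The process starts at u, grows linearly at
    rate c, and suffers exponentially distributed losses arriving as a
    Poisson process with rate lam."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_sims):
        t, total_loss = 0.0, 0.0
        while True:
            t += rng.expovariate(lam)            # next loss arrival
            if t > horizon:
                break                            # survived the horizon
            total_loss += rng.expovariate(1.0 / mean_loss)
            if u + c * t - total_loss < level:   # failure only at losses
                failures += 1
                break
    return failures / n_sims
```

    As expected, a larger initial buffer u lowers the estimated failure probability over the same horizon.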

  15. Do repeated assessments of performance status improve predictions for risk of death among patients with cancer? A population-based cohort study.

    PubMed

    Su, Jiandong; Barbera, Lisa; Sutradhar, Rinku

    2015-06-01

    Prior work has utilized longitudinal information on performance status to demonstrate its association with risk of death among cancer patients; however, no study has assessed whether such longitudinal information improves the predictions for risk of death. To examine whether the use of repeated performance status assessments improves predictions for risk of death compared to using only the performance status assessment at the time of cancer diagnosis. This was a population-based longitudinal study of adult outpatients who had a cancer diagnosis and had at least one assessment of performance status. To account for each patient's changing performance status over time, we implemented a Cox model with a time-varying covariate for performance status. This model was compared to a Cox model using only a time-fixed (baseline) covariate for performance status. The regression coefficients of each model were derived based on a randomly selected 60% of patients, and then the predictive ability of each model was assessed via concordance probabilities when applied to the remaining 40% of patients. Our study consisted of 15,487 cancer patients with over 53,000 performance status assessments. The utilization of repeated performance status assessments improved predictions for risk of death compared to using only the performance status assessment taken at diagnosis. When studying the hazard of death among patients with cancer, researchers should, where available, incorporate changing information on performance status scores instead of simply baseline information on performance status. © The Author(s) 2015.
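
    The comparison of predictive ability rests on concordance probabilities computed on the 40% hold-out set. A minimal sketch of Harrell's concordance index, the standard such measure (this is the generic definition, not the authors' exact estimator):

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C: among comparable pairs (the earlier time is an
    observed event), the fraction where the model assigns the higher
    risk score to the patient who died earlier.  Ties count 1/2."""
    num = den = 0.0
    n = len(times)
    for i in range(n):
        if not events[i]:
            continue                     # censored subjects cannot anchor a pair
        for j in range(n):
            if times[i] < times[j]:      # i's event observed before j's time
                den += 1
                if risk_scores[i] > risk_scores[j]:
                    num += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    num += 0.5
    return num / den
```

    A time-varying Cox model supplies updated risk scores for this comparison; 0.5 is chance-level discrimination and 1.0 is perfect ranking.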

  16. Temporal Variability of Cumulative Risk Assessment on Phthalates in Chinese Pregnant Women: Repeated Measurement Analysis.

    PubMed

    Gao, Hui; Zhu, Bei-Bei; Tao, Xing-Yong; Zhu, Yuan-Duo; Tao, Xu-Guang; Tao, Fang-Biao

    2018-06-05

    The assessment of the combined effects of multiple phthalate exposures at low levels is a newly developed concept intended to avoid underestimating their actual cumulative health risk. A previous study included 3455 Chinese pregnant women, each of whom provided up to three urine samples (9529 in total), and characterized the concentrations of phthalate metabolites. In the present study, the data from the 9529 samples were reanalyzed to examine the cumulative risk assessment (CRA) with two models: (1) creatinine-based and (2) volume-based. Hazard index (HI) values for three phthalates (dibutyl phthalate, butyl benzyl phthalate, and di(2-ethylhexyl) phthalate) were calculated for the first, second, and third trimesters of pregnancy. In the creatinine-based model, 3.43%, 14.63%, and 17.28% of women had an HI, based on the European Food Safety Authority tolerable daily intake, exceeding 1 in the first, second, and third trimester of pregnancy, respectively. The intraclass correlation coefficient of HI was 0.49 (95% confidence interval: 0.46-0.53). Spearman correlations between the HI of the creatinine model and ∑androgen disruptor (a potency-weighted approach) ranged from 0.824 to 0.984. In summary, this study suggested a considerable risk of cumulative exposure to phthalates throughout gestation in Chinese pregnant women. In addition, the moderate temporal reproducibility indicated that a single HI, estimated from the phthalate concentrations in a single spot urine sample, appeared representative of the CRA across the whole pregnancy. Finally, the strong correlation between the HI of the creatinine model and ∑androgen disruptor indicated that the creatinine-based model was the more appropriate for evaluating the CRA.
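
    An HI computation of the kind described can be sketched as follows. The back-calculation is the standard creatinine-correction formula used in urinary biomonitoring, and the TDI figures are the EFSA values commonly quoted for these three phthalates; treat both the formula inputs and the TDI table as assumptions rather than the study's exact parameters:

```python
# EFSA tolerable daily intakes, µg/kg-bw/day (assumed here for illustration)
TDI = {"DBP": 10.0, "BBzP": 500.0, "DEHP": 50.0}

def daily_intake_creatinine(conc_ug_per_g_cr, cr_mg_per_kg_day,
                            fue, mw_parent, mw_metab):
    """Creatinine-based back-calculation of parent-compound intake
    (µg/kg-bw/day) from a metabolite concentration (µg/g creatinine),
    creatinine excretion (mg/kg-bw/day), fractional urinary excretion
    fue, and the parent/metabolite molecular weights."""
    return (conc_ug_per_g_cr * cr_mg_per_kg_day / 1000.0) \
        * (mw_parent / mw_metab) / fue

def hazard_index(daily_intake):
    """HI = sum of hazard quotients (estimated daily intake / TDI);
    HI > 1 flags cumulative exposure above the tolerable level."""
    return sum(daily_intake[p] / TDI[p] for p in daily_intake)
```

    For example, intakes of 5, 50, and 25 µg/kg-bw/day for DBP, BBzP, and DEHP give HI = 0.5 + 0.1 + 0.5 = 1.1, i.e. a flagged exposure.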

  17. A Gaussian random field model for similarity-based smoothing in Bayesian disease mapping.

    PubMed

    Baptista, Helena; Mendes, Jorge M; MacNab, Ying C; Xavier, Miguel; Caldas-de-Almeida, José

    2016-08-01

    Conditionally specified Gaussian Markov random field (GMRF) models with an adjacency-based neighbourhood weight matrix, commonly known as neighbourhood-based GMRF models, have been the mainstream approach to spatial smoothing in Bayesian disease mapping. In the present paper, we propose a conditionally specified Gaussian random field (GRF) model with a similarity-based non-spatial weight matrix to facilitate non-spatial smoothing in Bayesian disease mapping. The model, named similarity-based GRF, is motivated by disease mapping situations in which the underlying small area relative risks and the associated determinant factors do not vary systematically in space, and similarity is defined with respect to the associated disease determinant factors. The neighbourhood-based GMRF and the similarity-based GRF are compared and assessed via a simulation study and two case studies, using new data on alcohol abuse in Portugal collected by the World Mental Health Survey Initiative and the well-known lip cancer data in Scotland. In the presence of disease data with no evidence of positive spatial correlation, the simulation study showed a consistent gain in efficiency from the similarity-based GRF, compared with the adjacency-based GMRF with the determinant risk factors as covariate. This new approach broadens the scope of the existing conditional autocorrelation models. © The Author(s) 2016.
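
    The key ingredient, a similarity-based rather than adjacency-based weight matrix, can be illustrated as below: areas with similar risk-factor values receive large weights regardless of where they sit on the map. The Gaussian kernel and bandwidth are assumptions for illustration, not the authors' specification:

```python
import math

def similarity_weight_matrix(covariate, bandwidth=1.0):
    """Non-spatial weights w[i][j] = exp(-d^2 / (2*bw^2)) on the
    distance d between areas' determinant-factor values, with a zero
    diagonal (no self-weight), mirroring how an adjacency matrix has
    w[i][j] = 1 only for bordering areas."""
    n = len(covariate)
    w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                d = covariate[i] - covariate[j]
                w[i][j] = math.exp(-(d * d) / (2.0 * bandwidth ** 2))
    return w
```

    The resulting matrix is symmetric, and pairs of areas with close covariate values dominate the smoothing, which is the intended behaviour when risks do not vary systematically in space.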

  18. Managing pregnancy of unknown location based on initial serum progesterone and serial serum hCG levels: development and validation of a two-step triage protocol.

    PubMed

    Van Calster, B; Bobdiwala, S; Guha, S; Van Hoorde, K; Al-Memar, M; Harvey, R; Farren, J; Kirk, E; Condous, G; Sur, S; Stalder, C; Timmerman, D; Bourne, T

    2016-11-01

    A uniform rationalized management protocol for pregnancies of unknown location (PUL) is lacking. We developed a two-step triage protocol to select PUL at high risk of ectopic pregnancy (EP), based on serum progesterone level at presentation (step 1) and the serum human chorionic gonadotropin (hCG) ratio, defined as the ratio of hCG at 48 h to hCG at presentation (step 2). This was a cohort study of 2753 PUL (301 EP), involving a secondary analysis of prospectively and consecutively collected PUL data from two London-based university teaching hospitals. Using a chronological split we used 1449 PUL for development and 1304 for validation. We aimed to assign PUL as low risk with high confidence (high negative predictive value (NPV)) while classifying most EP as high risk (high sensitivity). The first triage step assigned PUL as low risk using a threshold of serum progesterone at presentation. The remaining PUL were triaged using a novel logistic regression risk model based on hCG ratio and initial serum progesterone (second step), defining low risk as an estimated EP risk of < 5%. On validation, initial serum progesterone ≤ 2 nmol/L (step 1) classified 16.1% PUL as low risk. Second-step classification with the risk model selected an additional 46.0% of all PUL as low risk. Overall, the two-step protocol classified 62.1% of PUL as low risk, with an NPV of 98.6% and a sensitivity of 92.0%. When the risk model was used in isolation (i.e. without the first step), 60.5% of PUL were classified as low risk with 99.1% NPV and 94.9% sensitivity. PUL can be classified efficiently into being either high or low risk for complications using a two-step protocol involving initial progesterone and hCG levels and the hCG ratio. Copyright © 2016 ISUOG. Published by John Wiley & Sons Ltd.
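
    The two-step logic can be sketched as follows. The progesterone cut-off (≤ 2 nmol/L) and the 5% estimated-risk cut-off come from the abstract; the logistic coefficients are hypothetical placeholders, not the published model's values:

```python
import math

# Hypothetical coefficients: rising hCG (high 48h/0h ratio) and higher
# progesterone both push the estimated ectopic-pregnancy risk down.
B0, B_RATIO, B_PROG = 1.0, -2.0, -0.05

def ep_risk(hcg_ratio, progesterone):
    """Estimated EP risk from the hCG ratio and initial progesterone
    via a logistic model (illustrative coefficients only)."""
    z = B0 + B_RATIO * hcg_ratio + B_PROG * progesterone
    return 1.0 / (1.0 + math.exp(-z))

def triage(progesterone, hcg_0h, hcg_48h,
           prog_cutoff=2.0, risk_cutoff=0.05):
    # Step 1: very low initial progesterone -> low risk immediately
    if progesterone <= prog_cutoff:
        return "low risk"
    # Step 2: risk-model estimate from the 48h/0h hCG ratio
    ratio = hcg_48h / hcg_0h
    return "low risk" if ep_risk(ratio, progesterone) < risk_cutoff else "high risk"
```

    Under these placeholder coefficients, a doubling hCG classifies as low risk while a plateauing hCG with modest progesterone classifies as high risk, which matches the protocol's intent.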

  19. Hobbies with solvent exposure and risk of non-Hodgkin lymphoma.

    PubMed

    Colt, Joanne S; Hartge, Patricia; Davis, Scott; Cerhan, James R; Cozen, Wendy; Severson, Richard K

    2007-05-01

    Occupational exposure to solvents has been reported to increase non-Hodgkin lymphoma (NHL) risk in some, but not all, studies. In a population-based case-control study, we examined whether participation in selected hobbies involving solvent exposure increases NHL risk. We identified NHL cases diagnosed at ages 20-74 years between 1998 and 2000 in Iowa or metropolitan Los Angeles, Detroit, and Seattle. Controls were selected using random digit dialing or Medicare files. Computer-assisted personal interviews (551 cases, 462 controls) elicited data on model building, painting/silkscreening/artwork, furniture refinishing, and woodworking/home carpentry. Hobby participation (68% of cases, 69% of controls) was not associated with NHL risk (OR = 0.9, 95% CI = 0.7-1.2). Compared to people with none of the hobbies evaluated, those who built models had significantly lower risk (OR = 0.7, CI = 0.5-1.0), but risk did not vary with the number of years or lifetime hours. Risk estimates for the other hobbies were generally less than one, but the associations were not significant and there were no notable patterns with duration of exposure. Use of oil-based, acrylic, or water-based paints; paint strippers; polyurethane; or varnishes was not associated with NHL risk. We conclude that participation in hobbies involving exposure to organic solvents is unlikely to increase NHL risk.

  20. Testing a hydraulic trait based model of stomatal control: results from a controlled drought experiment on aspen (Populus tremuloides, Michx.) and ponderosa pine (Pinus ponderosa, Douglas)

    NASA Astrophysics Data System (ADS)

    Love, D. M.; Venturas, M.; Sperry, J.; Wang, Y.; Anderegg, W.

    2017-12-01

    Modeling approaches for tree stomatal control often rely on empirical fitting to provide accurate estimates of whole tree transpiration (E) and assimilation (A), which are limited in their predictive power by the data envelope used to calibrate model parameters. Optimization-based models hold promise as a means to predict stomatal behavior under novel climate conditions. We designed an experiment to test a hydraulic trait based optimization model, which predicts stomatal conductance from a gain/risk approach. Optimal stomatal conductance is expected to maximize the potential carbon gain by photosynthesis and minimize the risk to hydraulic transport imposed by cavitation. The modeled risk to the hydraulic network is assessed from cavitation vulnerability curves, a commonly measured physiological trait in woody plant species. Over a growing season, garden-grown plots of aspen (Populus tremuloides, Michx.) and ponderosa pine (Pinus ponderosa, Douglas) were subjected to three distinct drought treatments (moderate, severe, severe with rehydration) relative to a control plot to test model predictions. Model outputs of predicted E, A, and xylem pressure can be directly compared to both continuous data (whole tree sapflux, soil moisture) and point measurements (leaf level E, A, xylem pressure). The model also predicts levels of whole tree hydraulic impairment expected to increase mortality risk. This threshold is used to estimate survivorship in the drought treatment plots. The model can be run at two scales, either entirely from climate (meteorological inputs, irrigation) or using the physiological measurements as a starting point. These data will be used to study model performance and utility, and aid in developing the model for larger scale applications.

  1. Methodology for setting risk-based concentrations of contaminants in soil and groundwater and application to a model contaminated site.

    PubMed

    Fujinaga, Aiichiro; Uchiyama, Iwao; Morisawa, Shinsuke; Yoneda, Minoru; Sasamoto, Yuzuru

    2012-01-01

    In Japan, environmental standards for contaminants in groundwater and in leachate from soil are set with the assumption that they are used for drinking water over a human lifetime. Where there is neither a well nor groundwater used for drinking, the standard is thus too severe. Therefore, remediation based on these standards incurs excessive effort and cost. In contrast, the environmental-assessment procedure used in the United States and the Netherlands considers the site conditions (land use, existing wells, etc.); however, a risk assessment is required for each site. Therefore, this study proposes a new framework for judging contamination in Japan by considering the merits of the environmental standards used and a method for risk assessment. The framework involves setting risk-based concentrations that are attainable remediation goals for contaminants in soil and groundwater. The framework was then applied to a model contaminated site for risk management, and the results are discussed regarding the effectiveness and applicability of the new methodology. © 2011 Society for Risk Analysis.

  2. Evaluation of markers and risk prediction models: Overview of relationships between NRI and decision-analytic measures

    PubMed Central

    Calster, Ben Van; Vickers, Andrew J; Pencina, Michael J; Baker, Stuart G; Timmerman, Dirk; Steyerberg, Ewout W

    2014-01-01

    BACKGROUND For the evaluation and comparison of markers and risk prediction models, various novel measures have recently been introduced as alternatives to the commonly used difference in the area under the ROC curve (ΔAUC). The Net Reclassification Improvement (NRI) is increasingly popular to compare predictions with one or more risk thresholds, but decision-analytic approaches have also been proposed. OBJECTIVE We aimed to identify the mathematical relationships between novel performance measures for the situation that a single risk threshold T is used to classify patients as having the outcome or not. METHODS We considered the NRI and three utility-based measures that take misclassification costs into account: difference in Net Benefit (ΔNB), difference in Relative Utility (ΔRU), and weighted NRI (wNRI). We illustrate the behavior of these measures in 1938 women suspected of ovarian cancer (prevalence 28%). RESULTS The three utility-based measures appear to be transformations of each other, and hence always lead to consistent conclusions. On the other hand, conclusions may differ when using the standard NRI, depending on the adopted risk threshold T, prevalence P and the obtained differences in sensitivity and specificity of the two models that are compared. In the case study, adding the CA-125 tumor marker to a baseline set of covariates yielded a negative NRI yet a positive value for the utility-based measures. CONCLUSIONS The decision-analytic measures are each appropriate to indicate the clinical usefulness of an added marker or compare prediction models, since these measures each reflect misclassification costs. This is of practical importance as these measures may thus adjust conclusions based on purely statistical measures. A range of risk thresholds should be considered in applying these measures. PMID:23313931
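
    The two families of measures compared here follow standard published definitions and can be computed directly; a minimal sketch (generic formulas, not the authors' code):

```python
def net_benefit(y, p, t):
    """Net Benefit at risk threshold t: true-positive rate minus the
    false-positive rate weighted by the odds of the threshold,
    t/(1-t), as in decision curve analysis."""
    n = len(y)
    tp = sum(1 for yi, pi in zip(y, p) if yi == 1 and pi >= t)
    fp = sum(1 for yi, pi in zip(y, p) if yi == 0 and pi >= t)
    return tp / n - (fp / n) * (t / (1.0 - t))

def nri(y, p_old, p_new, t):
    """Two-category NRI at a single threshold t: net proportion of
    events reclassified upward plus net proportion of non-events
    reclassified downward."""
    events = [(a >= t, b >= t) for yi, a, b in zip(y, p_old, p_new) if yi == 1]
    nonev  = [(a >= t, b >= t) for yi, a, b in zip(y, p_old, p_new) if yi == 0]
    up_e   = sum(1 for old, new in events if new and not old)
    down_e = sum(1 for old, new in events if old and not new)
    up_n   = sum(1 for old, new in nonev if new and not old)
    down_n = sum(1 for old, new in nonev if old and not new)
    return (up_e - down_e) / len(events) + (down_n - up_n) / len(nonev)
```

    Because Net Benefit weights false positives by t/(1-t) while the NRI weights events and non-events by their class sizes, the two can rank a pair of models differently, which is the paper's central observation.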

  3. A Review on Methods of Risk Adjustment and their Use in Integrated Healthcare Systems

    PubMed Central

    Juhnke, Christin; Bethge, Susanne

    2016-01-01

    Introduction: Effective risk adjustment is given more and more weight against the background of competitive health insurance systems and vital healthcare systems. The objective of this review was to obtain an overview of existing models of risk adjustment as well as of the crucial weights used in risk adjustment. Moreover, the predictive performance of selected methods in international healthcare systems should be analysed. Theory and methods: A comprehensive, systematic literature review on methods of risk adjustment was conducted in terms of an encompassing, interdisciplinary examination of the related disciplines. Results: In general, several distinctions can be made: in terms of risk horizons, in terms of risk factors, or in terms of the combination of indicators included. Within these, a further differentiation by three levels seems reasonable: methods based on mortality risks, methods based on morbidity risks, and those based on information on (self-reported) health status. Conclusions and discussion: The final examination of different methods of risk adjustment showed that the methodology used to adjust risks varies. The models differ greatly in terms of their included morbidity indicators. The findings of this review can be used in the evaluation of integrated healthcare delivery systems and can be integrated into quality- and patient-oriented reimbursement of care providers in the design of healthcare contracts. PMID:28316544

  4. Classification models for subthreshold generalized anxiety disorder in a college population: Implications for prevention.

    PubMed

    Kanuri, Nitya; Taylor, C Barr; Cohen, Jeffrey M; Newman, Michelle G

    2015-08-01

    Generalized anxiety disorder (GAD) is one of the most common psychiatric disorders on college campuses and often goes unidentified and untreated. We propose a combined prevention and treatment model composed of evidence-based self-help (SH) and guided self-help (GSH) interventions to address this issue. To inform the development of this stepped-care model of intervention delivery, we evaluated results from a population-based anxiety screening of college students. A primary model was developed to illustrate how increasing levels of symptomatology could be linked to prevention/treatment interventions. We used screening data to propose four models of classification for populations at risk for GAD. We then explored the cost considerations of implementing this prevention/treatment stepped-care model. Among 2489 college students (mean age 19.1 years; 67% female), 8.0% (198/2489) met DSM-5 clinical criteria for GAD, in line with expected clinical rates for this population. At-risk Model 1 (subthreshold, but considerable symptoms of anxiety) identified 13.7% of students as potentially at risk for developing GAD. Model 2 (subthreshold, but high GAD symptom severity) identified 13.7%. Model 3 (subthreshold, but symptoms were distressing) identified 12.3%. Model 4 (subthreshold, but considerable worry) identified 17.4%. There was little overlap among these models, with a combined at-risk population of 39.4%. The efficiency of these models in identifying those truly at risk and the cost and efficacy of preventive interventions will determine if prevention is viable. Using Model 1 data and conservative cost estimates, we found that a preventive intervention effect size of even 0.2 could make a prevention/treatment model more cost-effective than existing models of "wait-and-treat." Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Multifractal Value at Risk model

    NASA Astrophysics Data System (ADS)

    Lee, Hojin; Song, Jae Wook; Chang, Woojin

    2016-06-01

    In this paper, a new Value at Risk (VaR) model is proposed and investigated. We consider the multifractal property of financial time series and develop a multifractal Value at Risk (MFVaR) model. The MFVaR introduced in this paper is analytically tractable and not based on simulation. An empirical study showed that MFVaR provides more stable and accurate forecasting performance in volatile financial markets where large losses can be incurred. This implies that our multifractal VaR works well for the risk measurement of extreme credit events.
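
    The multifractal construction itself is not reproduced here, but the conventional baseline it is benchmarked against, historical-simulation VaR, is a few lines. A minimal sketch:

```python
import math

def historical_var(returns, alpha=0.99):
    """Historical-simulation Value at Risk: the alpha-quantile of the
    empirical loss distribution (loss = -return), taken as the
    ceil(alpha*n)-th smallest loss."""
    losses = sorted(-r for r in returns)
    k = math.ceil(alpha * len(losses)) - 1   # order-statistic index
    return losses[k]
```

    Simulation-free models such as the MFVaR described above aim to replace this purely empirical quantile with an analytic one that adapts better in volatile regimes.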

  6. Economic evaluation of strategies for restarting anticoagulation therapy after a first event of unprovoked venous thromboembolism.

    PubMed

    Monahan, M; Ensor, J; Moore, D; Fitzmaurice, D; Jowett, S

    2017-08-01

    Essentials Correct duration of treatment after a first unprovoked venous thromboembolism (VTE) is unknown. We assessed when restarting anticoagulation was worthwhile based on patient risk of recurrent VTE. When the risk over a one-year period is 17.5%, restarting is cost-effective. However, sensitivity analyses indicate large uncertainty in the estimates. Background Following at least 3 months of anticoagulation therapy after a first unprovoked venous thromboembolism (VTE), there is uncertainty about the duration of therapy. Further anticoagulation therapy reduces the risk of having a potentially fatal recurrent VTE but at the expense of a higher risk of bleeding, which can also be fatal. Objective An economic evaluation sought to estimate the long-term cost-effectiveness of using a decision rule for restarting anticoagulation therapy vs. no extension of therapy in patients based on their risk of a further unprovoked VTE. Methods A Markov patient-level simulation model was developed, which adopted a lifetime time horizon with monthly time cycles and was from a UK National Health Service (NHS)/Personal Social Services (PSS) perspective. Results Base-case model results suggest that treating patients with a predicted 1 year VTE risk of 17.5% or higher may be cost-effective if decision makers are willing to pay up to £20 000 per quality adjusted life year (QALY) gained. However, probabilistic sensitivity analysis shows that the model was highly sensitive to overall parameter uncertainty and caution is warranted in selecting the optimal decision rule on cost-effectiveness grounds. Univariate sensitivity analyses indicate variables such as anticoagulation therapy disutility and mortality risks were very influential in driving model results. Conclusion This represents the first economic model to consider the use of a decision rule for restarting therapy for unprovoked VTE patients. 
Better data are required to predict long-term bleeding risks during therapy in this patient group. © 2017 International Society on Thrombosis and Haemostasis.
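
    The cost-effectiveness judgement at the £20,000-per-QALY willingness-to-pay threshold can be sketched with net monetary benefit; the cost and QALY figures below are illustrative placeholders, not outputs of the study's Markov model:

```python
def net_monetary_benefit(cost, qalys, wtp=20000.0):
    """NMB = QALYs x willingness-to-pay - cost; at a given threshold,
    the strategy with the higher NMB is preferred."""
    return qalys * wtp - cost

def prefer_restart(cost_restart, qaly_restart,
                   cost_stop, qaly_stop, wtp=20000.0):
    """True if restarting anticoagulation has the higher net monetary
    benefit than stopping at the given threshold."""
    return net_monetary_benefit(cost_restart, qaly_restart, wtp) > \
           net_monetary_benefit(cost_stop, qaly_stop, wtp)
```

    For example, a strategy costing £2000 more but yielding 0.25 extra QALYs (ICER £8000/QALY) is preferred at £20,000/QALY but rejected at a £5,000/QALY threshold, which is why the choice of decision rule is so sensitive to the assumed threshold.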

  7. Risk Assessment of Alzheimer's Disease using the Information Diffusion Model from Structural Magnetic Resonance Imaging.

    PubMed

    Beheshti, Iman; Olya, Hossain G T; Demirel, Hasan

    2016-04-05

    Recently, automatic risk assessment methods have become a target for the detection of Alzheimer's disease (AD) risk. This study aims to develop an automatic computer-aided AD diagnosis technique for risk assessment of AD using information diffusion theory. Information diffusion is a set-valued fuzzy mathematics method used for the risk assessment of natural phenomena, which accommodates fuzziness (uncertainty) and incompleteness. Data were obtained from voxel-based morphometry analysis of structural magnetic resonance imaging. The information diffusion model results revealed that the risk of AD increases with a reduction of the normalized gray matter ratio (p > 0.5, normalized gray matter ratio <40%). The information diffusion model results were evaluated by calculating the correlation with two traditional risk assessments of AD, the Mini-Mental State Examination and the Clinical Dementia Rating. The correlation results revealed that the information diffusion model findings were in line with Mini-Mental State Examination and Clinical Dementia Rating results. Application of the information diffusion model contributes to the computerization of risk assessment of AD, which has practical implications for the early detection of AD.

  8. Injury risk functions based on population-based finite element model responses: Application to femurs under dynamic three-point bending.

    PubMed

    Park, Gwansik; Forman, Jason; Kim, Taewung; Panzer, Matthew B; Crandall, Jeff R

    2018-02-28

    The goal of this study was to explore a framework for developing injury risk functions (IRFs) in a bottom-up approach based on responses of parametrically variable finite element (FE) models representing exemplar populations. First, a parametric femur modeling tool was developed and validated using a subject-specific (SS)-FE modeling approach. Second, principal component analysis and regression were used to identify parametric geometric descriptors of the human femur and the distribution of those factors for 3 target occupant sizes (5th, 50th, and 95th percentile males). Third, distributions of material parameters of cortical bone were obtained from the literature for 3 target occupant ages (25, 50, and 75 years) using regression analysis. A Monte Carlo method was then implemented to generate populations of FE models of the femur for target occupants, using a parametric femur modeling tool. Simulations were conducted with each of these models under 3-point dynamic bending. Finally, model-based IRFs were developed using logistic regression analysis, based on the moment at fracture observed in the FE simulation. In total, 100 femur FE models incorporating the variation in the population of interest were generated, and 500,000 moments at fracture were observed (applying 5,000 ultimate strains for each synthesized 100 femur FE models) for each target occupant characteristics. Using the proposed framework on this study, the model-based IRFs for 3 target male occupant sizes (5th, 50th, and 95th percentiles) and ages (25, 50, and 75 years) were developed. The model-based IRF was located in the 95% confidence interval of the test-based IRF for the range of 15 to 70% injury risks. The 95% confidence interval of the developed IRF was almost in line with the mean curve due to a large number of data points. 
The framework proposed in this study would be beneficial for developing the IRFs in a bottom-up manner, whose range of variabilities is informed by the population-based FE model responses. Specifically, this method mitigates the uncertainties in applying empirical scaling and may improve IRF fidelity when a limited number of experimental specimens are available.
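
The final step of the framework, fitting a logistic IRF to simulated fracture moments, can be sketched as follows. This is a minimal illustration with synthetic data (fracture moments drawn from an assumed Normal(400, 60) N·m distribution), not the study's FE results; the grid of probe moments and the gradient-descent fit are my own simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the FE-simulated fracture moments
# (the paper observes 500,000 of these per occupant characteristic).
fracture_moments = rng.normal(400.0, 60.0, size=5000)

# Expand each fracture moment into binary outcomes on a grid of applied
# moments: y = 1 (fracture) once the applied moment reaches it.
probes = np.linspace(200.0, 600.0, 41)
X = np.repeat(probes, fracture_moments.size)
y = (X >= np.tile(fracture_moments, probes.size)).astype(float)

# Fit the logistic IRF  P(fracture | m) = 1 / (1 + exp(-(a + b*m_std)))
# by plain gradient descent on the log-loss (standardized moments).
mu, sd = X.mean(), X.std()
Xs = (X - mu) / sd
a = b = 0.0
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-(a + b * Xs)))
    a -= 2.0 * np.mean(p - y)
    b -= 2.0 * np.mean((p - y) * Xs)

def irf(m):
    """Fracture probability at applied moment m (N·m)."""
    return 1.0 / (1.0 + np.exp(-(a + b * (m - mu) / sd)))

print(round(irf(250.0), 3), round(irf(400.0), 3), round(irf(550.0), 3))
```

With more FE samples the fitted curve tightens, which is why the paper's 95% confidence band nearly coincides with the mean IRF.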

  9. Risk adjustment alternatives in paying for behavioral health care under Medicaid.

    PubMed Central

    Ettner, S L; Frank, R G; McGuire, T G; Hermann, R C

    2001-01-01

    OBJECTIVE: To compare the performance of various risk adjustment models in behavioral health applications such as setting mental health and substance abuse (MH/SA) capitation payments or overall capitation payments for populations including MH/SA users. DATA SOURCES/STUDY DESIGN: The 1991-93 administrative data from the Michigan Medicaid program were used. We compared mean absolute prediction error for several risk adjustment models and simulated the profits and losses that behavioral health care carve outs and integrated health plans would experience under risk adjustment if they enrolled beneficiaries with a history of MH/SA problems. Models included basic demographic adjustment, Adjusted Diagnostic Groups, Hierarchical Condition Categories, and specifications designed for behavioral health. PRINCIPAL FINDINGS: Differences in predictive ability among risk adjustment models were small and generally insignificant. Specifications based on relatively few MH/SA diagnostic categories did as well as or better than models controlling for additional variables such as medical diagnoses at predicting MH/SA expenditures among adults. Simulation analyses revealed that among both adults and minors considerable scope remained for behavioral health care carve outs to make profits or losses after risk adjustment based on differential enrollment of severely ill patients. Similarly, integrated health plans have strong financial incentives to avoid MH/SA users even after adjustment. CONCLUSIONS: Current risk adjustment methodologies do not eliminate the financial incentives for integrated health plans and behavioral health care carve-out plans to avoid high-utilizing patients with psychiatric disorders. PMID:11508640

  10. A Model-based Framework for Risk Assessment in Human-Computer Controlled Systems

    NASA Technical Reports Server (NTRS)

    Hatanaka, Iwao

    2000-01-01

The rapid growth of computer technology and innovation has played a significant role in the rise of computer automation of human tasks in modern production systems across all industries. Although the rationale for automation has been to eliminate "human error" or to relieve humans from manual repetitive tasks, various computer-related hazards and accidents have emerged as a direct result of the increased system complexity attributed to computer automation. The risk assessment techniques utilized for electromechanical systems are not suitable for today's software-intensive systems or complex human-computer controlled systems. This thesis proposes a new systemic model-based framework for analyzing risk in safety-critical systems where both computers and humans control safety-critical functions. A new systems accident model is developed based upon modern systems theory and human cognitive processes to better characterize system accidents, the role of human operators, and the influence of software in its direct control of significant system functions. Better risk assessments will then be achievable through the application of this new framework to complex human-computer controlled systems.

  11. Sensitivity Analysis of Median Lifetime on Radiation Risks Estimates for Cancer and Circulatory Disease amongst Never-Smokers

    NASA Technical Reports Server (NTRS)

    Chappell, Lori J.; Cucinotta, Francis A.

    2011-01-01

Radiation risks are estimated in a competing-risks formalism in which age- or time-after-exposure estimates of increased risk for cancer and circulatory diseases are folded with the probability of surviving to a given age. The survival function, also called the life-table, changes with calendar year, gender, smoking status, and other demographic variables. An outstanding problem in risk estimation is the method of risk transfer between the exposed population and a second population for which risks are to be estimated. Approaches used to transfer risks are based on: 1) multiplicative risk transfer models, in which transferred risks are proportional to background disease rates; and 2) additive risk transfer models, in which risks are independent of background rates. In addition, a mixture model is often considered, in which the multiplicative and additive transfer assumptions are given weighted contributions. We studied the influence of the survival probability on the risk of exposure-induced cancer and circulatory disease morbidity and mortality in the multiplicative transfer model and the mixture model. Risks for never-smokers (NS) compared to the average U.S. population are estimated to be reduced by between 30% and 60%, depending on model assumptions. Lung cancer is the major contributor to the reduction for NS, with additional contributions from circulatory diseases and cancers of the stomach, liver, bladder, oral cavity, esophagus, colon, a portion of the solid cancer remainder, and leukemia. Greater improvements in risk estimates for NSs are possible, and would depend on improved understanding of risk transfer models and on elucidating the role of space radiation in the various stages of disease formation (e.g., initiation, promotion, and progression).
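
The two transfer assumptions and their mixture can be sketched in a few lines. The numbers below are illustrative only (not the study's rates); they show mechanically why a multiplicative transfer sharply reduces the excess risk for never-smokers, whose background lung cancer rate is much lower, while an additive transfer leaves it unchanged.

```python
# Hedged sketch of risk transfer between populations (illustrative numbers).
def transferred_excess_rate(err, ear, baseline_rate, w_mult):
    """err: excess relative risk per unit dose from the exposed cohort;
    ear: excess absolute rate per unit dose; baseline_rate: background
    disease rate in the target population; w_mult: mixture weight on the
    multiplicative model (1.0 = pure multiplicative, 0.0 = pure additive)."""
    multiplicative = err * baseline_rate   # scales with background rates
    additive = ear                         # independent of background rates
    return w_mult * multiplicative + (1.0 - w_mult) * additive

# Lung cancer illustration: never-smokers (NS) have a far lower background
# rate, so multiplicative transfer cuts their radiation excess sharply.
avg_pop = transferred_excess_rate(1.5, 0.004, 0.005, 1.0)       # 0.0075
never_smoker = transferred_excess_rate(1.5, 0.004, 0.001, 1.0)  # 0.0015
print(never_smoker / avg_pop)   # 0.2 of the average-population excess
```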

  12. Continuous Learning: Balancing Educational Excellence and Cultural Diversity for At-Risk. A Developing, Generalizing, Working Model.

    ERIC Educational Resources Information Center

    Demery, Marie

    This proactive research and development model presents data of misfortune, reality, and hope for approximately 40 percent of American children labeled as "at-risk." The model was based on the premise that in spite of their past and an environment of failure, these children can learn successfully and continuously through the balancing of…

  13. Using a Systematic Conceptual Model for a Process Evaluation of a Middle School Obesity Risk-Reduction Nutrition Curriculum Intervention: "Choice, Control & Change"

    ERIC Educational Resources Information Center

    Lee, Heewon; Contento, Isobel R.; Koch, Pamela

    2013-01-01

    Objective: To use and review a conceptual model of process evaluation and to examine the implementation of a nutrition education curriculum, "Choice, Control & Change", designed to promote dietary and physical activity behaviors that reduce obesity risk. Design: A process evaluation study based on a systematic conceptual model. Setting: Five…

  14. School Reform for Youth At Risk: Analysis of Six Change Models. Volume I: Summary and Analysis.

    ERIC Educational Resources Information Center

    McCollum, Heather

    This document analyzes six school-reform models for at-risk youth. Part 1 examines three curriculum-based reform programs that explicitly target curriculum and instruction: Reading Recovery; Success for All; and the Academy model. These programs focus on changes in student achievement and work within the structure of existing schools. Part 2…

  15. Predicting neutropenia risk in patients with cancer using electronic data.

    PubMed

    Pawloski, Pamala A; Thomas, Avis J; Kane, Sheryl; Vazquez-Benitez, Gabriela; Shapiro, Gary R; Lyman, Gary H

    2017-04-01

    Clinical guidelines recommending the use of myeloid growth factors are largely based on the prescribed chemotherapy regimen. The guidelines suggest that oncologists consider patient-specific characteristics when prescribing granulocyte-colony stimulating factor (G-CSF) prophylaxis; however, a mechanism to quantify individual patient risk is lacking. Readily available electronic health record (EHR) data can provide patient-specific information needed for individualized neutropenia risk estimation. An evidence-based, individualized neutropenia risk estimation algorithm has been developed. This study evaluated the automated extraction of EHR chemotherapy treatment data and externally validated the neutropenia risk prediction model. A retrospective cohort of adult patients with newly diagnosed breast, colorectal, lung, lymphoid, or ovarian cancer who received the first cycle of a cytotoxic chemotherapy regimen from 2008 to 2013 were recruited from a single cancer clinic. Electronically extracted EHR chemotherapy treatment data were validated by chart review. Neutropenia risk stratification was conducted and risk model performance was assessed using calibration and discrimination. Chemotherapy treatment data electronically extracted from the EHR were verified by chart review. The neutropenia risk prediction tool classified 126 patients (57%) as being low risk for febrile neutropenia, 44 (20%) as intermediate risk, and 51 (23%) as high risk. The model was well calibrated (Hosmer-Lemeshow goodness-of-fit test = 0.24). Discrimination was adequate and slightly less than in the original internal validation (c-statistic 0.75 vs 0.81). Chemotherapy treatment data were electronically extracted from the EHR successfully. The individualized neutropenia risk prediction model performed well in our retrospective external cohort. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. 

  16. Assessing evidence for behaviour change affecting the course of HIV epidemics: a new mathematical modelling approach and application to data from Zimbabwe.

    PubMed

    Hallett, Timothy B; Gregson, Simon; Mugurungi, Owen; Gonese, Elizabeth; Garnett, Geoff P

    2009-06-01

Determining whether interventions to reduce HIV transmission have worked is essential, but complicated by the potential for generalised epidemics to evolve over time without individuals changing risk behaviour. We aimed to develop a method to evaluate evidence for changes in risk behaviour altering the course of an HIV epidemic. We developed a mathematical model of HIV transmission, incorporating the potential for natural changes in the epidemic as it matures and the introduction of antiretroviral treatment, and applied a Bayesian melding framework in which the model and observed trends in prevalence can be compared. We applied the model to Zimbabwe, using HIV prevalence estimates from antenatal clinic surveillance and household-based surveys, and basing model parameters on data from sexual behaviour surveys. There was strong evidence for reductions in risk behaviour curbing HIV transmission. We estimate these changes occurred between 1999 and 2004 and averted 660,000 (95% credible interval: 460,000-860,000) infections by 2008. The model and associated analysis framework provide a robust way to evaluate the evidence for changes in risk behaviour affecting the course of HIV epidemics, avoiding confounding by the natural evolution of HIV epidemics.

  17. Simulating lifetime outcomes associated with complications for people with type 1 diabetes.

    PubMed

    Lung, Tom W C; Clarke, Philip M; Hayes, Alison J; Stevens, Richard J; Farmer, Andrew

    2013-06-01

    The aim of this study was to develop a discrete-time simulation model for people with type 1 diabetes mellitus, to estimate and compare mean life expectancy and quality-adjusted life-years (QALYs) over a lifetime between intensive and conventional blood glucose treatment groups. We synthesized evidence on type 1 diabetes patients using several published sources. The simulation model was based on 13 equations to estimate risks of events and mortality. Cardiovascular disease (CVD) risk was obtained from results of the DCCT (diabetes control and complications trial). Mortality post-CVD event was based on a study using linked administrative data on people with diabetes from Western Australia. Information on incidence of renal disease and the progression to CVD was obtained from studies in Finland and Italy. Lower-extremity amputation (LEA) risk was based on the type 1 diabetes Swedish inpatient registry, and the risk of blindness was obtained from results of a German-based study. Where diabetes-specific data were unavailable, information from other populations was used. We examine the degree and source of parameter uncertainty and illustrate an application of the model in estimating lifetime outcomes of using intensive and conventional treatments for blood glucose control. From 15 years of age, male and female patients had an estimated life expectancy of 47.2 (95 % CI 35.2-59.2) and 52.7 (95 % CI 41.7-63.6) years in the intensive treatment group. The model produced estimates of the lifetime benefits of intensive treatment for blood glucose from the DCCT of 4.0 (95 % CI 1.2-6.8) QALYs for women and 4.6 (95 % CI 2.7-6.9) QALYs for men. Absolute risk per 1,000 person-years for fatal CVD events was simulated to be 1.37 and 2.51 in intensive and conventional treatment groups, respectively. 
The model incorporates diabetic complications risk data from a type 1 diabetes population and synthesizes other type 1-specific data to estimate long-term outcomes of CVD, end-stage renal disease, LEA and risk of blindness, along with life expectancy and QALYs. External validation was carried out using life expectancy and absolute risk for fatal CVD events. Because of the flexible and transparent nature of the model, it has many potential future applications.
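
The discrete-time simulation idea behind this model class can be sketched with a deliberately reduced example: an annual-cycle cohort simulation in which each cycle applies a death risk and life expectancy is the mean number of cycles survived. The constant per-cycle risk q below is a hypothetical stand-in for the paper's 13 risk equations, which make the risk depend on age, sex, complication history, and treatment arm.

```python
import numpy as np

def simulate_life_years(q, horizon=120, n=50000, seed=0):
    """Mean number of annual cycles survived when each cycle carries
    death probability q (toy stand-in for the model's risk equations)."""
    rng = np.random.default_rng(seed)
    alive = np.ones(n, dtype=bool)
    total = 0.0
    for _ in range(horizon):
        alive &= rng.random(n) > q   # each survivor faces risk q this year
        total += alive.sum()
    return total / n

# Sanity check against the closed form for constant q:
# E[years] = sum_{t=1..H} (1-q)^t = (1-q) * (1 - (1-q)^H) / q
q = 0.02
expected = (1 - q) * (1 - (1 - q) ** 120) / q
print(round(simulate_life_years(q), 2), round(expected, 2))
```

Treatment comparisons then amount to running the same simulation with arm-specific risk inputs and differencing the resulting life-year (or QALY) estimates.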

  18. A prediction model for colon cancer surveillance data.

    PubMed

    Good, Norm M; Suresh, Krithika; Young, Graeme P; Lockett, Trevor J; Macrae, Finlay A; Taylor, Jeremy M G

    2015-08-15

Dynamic prediction models make use of patient-specific longitudinal data to update individualized survival probability predictions based on current and past information. Colonoscopy (COL) and fecal occult blood test (FOBT) results were collected from two Australian surveillance studies on individuals characterized as high-risk based on a personal or family history of colorectal cancer. Motivated by a Poisson process, this paper proposes a generalized nonlinear model with a complementary log-log link as a dynamic prediction tool that produces individualized probabilities for the risk of developing advanced adenoma or colorectal cancer (AAC). This model allows predicted risk to depend on a patient's baseline characteristics and time-dependent covariates. Information on the dates and results of COLs and FOBTs was incorporated using time-dependent covariates that contributed to patient risk of AAC for a specified period following the test result. These covariates serve to update a person's risk as additional COL and FOBT test information becomes available. Model selection was conducted systematically through comparison of Akaike information criterion values. Goodness-of-fit was assessed with the use of calibration plots to compare the predicted probability of event occurrence with the proportion of events observed. Abnormal COL results were found to significantly increase risk of AAC for 1 year following the test. Positive FOBTs were found to significantly increase the risk of AAC for 3 months following the result. The covariates that incorporated the updated test results were of greater significance and had a larger effect on risk than the baseline variables. Copyright © 2015 John Wiley & Sons, Ltd.
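
The complementary log-log link is the natural choice here because it recovers the Poisson-process event probability exactly: with eta = log(rate x interval length), p = 1 - exp(-exp(eta)) is the probability of at least one event in the interval. The coefficients below are hypothetical, chosen only to illustrate a time-dependent covariate that is "switched on" for 12 months after an abnormal colonoscopy.

```python
import math

def cloglog_prob(eta):
    """Inverse complementary log-log link: p = 1 - exp(-exp(eta))."""
    return 1.0 - math.exp(-math.exp(eta))

# Hypothetical coefficients (not the paper's fitted values).
eta_base = -4.0            # baseline linear predictor for the interval
beta_abnormal_col = 1.2    # active for 12 months after an abnormal COL
for months_since_col in (6, 18):
    eta = eta_base + (beta_abnormal_col if months_since_col <= 12 else 0.0)
    print(months_since_col, round(cloglog_prob(eta), 4))
```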

  19. Stochastic multi-objective model for optimal energy exchange optimization of networked microgrids with presence of renewable generation under risk-based strategies.

    PubMed

    Gazijahani, Farhad Samadi; Ravadanegh, Sajad Najafi; Salehi, Javad

    2018-02-01

The inherent volatility and unpredictable nature of renewable generation and load demand pose considerable challenges for energy exchange optimization of microgrids (MG). To address these challenges, this paper proposes a new risk-based multi-objective energy exchange optimization for networked MGs from economic and reliability standpoints under load consumption and renewable power generation uncertainties. In doing so, three risk-based strategies are distinguished using the conditional value at risk (CVaR) approach. The proposed model is specified with two distinct objective functions. The first function minimizes the operation and maintenance costs, the cost of power transactions between the upstream network and MGs, and the cost of power losses, whereas the second function minimizes the energy not supplied (ENS) value. Furthermore, a stochastic scenario-based approach is incorporated to handle the uncertainty, and the Kantorovich distance scenario reduction method is implemented to reduce the computational burden. Finally, the non-dominated sorting genetic algorithm (NSGA-II) is applied to minimize both objective functions simultaneously, and the best solution is extracted by the fuzzy satisfying method with respect to the risk-based strategies. To demonstrate the performance of the proposed model, it is applied to the modified IEEE 33-bus distribution system; the results show that the presented approach is an efficient tool for optimal energy exchange among MGs. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
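
The CVaR risk measure at the core of the three strategies has a simple empirical form: it is the mean cost over the worst (1 - alpha) tail of the scenario cost distribution. A minimal sketch (with made-up scenario costs, not the paper's system data):

```python
import numpy as np

def cvar(costs, alpha=0.95):
    """Empirical conditional value at risk: mean cost over the worst
    (1 - alpha) tail of equally likely scenario costs."""
    costs = np.asarray(costs, dtype=float)
    var = np.quantile(costs, alpha)        # value at risk (tail threshold)
    return costs[costs >= var].mean()

# 100 equally likely operating-cost scenarios (illustrative):
scenarios = np.arange(1.0, 101.0)
print(scenarios.mean(), cvar(scenarios, 0.95))   # expected cost vs tail cost
```

A more risk-averse strategy puts more weight on this tail statistic, trading a higher expected cost for protection against the worst scenarios.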

  20. AHP for Risk Management Based on Expected Utility Theory

    NASA Astrophysics Data System (ADS)

    Azuma, Rumiko; Miyagi, Hayao

This paper presents a model of decision-making that incorporates risk assessment. The conventional evaluation in AHP can be considered a kind of utility. When dealing with risk, however, it is necessary to consider the probability of damage. To bring risk into the decision-making problem, we construct an AHP based on expected utility. Risk is treated as an element related to a criterion rather than as a criterion itself. The expected utility integrates both aspects, treating satisfaction as positive utility and damage from risk as negative utility. Evaluation in AHP is then carried out using the expected utility.
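
One plausible reading of this construction, sketched with entirely hypothetical numbers and a simplified aggregation (the paper's exact integration may differ), is to score each alternative by its AHP-weighted satisfaction minus its probability-weighted loss:

```python
# Hedged sketch: expected utility = weighted satisfaction - prob * loss.
def expected_utility(weights, scores, risk_prob, risk_loss):
    """weights: AHP criterion weights (sum to 1); scores: satisfaction
    per criterion; risk_prob/risk_loss: probability and magnitude of
    damage (negative utility) attached to the alternative."""
    satisfaction = sum(w * s for w, s in zip(weights, scores))
    return satisfaction - risk_prob * risk_loss

weights = [0.5, 0.3, 0.2]   # illustrative AHP criterion weights
alt_a = expected_utility(weights, [0.9, 0.6, 0.8], risk_prob=0.30, risk_loss=1.0)
alt_b = expected_utility(weights, [0.7, 0.7, 0.6], risk_prob=0.05, risk_loss=1.0)
print(alt_a, alt_b)
```

Here alternative A scores higher on raw satisfaction, but its larger damage probability lets the lower-risk alternative B win on expected utility.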

  1. Evaluation of a model of violence risk assessment among forensic psychiatric patients.

    PubMed

    Douglas, Kevin S; Ogloff, James R P; Hart, Stephen D

    2003-10-01

    This study tested the interrater reliability and criterion-related validity of structured violence risk judgments made by using one application of the structured professional judgment model of violence risk assessment, the HCR-20 violence risk assessment scheme, which assesses 20 key risk factors in three domains: historical, clinical, and risk management. The HCR-20 was completed for a sample of 100 forensic psychiatric patients who had been found not guilty by reason of a mental disorder and were subsequently released to the community. Violence in the community was determined from multiple file-based sources. Interrater reliability of structured final risk judgments of low, moderate, or high violence risk made on the basis of the structured professional judgment model was acceptable (weighted kappa=.61). Structured final risk judgments were significantly predictive of postrelease community violence, yielding moderate to large effect sizes. Event history analyses showed that final risk judgments made with the structured professional judgment model added incremental validity to the HCR-20 used in an actuarial (numerical) sense. The findings support the structured professional judgment model of risk assessment as well as the HCR-20 specifically and suggest that clinical judgment, if made within a structured context, can contribute in meaningful ways to the assessment of violence risk.

  2. Human health risk assessment: models for predicting the effective exposure duration of on-site receptors exposed to contaminated groundwater.

    PubMed

    Baciocchi, Renato; Berardi, Simona; Verginelli, Iason

    2010-09-15

Clean-up of contaminated sites is usually based on a risk-based approach for the definition of the remediation goals, which relies on the well-known ASTM-RBCA standard procedure. In this procedure, migration of contaminants is described through simple analytical models and the source contaminants' concentration is assumed to be constant throughout the entire exposure period, i.e. 25-30 years. The latter assumption may often prove over-protective of human health, leading to unrealistically low remediation goals. The aim of this work is to propose an alternative model that takes source depletion into account, while keeping the original simplicity and analytical form of the ASTM-RBCA approach. The results obtained by the application of this model are compared with those provided by the traditional ASTM-RBCA approach, by a model based on the source depletion algorithm of the RBCA ToolKit software, and by a numerical model, allowing assessment of its suitability for inclusion in risk analysis procedures. The results discussed in this work are limited to on-site exposure to contaminated water by ingestion, but the proposed approach can be extended to other exposure pathways. Copyright 2010 Elsevier B.V. All rights reserved.
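
The over-protectiveness argument can be made concrete with a toy calculation. Assuming (hypothetically; this is not the paper's algorithm) first-order source depletion C(t) = C0 * exp(-k t), the mean concentration driving dose over a 25-year exposure is only a fraction of the constant-source value C0:

```python
import math

# Illustrative values, not site data: initial concentration C0 (arbitrary
# units), depletion rate k (1/yr), exposure duration T (yr).
C0, k, T = 1.0, 0.2, 25.0

# Time-averaged concentration of a first-order depleting source over [0, T]:
# (1/T) * integral of C0*exp(-k*t) dt = C0 * (1 - exp(-k*T)) / (k*T)
mean_depleting = C0 * (1 - math.exp(-k * T)) / (k * T)
print(round(mean_depleting / C0, 3))   # fraction of the constant-source dose
```

Under these assumed numbers the constant-source dose is about five times the depleting-source dose, which is why remediation goals derived from it can be unrealistically low.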

  3. Demonstration of the Web-based Interspecies Correlation Estimation (Web-ICE) modeling application

    EPA Science Inventory

    The Web-based Interspecies Correlation Estimation (Web-ICE) modeling application is available to the risk assessment community through a user-friendly internet platform (http://epa.gov/ceampubl/fchain/webice/). ICE models are log-linear least square regressions that predict acute...

  4. MeProRisk - a Joint Venture for Minimizing Risk in Geothermal Reservoir Development

    NASA Astrophysics Data System (ADS)

    Clauser, C.; Marquart, G.

    2009-12-01

Exploration and development of geothermal reservoirs for the generation of electric energy involves high engineering and economic risks due to the need for 3-D geophysical surface surveys and deep boreholes. The MeProRisk project provides a strategy guideline for reducing these risks by combining cross-disciplinary information from different specialists: scientists from three German universities and two private companies contribute new methods in seismic modeling and interpretation, numerical reservoir simulation, estimation of petrophysical parameters, and 3-D visualization. The approach chosen in MeProRisk treats prospecting and developing geothermal reservoirs as an iterative process. A first conceptual model for fluid flow and heat transport simulation can be developed based on the limited initial information available on geology and rock properties. In the next step, additional data are incorporated based on (a) new seismic interpretation methods designed for delineating fracture systems, (b) statistical studies on large numbers of rock samples for estimating reliable rock parameters, and (c) in situ estimates of the hydraulic conductivity tensor. This results in a continuous refinement of the reservoir model, where inverse modelling of fluid flow and heat transport allows inferring the uncertainty and resolution of the model at each iteration step. This finally yields a calibrated reservoir model which may be used to direct further exploration by optimizing additional borehole locations, estimate the uncertainty of key operational and economic parameters, and optimize the long-term operation of a geothermal reservoir.

  5. Seismic hazard, risk, and design for South America

    USGS Publications Warehouse

    Petersen, Mark D.; Harmsen, Stephen; Jaiswal, Kishor; Rukstales, Kenneth S.; Luco, Nicolas; Haller, Kathleen; Mueller, Charles; Shumway, Allison

    2018-01-01

We calculate seismic hazard, risk, and design criteria across South America using the latest data, models, and methods to support public officials, scientists, and engineers in earthquake risk mitigation efforts. Updated continental scale seismic hazard models are based on a new seismicity catalog, seismicity rate models, evaluation of earthquake sizes, fault geometry and rate parameters, and ground‐motion models. Resulting probabilistic seismic hazard maps show peak ground acceleration, modified Mercalli intensity, and spectral accelerations at 0.2 and 1 s periods for 2%, 10%, and 50% probabilities of exceedance in 50 yrs. Ground shaking soil amplification at each site is calculated by considering uniform soil that is applied in modern building codes or by applying site‐specific factors based on VS30 shear‐wave velocities determined through a simple topographic proxy technique. We use these hazard models in conjunction with the Prompt Assessment of Global Earthquakes for Response (PAGER) model to calculate economic and casualty risk. Risk is computed by incorporating the new hazard values amplified by soil, PAGER fragility/vulnerability equations, and LandScan 2012 estimates of population exposure. We also calculate building design values using the guidelines established in the building code provisions. Resulting hazard and associated risk is high along the northern and western coasts of South America, reaching damaging levels of ground shaking in Chile, western Argentina, western Bolivia, Peru, Ecuador, Colombia, Venezuela, and in localized areas distributed across the rest of the continent where historical earthquakes have occurred. Constructing buildings and other structures to account for strong shaking in these regions of high hazard and risk should mitigate losses and reduce casualties from effects of future earthquake strong ground shaking. 
National models should be developed by scientists and engineers in each country using the best available science.

  6. Mode of action based risk assessment of the botanical food-borne alkenylbenzene apiol from parsley using physiologically based kinetic (PBK) modelling and read-across from safrole.

    PubMed

    Alajlouni, Abdalmajeed M; Al Malahmeh, Amer J; Kiwamoto, Reiko; Wesseling, Sebastiaan; Soffers, Ans E M F; Al-Subeihi, Ala A A; Vervoort, Jacques; Rietjens, Ivonne M C M

    2016-03-01

The present study developed physiologically-based kinetic (PBK) models for the alkenylbenzene apiol in order to facilitate risk assessment based on read-across from the related alkenylbenzene safrole. Model predictions indicate that in rat liver the formation of the 1'-sulfoxy metabolite is about 3 times lower for apiol than for safrole. These data support that the lower confidence limit of the benchmark dose resulting in a 10% extra cancer incidence (BMDL10) that would be obtained in a rodent carcinogenicity study with apiol may be 3-fold higher for apiol than for safrole. These results enable a preliminary risk assessment for apiol, for which tumor data are not available, using a BMDL10 value of 3 times the BMDL10 for safrole. Based on an estimated BMDL10 for apiol of 5.7-15.3 mg/kg body wt per day and an estimated daily intake of 4 × 10(-5) mg/kg body wt per day, the margin of exposure (MOE) would amount to 140,000-385,000. This indicates a low priority for risk management. The present study shows how PBK modelling can contribute to the development of alternatives for animal testing, facilitating read-across from compounds for which in vivo toxicity studies on tumor formation are available to compounds for which these data are unavailable. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
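
The margin-of-exposure arithmetic reported in the abstract is simply MOE = BMDL10 / estimated daily intake, using the abstract's own numbers:

```python
# MOE calculation with the values stated in the abstract.
bmdl10_low, bmdl10_high = 5.7, 15.3   # mg/kg bw per day (estimated for apiol)
intake = 4e-5                         # mg/kg bw per day (estimated daily intake)
moe_low = bmdl10_low / intake
moe_high = bmdl10_high / intake
print(round(moe_low), round(moe_high))   # within the reported 140,000-385,000
```

An MOE far above 10,000 is conventionally read as low priority for risk management, which matches the abstract's conclusion.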

  7. Energy risk in the arbitrage pricing model: an empirical and theoretical study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bremer, M.A.

    1986-01-01

This dissertation empirically explores the Arbitrage Pricing Theory in the context of energy risk for securities over the 1960s, 1970s, and early 1980s. Starting from a general multifactor pricing model, the paper develops a two-factor model based on a market-like factor and an energy factor. This model is then tested on portfolios of securities grouped according to industrial classification, using several econometric techniques designed to overcome some of the more serious estimation problems common to these models. The paper concludes that energy risk is priced in the 1970s and possibly even in the 1960s. Energy risk is found to be priced in the sense that investors who hold assets subject to energy risk are paid for this risk. The classic version of the Capital Asset Pricing Model, which posits the market as the single priced factor, is rejected in favor of the Arbitrage Pricing Theory or multi-beta versions of the Capital Asset Pricing Model. The study introduces some original econometric methodology to carry out the empirical tests.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wenzel, Tom

In this report we compare two measures of driver risk: fatality risk per vehicle registration-year, and casualty (fatality plus serious injury) risk per police-reported crash. Our analysis is based on three sets of data from five states (Florida, Illinois, Maryland, Missouri, and Pennsylvania): data on all police-reported crashes involving model year 2000 to 2004 vehicles; 2005 county-level vehicle registration data by vehicle model year and make/model; and odometer readings from vehicle emission inspection and maintenance (I/M) programs conducted in urban areas of four of the five states (Florida does not have an I/M program). The two measures of risk could differ for three reasons: casualty risk is different from fatality risk; risks per vehicle registration-year are different from risks per crash; and risks estimated from national data are different from risks from the five states analyzed here. We also examined the effect of driver behavior, crash location, and general vehicle design on risk, as well as sources of potential bias in using the crash data from five states.

  9. Mapping the Transmission Risk of Zika Virus using Machine Learning Models.

    PubMed

    Jiang, Dong; Hao, Mengmeng; Ding, Fangyu; Fu, Jingying; Li, Meng

    2018-06-19

Zika virus, which has been linked to severe congenital abnormalities, is exacerbating global public health problems with its rapid transnational expansion fueled by increased global travel and trade. Suitability mapping of the transmission risk of Zika virus is essential for drafting public health plans and disease control strategies, which are especially important in areas where medical resources are relatively scarce. Predicting the risk of Zika virus outbreak has been studied in recent years, but the published literature rarely includes multiple model comparisons or predictive uncertainty analysis. Here, three relatively popular machine learning models, backward propagation neural network (BPNN), gradient boosting machine (GBM), and random forest (RF), were adopted to map the probability of Zika epidemic outbreak at the global level, pairing high-dimensional multidisciplinary covariate layers with comprehensive location data on recorded Zika virus infection in humans. The results show that the predicted high-risk areas for Zika transmission are concentrated in four regions: southeastern North America, eastern South America, Central Africa, and Eastern Asia. To evaluate the performance of the machine learning models, 50 modeling processes were conducted based on a training dataset. The BPNN model obtained the highest predictive accuracy with a 10-fold cross-validation area under the curve (AUC) of 0.966 [95% confidence interval (CI) 0.965-0.967], followed by the GBM model (10-fold cross-validation AUC = 0.964 [0.963-0.965]) and the RF model (10-fold cross-validation AUC = 0.963 [0.962-0.964]). Based on the training samples, significant differences (p = 0.0258* and p = 0.0001***, respectively) were observed between the prediction accuracies achieved by the GBM and RF models and those of the BPNN-based model. 
Importantly, the prediction uncertainty introduced by the selection of absence data was quantified and could provide more accurate fundamental and scientific information for further study on disease transmission prediction and risk assessment. Copyright © 2018. Published by Elsevier B.V.
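    The model-comparison step described above (10-fold cross-validated AUC for BPNN, GBM and RF) can be sketched with scikit-learn. The dataset below is synthetic, and `MLPClassifier` stands in for the paper's BPNN, so all numbers are illustrative only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Synthetic presence/absence data standing in for the paper's occurrence
# records and covariate layers (hypothetical, for illustration only).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

models = {
    "BPNN": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    "GBM": GradientBoostingClassifier(random_state=0),
    "RF": RandomForestClassifier(random_state=0),
}

# Compare classifiers by 10-fold cross-validated AUC, as in the abstract.
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
    print(f"{name}: mean 10-fold CV AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```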

  10. Deconstructing Pretest Risk Enrichment to Optimize Prediction of Psychosis in Individuals at Clinical High Risk.

    PubMed

    Fusar-Poli, Paolo; Rutigliano, Grazia; Stahl, Daniel; Schmidt, André; Ramella-Cravaro, Valentina; Shetty, Hitesh; McGuire, Philip

    2016-12-01

    Pretest risk estimation is routinely used in clinical medicine to inform further diagnostic testing in individuals with suspected diseases. To our knowledge, the overall characteristics and specific determinants of pretest risk of psychosis onset in individuals undergoing clinical high risk (CHR) assessment are unknown. To investigate the characteristics and determinants of pretest risk of psychosis onset in individuals undergoing CHR assessment and to develop and externally validate a pretest risk stratification model. Clinical register-based cohort study. Individuals were drawn from electronic, real-world, real-time clinical records relating to routine mental health care of CHR services in South London and the Maudsley National Health Service Trust in London, United Kingdom. The study included nonpsychotic individuals referred on suspicion of psychosis risk and assessed by the Outreach and Support in South London CHR service from 2002 to 2015. Model development and validation were performed with machine-learning methods based on the Least Absolute Shrinkage and Selection Operator for the Cox proportional hazards model. Pretest risk of psychosis onset in individuals undergoing CHR assessment. Predictors included age, sex, age × sex interaction, race/ethnicity, socioeconomic status, marital status, referral source, and referral year. A total of 710 nonpsychotic individuals undergoing CHR assessment were included. The mean age was 23 years. Three hundred ninety-nine individuals were men (56%), their race/ethnicity was heterogeneous, and they were referred from a variety of sources. The cumulative 6-year pretest risk of psychosis was 14.55% (95% CI, 11.71% to 17.99%), confirming substantial pretest risk enrichment during the recruitment of individuals undergoing CHR assessment. Race/ethnicity and source of referral were associated with pretest risk enrichment. 
The predictive model based on these factors was externally validated, showing moderately good discrimination and sufficient calibration. It was used to stratify individuals undergoing CHR assessment into 4 classes of pretest risk (6-year): low, 3.39% (95% CI, 0.96% to 11.56%); moderately low, 11.58% (95% CI, 8.10% to 16.40%); moderately high, 23.69% (95% CI, 16.58% to 33.20%); and high, 53.65% (95% CI, 36.78% to 72.46%). Significant risk enrichment occurs before individuals are assessed for a suspected CHR state. Race/ethnicity and source of referral are associated with pretest risk enrichment in individuals undergoing CHR assessment. A stratification model can identify individuals at differential pretest risk of psychosis. Identification of these subgroups may inform outreach campaigns and subsequent testing and eventually optimize psychosis prediction.
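    The stratification logic above (risk score, four pretest-risk classes, class-specific cumulative risk) can be sketched in plain NumPy. Everything below is a synthetic stand-in: the cohort, the linear predictor and the Kaplan-Meier step are hypothetical illustrations, not the paper's fitted LASSO-Cox model or its coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cohort standing in for the 710 CHR referrals: a linear
# predictor from a (pre-fitted) penalized Cox model, plus censored follow-up.
n = 710
lp = rng.normal(size=n)                        # linear predictor (risk score)
time = rng.exponential(12 / np.exp(0.7 * lp))  # years to psychosis onset
cens = rng.uniform(0, 6, size=n)               # administrative censoring
event = time <= cens
obs = np.minimum(time, cens)

def km_risk(obs, event, horizon):
    """Kaplan-Meier cumulative risk at `horizon` (1 - survival)."""
    order = np.argsort(obs)
    obs, event = obs[order], event[order]
    surv, n_at_risk = 1.0, len(obs)
    for t, d in zip(obs, event):
        if t > horizon:
            break
        if d:
            surv *= 1 - 1 / n_at_risk
        n_at_risk -= 1
    return 1 - surv

# Stratify into 4 pretest-risk classes by quartile of the risk score.
q = np.quantile(lp, [0.25, 0.5, 0.75])
classes = np.digitize(lp, q)
for c, label in enumerate(["low", "moderately low", "moderately high", "high"]):
    m = classes == c
    print(f"{label}: 6-year risk = {km_risk(obs[m], event[m], 6):.1%}")
```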

  11. Modelling the seasonality of Lyme disease risk and the potential impacts of a warming climate within the heterogeneous landscapes of Scotland.

    PubMed

    Li, Sen; Gilbert, Lucy; Harrison, Paula A; Rounsevell, Mark D A

    2016-03-01

    Lyme disease is the most prevalent vector-borne disease in the temperate Northern Hemisphere. The abundance of infected nymphal ticks is commonly used as a Lyme disease risk indicator. Temperature can influence the dynamics of disease by shaping the activity and development of ticks and, hence, altering the contact pattern and pathogen transmission between ticks and their host animals. A mechanistic, agent-based model was developed to study the temperature-driven seasonality of Ixodes ricinus ticks and transmission of Borrelia burgdorferi sensu lato across mainland Scotland. Based on 12-year averaged temperature surfaces, our model predicted that Lyme disease risk currently peaks in autumn, approximately six weeks after the temperature peak. The risk was predicted to decrease with increasing altitude. Increases in temperature were predicted to prolong the duration of the tick questing season and expand the risk area to higher altitudinal and latitudinal regions. These predicted impacts on tick population ecology may be expected to lead to greater tick-host contacts under climate warming and, hence, greater risks of pathogen transmission. The model is useful in improving understanding of the spatial determinants and system mechanisms of Lyme disease pathogen transmission and its sensitivity to temperature changes. © 2016 The Author(s).
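    The headline result, a risk peak lagging the temperature peak by about six weeks, can be illustrated with a toy calculation. This is not the paper's agent-based model: the temperature curve, activity threshold and development lag below are all invented for illustration.

```python
import numpy as np

# Toy temperature-driven seasonality: questing activity follows
# temperature with a fixed development lag, shifting the activity peak
# after the temperature peak (all parameters illustrative).
days = np.arange(365)
temp = 8 + 7 * np.sin(2 * np.pi * (days - 110) / 365)  # annual cycle, C
lag = 42                                               # ~six weeks
activity = np.clip(np.roll(temp, lag) - 5, 0, None)    # active above 5 C

print("temperature peak: day", int(days[np.argmax(temp)]))
print("activity peak:    day", int(days[np.argmax(activity)]))
```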

  12. Modelling the seasonality of Lyme disease risk and the potential impacts of a warming climate within the heterogeneous landscapes of Scotland

    PubMed Central

    Gilbert, Lucy; Harrison, Paula A.; Rounsevell, Mark D. A.

    2016-01-01

    Lyme disease is the most prevalent vector-borne disease in the temperate Northern Hemisphere. The abundance of infected nymphal ticks is commonly used as a Lyme disease risk indicator. Temperature can influence the dynamics of disease by shaping the activity and development of ticks and, hence, altering the contact pattern and pathogen transmission between ticks and their host animals. A mechanistic, agent-based model was developed to study the temperature-driven seasonality of Ixodes ricinus ticks and transmission of Borrelia burgdorferi sensu lato across mainland Scotland. Based on 12-year averaged temperature surfaces, our model predicted that Lyme disease risk currently peaks in autumn, approximately six weeks after the temperature peak. The risk was predicted to decrease with increasing altitude. Increases in temperature were predicted to prolong the duration of the tick questing season and expand the risk area to higher altitudinal and latitudinal regions. These predicted impacts on tick population ecology may be expected to lead to greater tick–host contacts under climate warming and, hence, greater risks of pathogen transmission. The model is useful in improving understanding of the spatial determinants and system mechanisms of Lyme disease pathogen transmission and its sensitivity to temperature changes. PMID:27030039

  13. Risk Modelling of Agricultural Products

    NASA Astrophysics Data System (ADS)

    Nugrahani, E. H.

    2017-03-01

    In real-world markets, agricultural commodities face fluctuating prices. The prices of agricultural products are thus relatively volatile, making agriculture a risky business for farmers. This paper presents mathematical models of such risk in the form of price volatility, based on certain assumptions. The proposed models are a time-varying volatility model, as well as time-varying volatility models with mean reversion and with a seasonal mean equation. Implementation on empirical data shows that agricultural products are indeed risky.
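    The ingredients named above (a seasonal mean, mean reversion toward it, and a time-varying volatility estimate) can be sketched in NumPy. The simulated price series, the EWMA volatility filter and every parameter below are illustrative stand-ins, not the paper's models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a hypothetical agricultural price series: seasonal mean,
# mean reversion toward it, and daily shocks (parameters illustrative).
T = 5 * 365
t = np.arange(T)
seasonal_mean = 100 + 10 * np.sin(2 * np.pi * t / 365)  # annual cycle
kappa = 0.05                                            # mean-reversion speed
price = np.empty(T)
price[0] = 100.0
for i in range(1, T):
    price[i] = price[i - 1] + kappa * (seasonal_mean[i] - price[i - 1]) \
               + rng.normal(scale=1.5)

# Time-varying volatility via an EWMA of squared log returns
# (a simple stand-in for GARCH-type time-varying volatility models).
returns = np.diff(np.log(price))
lam = 0.94
var = np.empty_like(returns)
var[0] = returns.var()
for i in range(1, len(returns)):
    var[i] = lam * var[i - 1] + (1 - lam) * returns[i - 1] ** 2
vol = np.sqrt(var)
print(f"annualised volatility range: "
      f"{vol.min() * np.sqrt(365):.2%} to {vol.max() * np.sqrt(365):.2%}")
```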

  14. A Quantitative Risk Assessment Model Involving Frequency and Threat Degree under Line-of-Business Services for Infrastructure of Emerging Sensor Networks.

    PubMed

    Jing, Xu; Hu, Hanwen; Yang, Huijun; Au, Man Ho; Li, Shuqin; Xiong, Naixue; Imran, Muhammad; Vasilakos, Athanasios V

    2017-03-21

    The prospect of Line-of-Business Services (LoBSs) for infrastructure of Emerging Sensor Networks (ESNs) is exciting. Access control remains a top challenge in this scenario as the service provider's server contains a lot of valuable resources. LoBSs' users are very diverse as they may come from a wide range of locations with vastly different characteristics. The cost of joining could be low and, in many cases, intruders are eligible users conducting malicious actions. As a result, user access should be adjusted dynamically. Assessing LoBSs' risk dynamically based on both the frequency and threat degree of malicious operations is therefore necessary. In this paper, we propose a Quantitative Risk Assessment Model (QRAM) involving frequency and threat degree based on value at risk. To quantify the threat degree as an elementary intrusion effort, we amend the influence coefficient of risk indexes in the network security situation assessment model. To quantify threat frequency as intrusion trace effort, we make use of multiple behavior information fusion. Under the influence of intrusion trace, we adapt the historical simulation method of value at risk to dynamically assess LoBSs' risk. Simulation based on existing data is used to select appropriate parameters for QRAM. Our simulation results show that the duration influence on elementary intrusion effort is reasonable when the normalized parameter is 1000. Likewise, the time window of intrusion trace and the weight between objective risk and subjective risk can be set to 10 s and 0.5, respectively. While our focus is to develop QRAM for dynamically assessing the risk of LoBSs for infrastructure of ESNs involving frequency and threat degree, we believe it is also appropriate for other scenarios in cloud computing.
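    The "historical simulation method of value at risk" mentioned above is, at its core, an empirical quantile of an observed loss series. A minimal sketch follows; the gamma-distributed risk scores are an invented stand-in for QRAM's intrusion-trace series.

```python
import numpy as np

def historical_var(losses, alpha=0.95):
    """Value at risk by historical simulation: the empirical
    alpha-quantile of observed losses, with no distributional assumption."""
    return np.quantile(losses, alpha)

rng = np.random.default_rng(2)
# Hypothetical per-interval risk scores combining threat frequency and
# threat degree (illustrative only, not QRAM's actual quantities).
losses = rng.gamma(shape=2.0, scale=1.0, size=1000)
print(f"95% VaR = {historical_var(losses):.2f}")
```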

  15. A Quantitative Risk Assessment Model Involving Frequency and Threat Degree under Line-of-Business Services for Infrastructure of Emerging Sensor Networks

    PubMed Central

    Jing, Xu; Hu, Hanwen; Yang, Huijun; Au, Man Ho; Li, Shuqin; Xiong, Naixue; Imran, Muhammad; Vasilakos, Athanasios V.

    2017-01-01

    The prospect of Line-of-Business Services (LoBSs) for infrastructure of Emerging Sensor Networks (ESNs) is exciting. Access control remains a top challenge in this scenario as the service provider’s server contains a lot of valuable resources. LoBSs’ users are very diverse as they may come from a wide range of locations with vastly different characteristics. The cost of joining could be low and, in many cases, intruders are eligible users conducting malicious actions. As a result, user access should be adjusted dynamically. Assessing LoBSs’ risk dynamically based on both the frequency and threat degree of malicious operations is therefore necessary. In this paper, we propose a Quantitative Risk Assessment Model (QRAM) involving frequency and threat degree based on value at risk. To quantify the threat degree as an elementary intrusion effort, we amend the influence coefficient of risk indexes in the network security situation assessment model. To quantify threat frequency as intrusion trace effort, we make use of multiple behavior information fusion. Under the influence of intrusion trace, we adapt the historical simulation method of value at risk to dynamically assess LoBSs’ risk. Simulation based on existing data is used to select appropriate parameters for QRAM. Our simulation results show that the duration influence on elementary intrusion effort is reasonable when the normalized parameter is 1000. Likewise, the time window of intrusion trace and the weight between objective risk and subjective risk can be set to 10 s and 0.5, respectively. While our focus is to develop QRAM for dynamically assessing the risk of LoBSs for infrastructure of ESNs involving frequency and threat degree, we believe it is also appropriate for other scenarios in cloud computing. PMID:28335569

  16. Generalized random sign and alert delay models for imperfect maintenance.

    PubMed

    Dijoux, Yann; Gaudoin, Olivier

    2014-04-01

    This paper considers the modelling of the process of Corrective and condition-based Preventive Maintenance, for complex repairable systems. In order to take into account the dependency between both types of maintenance and the possibility of imperfect maintenance, Generalized Competing Risks models have been introduced in "Doyen and Gaudoin (J Appl Probab 43:825-839, 2006)". In this paper, we study two classes of these models, the Generalized Random Sign and Generalized Alert Delay models. A Generalized Competing Risks model can be built as a generalization of a particular Usual Competing Risks model, either by using a virtual age framework or not. The models properties are studied and their parameterizations are discussed. Finally, simulation results and an application to real data are presented.

  17. Beyond stereotypes of adolescent risk taking: Placing the adolescent brain in developmental context.

    PubMed

    Romer, Daniel; Reyna, Valerie F; Satterthwaite, Theodore D

    2017-10-01

    Recent neuroscience models of adolescent brain development attribute the morbidity and mortality of this period to structural and functional imbalances between more fully developed limbic regions that subserve reward and emotion as opposed to those that enable cognitive control. We challenge this interpretation of adolescent development by distinguishing risk taking that peaks during adolescence (sensation seeking and impulsive action) from risk taking that declines monotonically from childhood to adulthood (impulsive choice and other decisions under known risk). Sensation seeking is primarily motivated by exploration of the environment under ambiguous risk contexts, while impulsive action, which is likely to be maladaptive, is more characteristic of a subset of youth with weak control over limbic motivation. Risk taking that declines monotonically from childhood to adulthood occurs primarily under conditions of known risks and reflects increases in executive function as well as aversion to risk based on increases in gist-based reasoning. We propose an alternative Life-span Wisdom Model that highlights the importance of experience gained through exploration during adolescence. We propose, therefore, that brain models that recognize the adaptive roles that cognition and experience play during adolescence provide a more complete and helpful picture of this period of development. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  18. Are exposure predictions, used for the prioritization of pharmaceuticals in the environment, fit for purpose?

    USGS Publications Warehouse

    Burns, Emily E.; Thomas-Oates, Jane; Kolpin, Dana W.; Furlong, Edward T.; Boxall, Alistair B.A.

    2017-01-01

    Prioritization methodologies are often used for identifying those pharmaceuticals that pose the greatest risk to the natural environment and to focus laboratory testing or environmental monitoring toward pharmaceuticals of greatest concern. Risk-based prioritization approaches, employing models to derive exposure concentrations, are commonly used, but the reliability of these models is unclear. The present study evaluated the accuracy of exposure models commonly used for pharmaceutical prioritization. Targeted monitoring was conducted for 95 pharmaceuticals in the Rivers Foss and Ouse in the City of York (UK). Predicted environmental concentration (PEC) ranges were estimated based on localized prescription, hydrological data, reported metabolism, and wastewater treatment plant (WWTP) removal rates, and were compared with measured environmental concentrations (MECs). For the River Foss, PECs, obtained using highest metabolism and lowest WWTP removal, were similar to MECs. In contrast, this trend was not observed for the River Ouse, possibly because of pharmaceutical inputs unaccounted for by our modeling. Pharmaceuticals were ranked by risk based on either MECs or PECs. With 2 exceptions (dextromethorphan and diphenhydramine), risk ranking based on both MECs and PECs produced similar results in the River Foss. Overall, these findings indicate that PECs may well be appropriate for prioritization of pharmaceuticals in the environment when robust and local data on the system of interest are available and reflective of most source inputs. 
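    The PEC calculation described above is a mass balance: prescribed mass, corrected for human metabolism and WWTP removal, diluted into river flow. A minimal sketch follows; the function name and every input value are hypothetical illustrations, not the study's local prescription or hydrological data.

```python
def pec_ug_per_l(annual_rx_kg, f_excreted, f_wwtp_removed,
                 river_flow_m3_per_day):
    """Predicted environmental concentration (ug/L) in a receiving river,
    following the mass-balance logic described in the abstract.
    All inputs are illustrative; real prioritizations use local data."""
    daily_mass_ug = annual_rx_kg * 1e9 / 365           # kg/yr -> ug/day
    load = daily_mass_ug * f_excreted * (1 - f_wwtp_removed)
    river_flow_l = river_flow_m3_per_day * 1000        # m3/day -> L/day
    return load / river_flow_l

# Hypothetical pharmaceutical: 500 kg/yr prescribed locally, 40% excreted
# unchanged, 70% removed in the WWTP, river flow 2e5 m3/day.
print(f"PEC = {pec_ug_per_l(500, 0.4, 0.7, 2e5):.4f} ug/L")
```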

  19. Remediation Strategies for Learners at Risk of Failure: A Course Based Retention Model

    ERIC Educational Resources Information Center

    Gajewski, Agnes; Mather, Meera

    2015-01-01

    This paper presents an overview and discussion of a course-based remediation model, developed from the literature to enhance student learning and increase retention. The model focuses on course structure and course delivery in a compressed semester format. A comparative analysis was applied to a pilot study of students enrolled in a course…

  20. State of the Science: Biologically Based Modeling in Risk Assessment [Editorial]

    EPA Science Inventory

    The health risk assessment from exposure to a particular agent is preferred when the assessment is based on a relevant measure of internal dose (e.g., maximal concentration of an active metabolite in target tissue) rather than simply the administered dose or exposure concentratio...

  1. A Risk-Based Approach for Aerothermal/TPS Analysis and Testing

    NASA Technical Reports Server (NTRS)

    Wright, Michael J.; Grinstead, Jay H.; Bose, Deepak

    2007-01-01

    The current status of aerothermal and thermal protection system modeling for civilian entry missions is reviewed. For most such missions, the accuracy of our simulations is limited not by the tools and processes currently employed, but rather by reducible deficiencies in the underlying physical models. Improving the accuracy of and reducing the uncertainties in these models will enable a greater understanding of the system level impacts of a particular thermal protection system and of the system operation and risk over the operational life of the system. A strategic plan will be laid out by which key modeling deficiencies can be identified via mission-specific gap analysis. Once these gaps have been identified, the driving component uncertainties are determined via sensitivity analyses. A Monte-Carlo based methodology is presented for physics-based probabilistic uncertainty analysis of aerothermodynamics and thermal protection system material response modeling. These data are then used to advocate for and plan focused testing aimed at reducing key uncertainties. The results of these tests are used to validate or modify existing physical models. Concurrently, a testing methodology is outlined for thermal protection materials. The proposed approach is based on using the results of uncertainty/sensitivity analyses discussed above to tailor ground testing so as to best identify and quantify system performance and risk drivers. A key component of this testing is understanding the relationship between the test and flight environments. No existing ground test facility can simultaneously replicate all aspects of the flight environment, and therefore good models for traceability to flight are critical to ensure a low risk, high reliability thermal protection system design. Finally, the role of flight testing in the overall thermal protection system development strategy is discussed.
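    The "Monte-Carlo based methodology for physics-based probabilistic uncertainty analysis" can be illustrated with a toy propagation: sample uncertain inputs, push them through a simple stagnation-point heating correlation (a Sutton-Graves-type form, used here purely as a stand-in for real aerothermal models), then rank drivers by correlation with the output. All distributions below are invented, not mission data.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 10000

# Toy stagnation-point heat-flux correlation, q = k * sqrt(rho/Rn) * V^3,
# with uncertain inputs (illustrative ranges only).
k   = rng.normal(1.74e-4, 0.1e-4, N)        # correlation constant
rho = rng.lognormal(np.log(3e-4), 0.2, N)   # freestream density, kg/m3
V   = rng.normal(7500, 100, N)              # velocity, m/s
Rn  = rng.normal(1.0, 0.05, N)              # nose radius, m

q = k * np.sqrt(rho / Rn) * V**3 / 1e4      # W/m2 -> W/cm2

print(f"median q = {np.median(q):.1f} W/cm2, "
      f"95th pct = {np.percentile(q, 95):.1f} W/cm2")

# Sensitivity analysis: rank drivers of uncertainty by correlation.
for name, x in [("k", k), ("rho", rho), ("V", V), ("Rn", Rn)]:
    print(f"corr(q, {name}) = {np.corrcoef(q, x)[0, 1]:+.2f}")
```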

  2. Information risk and security modeling

    NASA Astrophysics Data System (ADS)

    Zivic, Predrag

    2005-03-01

    This research paper presentation will feature current frameworks for addressing risk and security modeling and metrics. The paper will analyze technical-level risk and security metrics from Common Criteria/ISO15408, Centre for Internet Security guidelines and NSA configuration guidelines, and the metrics used at this level. The view of IT operational standards on security metrics, such as GMITS/ISO13335 and ITIL/ITMS, and architectural guidelines such as ISO7498-2 will be explained. Business-process-level standards such as ISO17799, COSO and CobiT will be presented with their control approach to security metrics. At the top level, maturity standards such as SSE-CMM/ISO21827, NSA Infosec Assessment and CobiT will be explored and reviewed. For each defined level of security metrics, the presentation will explore the appropriate usage of these standards. The paper will discuss standards-based approaches to conducting risk and security metrics. The research findings will demonstrate the need for a common baseline for both risk and security metrics. This paper will show the relation between the attribute-based common baseline and corporate assets and controls for risk and security metrics. It will be shown that such an approach spans all the mentioned standards. The proposed approach, a 3D visual presentation and development of the Information Security Model, will be analyzed and postulated. The presentation will clearly demonstrate the benefits of the proposed attribute-based approach and of a defined risk and security space for modeling and measuring.

  3. A Monte Carlo approach to the inverse problem of diffuse pollution risk in agricultural catchments

    NASA Astrophysics Data System (ADS)

    Milledge, D.; Lane, S. N.; Heathwaite, A. L.; Reaney, S.

    2012-04-01

    The hydrological and biogeochemical processes that operate in catchments influence the ecological quality of freshwater systems through delivery of fine sediment, nutrients and organic matter. As an alternative to the often complex reductionist models, we outline a data-driven approach based on 'inverse modelling'. We invert SCIMAP, a parsimonious risk-based model with an explicit treatment of hydrological connectivity, and use a Bayesian approach to determine the risk that must be assigned to different land uses in a catchment in order to explain the spatial patterns of measured in-stream nutrient concentrations. First, we apply the model to a set of eleven UK catchments to show that: 1) some land uses generate a consistently high or low risk of diffuse nitrate (N) and phosphate (P) pollution; 2) the risks associated with different land uses vary both between catchments and between P and N delivery; and 3) the dominant sources of P and N risk in a catchment are often a function of the spatial configuration of land uses. These results suggest that, on a case-by-case basis, inverse modelling may be used to help prioritise the focus of interventions to reduce diffuse pollution risk for freshwater ecosystems. However, a key uncertainty in this approach is the extent to which it can recover the 'true' risks associated with a land cover, given error in the input parameters and equifinality in model outcomes. We test this using a set of synthetic scenarios in which the true risks can be pre-assigned and then compared with those recovered from the inverse model. 
We use these scenarios to identify the number of simulations and observations required to optimize recovery of the true weights, then explore the conditions under which the inverse model becomes equifinal (hampering recovery of the true weights). We find that this is strongly dependent on the covariance in land covers between subcatchments, introducing the possibility that in-stream sampling could be designed or subsampled to maximize identifiability of the risks associated with a given land cover.
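    The synthetic-scenario test described above can be sketched as a Monte Carlo inversion: pre-assign "true" land-cover risk weights, generate noisy in-stream observations, then recover the weights by sampling candidates and keeping the best fits. The catchment layout, noise level and acceptance fraction below are all invented; this is not SCIMAP itself.

```python
import numpy as np

rng = np.random.default_rng(4)

# 11 subcatchments x 3 land covers: areas (arbitrary units). The
# covariance between covers across subcatchments controls how
# identifiable the weights are, as discussed in the abstract.
A = rng.uniform(0, 1, size=(11, 3))
w_true = np.array([0.7, 0.2, 0.1])             # pre-assigned "true" risks
y_obs = A @ w_true + rng.normal(0, 0.01, 11)   # synthetic in-stream obs

# Monte Carlo inversion: sample candidate weight vectors on the simplex,
# score each against the observations, keep the best-fitting fraction.
n_sim = 50000
W = rng.dirichlet(np.ones(3), size=n_sim)
rmse = np.sqrt(((W @ A.T - y_obs) ** 2).mean(axis=1))
best = W[rmse <= np.quantile(rmse, 0.001)]
w_hat = best.mean(axis=0)
print("true:", w_true, "recovered:", np.round(w_hat, 2))
```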

  4. Limits of Risk Predictability in a Cascading Alternating Renewal Process Model.

    PubMed

    Lin, Xin; Moussawi, Alaa; Korniss, Gyorgy; Bakdash, Jonathan Z; Szymanski, Boleslaw K

    2017-07-27

    Most risk analysis models systematically underestimate the probability and impact of catastrophic events (e.g., economic crises, natural disasters, and terrorism) by not taking into account interconnectivity and interdependence of risks. To address this weakness, we propose the Cascading Alternating Renewal Process (CARP) to forecast interconnected global risks. However, assessments of the model's prediction precision are limited by lack of sufficient ground truth data. Here, we establish prediction precision as a function of input data size by using alternative long ground truth data generated by simulations of the CARP model with known parameters. We illustrate the approach on a model of fires in artificial cities assembled from basic city blocks with diverse housing. The results confirm that parameter recovery variance exhibits power law decay as a function of the length of available ground truth data. Using CARP, we also demonstrate estimation using a disparate dataset that also has dependencies: real-world prediction precision for the global risk model based on the World Economic Forum Global Risk Report. We conclude that the CARP model is an efficient method for predicting catastrophic cascading events with potential applications to emerging local and global interconnected risks.
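    The core idea, an alternating renewal process whose parameter-recovery variance shrinks as the ground-truth trace grows, can be sketched as follows. The exponential durations and all parameters are illustrative; CARP itself adds the cascading interdependence between risks.

```python
import numpy as np

rng = np.random.default_rng(5)
MEAN_UP, MEAN_DOWN = 10.0, 2.0   # illustrative mean durations

def unavailability(n_cycles):
    """Simulate one alternating renewal trace (exponential 'normal'
    periods alternating with exponential 'failure' periods) and
    return the estimated fraction of time spent failed."""
    up = rng.exponential(MEAN_UP, n_cycles)
    down = rng.exponential(MEAN_DOWN, n_cycles)
    return down.sum() / (up.sum() + down.sum())

# Recovery precision improves with longer ground-truth traces,
# mirroring the variance decay reported in the abstract.
theory = MEAN_DOWN / (MEAN_UP + MEAN_DOWN)
for n in [10, 100, 1000]:
    est = np.array([unavailability(n) for _ in range(500)])
    print(f"n={n:5d}: mean={est.mean():.3f} (theory {theory:.3f}), "
          f"var={est.var():.5f}")
```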

  5. Spatio-temporal population estimates for risk management

    NASA Astrophysics Data System (ADS)

    Cockings, Samantha; Martin, David; Smith, Alan; Martin, Rebecca

    2013-04-01

    Accurate estimation of population at risk from hazards and effective emergency management of events require not just appropriate spatio-temporal modelling of hazards but also of population. While much recent effort has been focused on improving the modelling and predictions of hazards (both natural and anthropogenic), there has been little parallel advance in the measurement or modelling of population statistics. Different hazard types occur over diverse temporal cycles, are of varying duration and differ significantly in their spatial extent. Even events of the same hazard type, such as flood events, vary markedly in their spatial and temporal characteristics. Conceptually and pragmatically then, population estimates should also be available for similarly varying spatio-temporal scales. Routine population statistics derived from traditional censuses or surveys are usually static representations in both space and time, recording people at their place of usual residence on census/survey night and presenting data for administratively defined areas. Such representations effectively fix the scale of population estimates in both space and time, which is unhelpful for meaningful risk management. Over recent years, the Pop24/7 programme of research, based at the University of Southampton (UK), has developed a framework for spatio-temporal modelling of population, based on gridded population surfaces. Based on a data model which is fully flexible in terms of space and time, the framework allows population estimates to be produced for any time slice relevant to the data contained in the model. It is based around a set of origin and destination centroids, which have capacities, spatial extents and catchment areas, all of which can vary temporally, such as by time of day, day of week, season. 
A background layer, containing information on features such as transport networks and landuse, provides information on the likelihood of people being in certain places at specific times. Unusual patterns associated with special events can also be modelled and the framework is fully volume preserving. Outputs from the model are gridded population surfaces for the specified time slice, either for total population or by sub-groups (e.g. age). Software to implement the models (SurfaceBuilder247) has been developed and pre-processed layers for typical time slices for England and Wales in 2001 and 2006 are available for UK academic purposes. The outputs and modelling framework from the Pop24/7 programme provide significant opportunities for risk management applications. For estimates of mid- to long-term cumulative population exposure to hazards, such as in flood risk mapping, populations can be produced for numerous time slices and integrated with flood models. For applications in emergency response/ management, time-specific population models can be used as seeds for agent-based models or other response/behaviour models. Estimates for sub-groups of the population also permit exploration of vulnerability through space and time. This paper outlines the requirements for effective spatio-temporal population models for risk management. It then describes the Pop24/7 framework and illustrates its potential for risk management through presentation of examples from natural and anthropogenic hazard applications. The paper concludes by highlighting key challenges for future research in this area.
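    A minimal, volume-preserving time-slice population surface in the spirit of this framework might look like the following. The grid, the centroid capacities and the hour-of-day attendance profile are all invented for illustration; this is not SurfaceBuilder247.

```python
import numpy as np

# Residents move between origin (home) cells and a destination (work)
# cell on a 2x2 grid according to an hour-of-day profile, preserving
# total population volume (all numbers hypothetical).
home = np.array([[500.0, 300.0], [200.0, 0.0]])
work_capacity = np.array([[0.0, 0.0], [0.0, 600.0]])

def population_surface(hour):
    """Gridded population for one time slice."""
    at_work = 0.8 if 9 <= hour < 17 else 0.0       # assumed profile
    movers = min(home.sum() * at_work * 0.5, work_capacity.sum())
    # Drain origins proportionally, fill destinations proportionally:
    # total volume is preserved by construction.
    return home * (1 - movers / home.sum()) \
           + work_capacity * (movers / work_capacity.sum())

for hour in (3, 12):
    s = population_surface(hour)
    print(f"{hour:02d}:00 total={s.sum():.0f}\n{s}")
```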

  6. The risk assessment of sudden water pollution for river network system under multi-source random emission

    NASA Astrophysics Data System (ADS)

    Li, D.

    2016-12-01

    Sudden water pollution accidents are unavoidable risk events that we must learn to co-exist with. In China's Taihu River Basin, river flow conditions are complicated by frequent artificial interference. Sudden water pollution accidents occur mainly as large abnormal discharges of wastewater and are characterized by sudden occurrence, uncontrollable scope, uncertain targets and the concentrated distribution of many risk sources. Effective prevention of the pollution accidents that may occur is of great significance for water quality safety management. Bayesian networks can represent the relationship between pollution sources and river water quality intuitively. Using a time-sequential Monte Carlo algorithm, the pollution-source state-switching model, a water quality model for the river network and Bayesian reasoning are integrated, and a sudden water pollution risk assessment model for the river network is developed to quantify the water quality risk under the collective influence of multiple pollution sources. Based on the isotope water transport mechanism, a dynamic tracing model of multiple pollution sources is established, which can describe the relationship between the excessive risk of the system and the multiple risk sources. Finally, the diagnostic reasoning algorithm based on the Bayesian network is coupled with the multi-source tracing model, which can identify the contribution of each risk source to the system risk under complex flow conditions. Taking the Taihu Lake water system as the research object, the model is applied to obtain reasonable results under three typical years. 
Studies have shown that the water quality risk at critical sections is influenced by the pollution risk sources, the boundary water quality, the hydrological conditions and the self-purification capacity, and that multiple pollution sources have an obvious effect on the water quality risk of the receiving water body. The water quality risk assessment approach developed in this study offers an effective tool for systematically quantifying the random uncertainty in a plain river network system, and it also provides technical support for decision-making on controlling sudden water pollution through identification of critical pollution sources.
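    The diagnostic (effect-to-cause) reasoning step can be illustrated with a minimal two-source Bayesian network, computing each source's posterior probability of discharging given an observed water-quality exceedance. All probabilities are invented; the actual model couples this reasoning with water-quality and isotope-tracing models.

```python
from itertools import product

# A minimal two-source network (illustrative numbers): each source
# discharges abnormally with some prior probability, and the chance
# that a downstream section exceeds the standard depends on which
# sources are discharging.
p_discharge = {"S1": 0.10, "S2": 0.05}
p_exceed = {(False, False): 0.01, (True, False): 0.60,
            (False, True): 0.40, (True, True): 0.90}

# Diagnostic reasoning by enumeration: P(source active | exceedance),
# i.e. each source's contribution to the system risk.
joint_exceed = 0.0
posterior = {"S1": 0.0, "S2": 0.0}
for s1, s2 in product([False, True], repeat=2):
    p = ((p_discharge["S1"] if s1 else 1 - p_discharge["S1"])
         * (p_discharge["S2"] if s2 else 1 - p_discharge["S2"])
         * p_exceed[(s1, s2)])
    joint_exceed += p
    if s1: posterior["S1"] += p
    if s2: posterior["S2"] += p

for s in posterior:
    posterior[s] /= joint_exceed
    print(f"P({s} discharging | exceedance) = {posterior[s]:.2f}")
```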

  7. [Case study on health risk assessment based on site-specific conceptual model].

    PubMed

    Zhong, Mao-Sheng; Jiang, Lin; Yao, Jue-Jun; Xia, Tian-Xiang; Zhu, Xiao-Ying; Han, Dan; Zhang, Li-Na

    2013-02-01

    A site investigation was carried out on an area to be redeveloped as a subway station, located directly downstream, in groundwater terms, of a former chemical plant. The results indicate that the subsurface soil and groundwater in the area are both heavily polluted by 1,2-dichloroethane originating from the upstream plant, with the highest concentrations reaching 104.08 mg·kg⁻¹ in a soil sample 8.6 m below ground and 18500 µg·L⁻¹ in groundwater. Further, a site-specific contamination conceptual model, giving consideration to the specific structural configuration of the station, was developed, and the corresponding risk calculation equation was derived. The carcinogenic risks calculated with models developed on the generic site conceptual model and on the site-specific conceptual model derived herein were compared. Both models indicate that the carcinogenic risk is significantly higher than the acceptable level of 1 × 10⁻⁶. The comparison reveals that the risks calculated with the former models for soil and groundwater are higher than those calculated with the latter models by factors of 2 and 1.5, respectively. The findings in this paper indicate that a generic risk assessment model may misestimate the risk if specific site conditions and structure configuration are not considered.
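    The generic form of the carcinogenic-risk calculation being compared here is chronic daily intake multiplied by a slope factor. The sketch below uses a standard drinking-water ingestion pathway with default exposure parameters; the paper's site-specific model modifies these terms for the station's structure, and the actual exposure route there would differ. The slope factor shown is the oral value commonly cited for 1,2-dichloroethane; treat all other defaults as illustrative.

```python
def carcinogenic_risk(c_water_ug_l, intake_l_day=2.0, ef_days_yr=350,
                      ed_years=30, bw_kg=70.0, at_days=70 * 365,
                      slope_factor=0.091):
    """Generic carcinogenic risk: chronic daily intake (mg/kg-day)
    times a slope factor ((mg/kg-day)^-1). EPA RAGS-style screening
    form; defaults are illustrative exposure assumptions."""
    c_mg_l = c_water_ug_l / 1000
    cdi = c_mg_l * intake_l_day * ef_days_yr * ed_years / (bw_kg * at_days)
    return cdi * slope_factor

# Highest reported groundwater concentration from the abstract.
risk = carcinogenic_risk(18500)
print(f"risk = {risk:.1e} (acceptable level: 1e-6)")
```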

  8. Pathogen Treatment Guidance and Monitoring Approaches for ...

    EPA Pesticide Factsheets

    On-site non-potable water reuse is increasingly used to augment water supplies, but traditional fecal indicator approaches for defining and monitoring exposure risks are limited when applied to these decentralized options. This session emphasizes risk-based modeling to define pathogen log-reduction requirements coupled with alternative targets for monitoring enabled by genomic sequencing (i.e., the microbiome of reuse systems). 1. Discuss risk-based modeling to define pathogen log-reduction requirements 2. Review alternative targets for monitoring 3. Gain an understanding of how new tools can help improve successful development of sustainable on-site non-potable water reuse Presented at the Water Wastewater Equipment Treatment & Transport Show.
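The risk-based derivation of a pathogen log-reduction target can be sketched as follows; the exponential dose-response model and every parameter value are illustrative assumptions, not values from the session:

```python
import math

def log_reduction_target(source_conc_per_L, volume_L_per_event, events_per_year,
                         annual_risk_target=1e-4, k=0.0053):
    """Log10 pathogen reduction needed so that the annual infection risk
    from non-potable reuse stays below a target, assuming an exponential
    dose-response model P(inf) = 1 - exp(-k * dose). The k value and the
    1e-4 annual benchmark are illustrative placeholders."""
    per_event_risk = 1 - (1 - annual_risk_target) ** (1 / events_per_year)
    tolerable_dose = -math.log(1 - per_event_risk) / k
    untreated_dose = source_conc_per_L * volume_L_per_event
    return math.log10(untreated_dose / tolerable_dose)
```

For example, a source at 1e6 organisms/L with 0.01 L ingested per event, 365 events per year, yields a target on the order of 8 log10, which is the style of requirement the session describes coupling with alternative monitoring targets.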

  9. Model-Based Policymaking: A Framework to Promote Ethical “Good Practice” in Mathematical Modeling for Public Health Policymaking

    PubMed Central

    Boden, Lisa A.; McKendrick, Iain J.

    2017-01-01

    Mathematical models are increasingly relied upon as decision support tools, which estimate risks and generate recommendations to underpin public health policies. However, there are no formal agreements about what constitutes professional competencies or duties in mathematical modeling for public health. In this article, we propose a framework to evaluate whether mathematical models that assess human and animal disease risks and control strategies meet standards consistent with ethical “good practice” and are thus “fit for purpose” as evidence in support of policy. This framework is derived from principles of biomedical ethics: independence, transparency (autonomy), beneficence/non-maleficence, and justice. We identify ethical risks associated with model development and implementation and consider the extent to which scientists are accountable for the translation and communication of model results to policymakers so that the strengths and weaknesses of the scientific evidence base and any socioeconomic and ethical impacts of biased or uncertain predictions are clearly understood. We propose principles to operationalize a framework for ethically sound model development and risk communication between scientists and policymakers. These include the creation of science–policy partnerships to mutually define policy questions and communicate results; development of harmonized international standards for model development; and data stewardship and improvement of the traceability and transparency of models via a searchable archive of policy-relevant models. Finally, we suggest that bespoke ethical advisory groups, with relevant expertise and access to these resources, would be beneficial as a bridge between science and policy, advising modelers of potential ethical risks and providing oversight of the translation of modeling advice into policy. PMID:28424768

  10. A Model of Compound Heterozygous, Loss-of-Function Alleles Is Broadly Consistent with Observations from Complex-Disease GWAS Datasets

    PubMed Central

    Sanjak, Jaleal S.; Long, Anthony D.; Thornton, Kevin R.

    2017-01-01

    The genetic component of complex disease risk in humans remains largely unexplained. A corollary is that the allelic spectrum of genetic variants contributing to complex disease risk is unknown. Theoretical models that relate population genetic processes to the maintenance of genetic variation for quantitative traits may suggest profitable avenues for future experimental design. Here we use forward simulation to model a genomic region evolving under a balance between recurrent deleterious mutation and Gaussian stabilizing selection. We consider multiple genetic and demographic models, and several different methods for identifying genomic regions harboring variants associated with complex disease risk. We demonstrate that the model of gene action, relating genotype to phenotype, has a qualitative effect on several relevant aspects of the population genetic architecture of a complex trait. In particular, the genetic model impacts genetic variance component partitioning across the allele frequency spectrum and the power of statistical tests. Models with partial recessivity closely match the minor allele frequency distribution of significant hits from empirical genome-wide association studies without requiring homozygous effect sizes to be small. We highlight a particular gene-based model of incomplete recessivity that is appealing from first principles. Under that model, deleterious mutations in a genomic region partially fail to complement one another. This model of gene-based recessivity predicts the empirically observed inconsistency between twin- and SNP-based estimates of dominance heritability. Furthermore, this model predicts considerable levels of unexplained variance associated with intralocus epistasis. Our results suggest a need for improved statistical tools for region-based genetic association and heritability estimation. PMID:28103232
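One crude way to see the contrast between additive gene action and gene-based (non-complementing) recessivity is the toy genotype-to-phenotype map below. The exact encoding is a simplification invented here for illustration, not the simulation model used in the paper:

```python
def genetic_value(hap1_effects, hap2_effects, h=0.25):
    """Toy contrast between additive gene action and gene-based partial
    recessivity. Each haplotype carries a list of deleterious mutational
    effects. Under the gene-based encoding sketched here (an assumption,
    not the paper's model), the full burden is expressed only when BOTH
    haplotypes carry mutations (they fail to complement); a burden on a
    single haplotype is discounted by the dominance factor h."""
    a = sum(hap1_effects)   # summed effects on haplotype 1
    b = sum(hap2_effects)   # summed effects on haplotype 2
    additive = a + b
    if a > 0 and b > 0:
        gene_based = a + b          # both copies hit: no complementation
    else:
        gene_based = h * (a + b)    # one intact copy partially rescues
    return additive, gene_based
```

Under this map, two heterozygous mutations on the same haplotype behave very differently from the same two mutations on opposite haplotypes, which is the qualitative signature the paper exploits.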

  11. Use of Physiologically Based Pharmacokinetic (PBPK) Models ...

    EPA Pesticide Factsheets

    EPA announced the availability of the final report, Use of Physiologically Based Pharmacokinetic (PBPK) Models to Quantify the Impact of Human Age and Interindividual Differences in Physiology and Biochemistry Pertinent to Risk Final Report for Cooperative Agreement. This report describes and demonstrates techniques necessary to extrapolate and incorporate in vitro derived metabolic rate constants in PBPK models. It also includes two case study examples designed to demonstrate the applicability of such data for health risk assessment and addresses the quantification, extrapolation and interpretation of advanced biochemical information on human interindividual variability of chemical metabolism for risk assessment application. It comprises five chapters; topics and results covered in the first four chapters have been published in the peer reviewed scientific literature. Topics covered include: Data Quality ObjectivesExperimental FrameworkRequired DataTwo example case studies that develop and incorporate in vitro metabolic rate constants in PBPK models designed to quantify human interindividual variability to better direct the choice of uncertainty factors for health risk assessment. This report is intended to serve as a reference document for risk assors to use when quantifying, extrapolating, and interpretating advanced biochemical information about human interindividual variability of chemical metabolism.

  12. Home-Based Risk of Falling Assessment Test Using a Closed-Loop Balance Model.

    PubMed

    Ayena, Johannes C; Zaibi, Helmi; Otis, Martin J-D; Menelas, Bob-Antoine J

    2016-12-01

    The aim of this study is to improve and facilitate the methods used to assess the risk of falling at home among older people, through the computation of a risk of falling in real time during daily activities. To support real-time computation of the risk of falling, a closed-loop balance model is proposed and compared with the One-Leg Standing Test (OLST). This balance model allows studying the postural response of a person subjected to an unpredictable perturbation. Twenty-nine volunteers participated in evaluating the effectiveness of the proposed system, including seventeen older participants: ten healthy elderly (68.4 ± 5.5 years) and seven Parkinson's disease (PD) subjects (66.28 ± 8.9 years), as well as twelve healthy young adults (28.27 ± 3.74 years). Our work suggests that there is a relationship between the OLST score and the risk of falling based on center-of-pressure measurement with four low-cost force sensors located inside an instrumented insole, which could be predicted using our suggested closed-loop balance model. For long-term monitoring at home, this system could be included in a medical electronic record and could be useful as a diagnostic aid tool.
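The center-of-pressure measurement from a small number of in-sole force sensors reduces to a force-weighted average of the sensor coordinates. A minimal sketch (the four-sensor layout below is an assumed example, not the paper's actual insole geometry):

```python
def center_of_pressure(forces, positions):
    """Center of pressure (COP) from discrete force sensors:
    COP = sum(F_i * p_i) / sum(F_i), with positions in metres."""
    total = sum(forces)
    x = sum(f * p[0] for f, p in zip(forces, positions)) / total
    y = sum(f * p[1] for f, p in zip(forces, positions)) / total
    return x, y

# Assumed layout: two heel sensors and two forefoot sensors (metres)
SENSORS = [(0.0, 0.0), (0.08, 0.0), (0.0, 0.25), (0.08, 0.25)]
```

Tracking how this COP point migrates under perturbation is the measurable input that a closed-loop balance model of the kind described can compare against its predicted postural response.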

  13. Risk mapping of dengue in Selangor and Kuala Lumpur, Malaysia.

    PubMed

    Hassan, Hafiz; Shohaimi, Shamarina; Hashim, Nor R

    2012-11-01

    Dengue fever is a recurring public health problem afflicting thousands of Malaysians annually. In this paper, the risk map for dengue fever in the peninsular Malaysian states of Selangor and Kuala Lumpur was modelled based on co-kriging and geographical information systems. Using population density and rainfall as the model's only input factors, the areas at highest risk of dengue infection were identified as Gombak and Petaling, two districts located on opposite sides of Kuala Lumpur city, which was also included in the risk assessment. Comparison of the modelled risk map with the dengue case dataset of 2010, obtained from the Ministry of Health of Malaysia, confirmed that the highest number of cases had been found in an area centred on Kuala Lumpur, as predicted by our risk profiling.

  14. Real time forest fire warning and forest fire risk zoning: a Vietnamese case study

    NASA Astrophysics Data System (ADS)

    Chu, T.; Pham, D.; Phung, T.; Ha, A.; Paschke, M.

    2016-12-01

    Forest fire occurs frequently in Vietnam and has been considered one of the major causes of forest loss and degradation. Several studies of forest fire risk warning have been conducted using the Modified Nesterov Index (MNI), but shortcomings and inaccurate predictions remain that urgently need to be addressed. In our study, several important topographic and social factors, such as aspect, slope, elevation, and distance to residential areas and the road system, were treated as "permanent" factors, while meteorological data were updated hourly using near-real-time (NRT) remotely sensed data (i.e. MODIS Terra/Aqua and TRMM) for the prediction and warning of fire. Owing to the limited number of weather stations in Vietnam, data from all active stations (178) were used together with the satellite data to calibrate and upscale the meteorological variables. These finer-resolution data were then used to generate the MNI. Only the significant "permanent" factors were selected as input variables, based on correlation coefficients computed from multi-variable regression between true fire burnings (collected since 1/2007) and their spatial characteristics. These coefficients were also used to suggest appropriate weights for computing the forest fire risk (FR) model. The FR model was calculated from the MNI and the selected factors using fuzzy regression models (FRMs) and GIS-based multi-criteria analysis. By this approach, the FR slightly modifies the MNI through the integrated use of various factors in our fire warning and prediction model. Multifactor-based maps of forest fire risk zones were generated by classifying FR into three potential danger levels. Fire risk maps were displayed using WebGIS technology, which makes it easy to manage data and extract reports. Reported fire burnings have subsequently been used as ground truth for validating the forest fire risk.
    Fire probability has a strong relationship with the potential danger levels (varying from 5.3% to 53.8%), indicating that the higher the potential risk, the greater the chance of fire. By adding spatial factors to continuously updated daily remote-sensing-based meteorological data, the results are valuable both for mapping forest fire risk zones in the short and long term and for real-time fire warning in Vietnam. Key words: near-real-time, forest fire warning, fuzzy regression model, remote sensing.
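The Nesterov-type ignition index that the MNI builds on accumulates a temperature-dewpoint product over rain-free days. A hedged sketch of the basic (unmodified) index, with the ~3 mm rain reset as a commonly used assumption; the modified variants weight the daily terms by a precipitation coefficient:

```python
def nesterov_index(daily_temp_c, daily_dewpoint_c, daily_rain_mm, rain_reset_mm=3.0):
    """Cumulative Nesterov ignition index: NI = sum of T * (T - Td) over
    consecutive days, reset to zero after rainfall above a threshold.
    Returns the daily index series."""
    ni = 0.0
    series = []
    for t, td, r in zip(daily_temp_c, daily_dewpoint_c, daily_rain_mm):
        if r > rain_reset_mm:
            ni = 0.0            # rain event resets fire danger accumulation
        else:
            ni += t * (t - td)  # hot, dry days drive the index up
        series.append(ni)
    return series
```

In the workflow described above, the hourly satellite-calibrated meteorological fields would supply `daily_temp_c`, `daily_dewpoint_c` and `daily_rain_mm` per grid cell, and the resulting index would then be fused with the "permanent" factors via the weighted multi-criteria analysis.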

  15. How are flood risk estimates affected by the choice of return-periods?

    NASA Astrophysics Data System (ADS)

    Ward, P. J.; de Moel, H.; Aerts, J. C. J. H.

    2011-12-01

    Flood management is increasingly adopting a risk-based approach, whereby flood risk is the product of the probability and consequences of flooding. One of the most common approaches in flood risk assessment is to estimate the damage that would occur for floods of several exceedance probabilities (or return periods), to plot these on an exceedance probability-loss curve (risk curve), and to estimate risk as the area under the curve. However, there is little insight into how the selection of the return periods (which ones, and how many) used to calculate risk actually affects the final risk estimate. To gain such insights, we developed and validated an inundation model capable of rapidly simulating inundation extent and depth, and dynamically coupled it to an existing damage model. The method was applied to a section of the River Meuse in the southeast of the Netherlands. Firstly, we estimated risk based on a risk curve using yearly return periods from 2 to 10 000 yr (€34 million p.a.). We found that the overall risk is greatly affected by the number of return periods used to construct the risk curve, with over-estimations of annual risk between 33% and 100% when only three return periods are used. In addition, binary assumptions on dike failure can have a large effect (a factor of two difference) on risk estimates. The minimum and maximum return periods considered in the curve also affect the risk estimate considerably. The results suggest that more research is needed to develop relatively simple inundation models that can be used to produce large numbers of inundation maps, complementary to more complex 2-D/3-D hydrodynamic models. They also suggest that research into flood risk could benefit from paying more attention to the damage caused by relatively high-probability floods.
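The risk-curve integration described above (risk as the area under the exceedance probability-loss curve) can be sketched with a trapezoidal rule over a handful of return periods; the damage figures in the example are invented:

```python
def expected_annual_damage(return_periods, damages):
    """Expected annual damage (EAD) as the area under the exceedance
    probability-loss curve, integrated by the trapezoidal rule.
    return_periods in years, damages in matching order."""
    probs = [1.0 / rp for rp in return_periods]   # exceedance probabilities
    pairs = sorted(zip(probs, damages))            # ascending probability
    ead = 0.0
    for (p0, d0), (p1, d1) in zip(pairs, pairs[1:]):
        ead += 0.5 * (d0 + d1) * (p1 - p0)
    return ead
```

Because the trapezoids interpolate linearly between the chosen return periods, a curve built from only three points can land well above or below one built from many, which is exactly the sensitivity the study quantifies.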

  16. Credit Risk Evaluation Using a C-Variable Least Squares Support Vector Classification Model

    NASA Astrophysics Data System (ADS)

    Yu, Lean; Wang, Shouyang; Lai, K. K.

    Credit risk evaluation is one of the most important issues in financial risk management. In this paper, a C-variable least squares support vector classification (C-VLSSVC) model is proposed for credit risk analysis. The main idea of this model is based on the prior knowledge that different classes may have different importance for modeling, and that more weight should be given to the classes with more importance. The C-VLSSVC model can be constructed by a simple modification of the regularization parameter in LSSVC, whereby more weight is given to the least squares classification errors of important classes than to those of unimportant classes, while keeping the regularized terms in their original form. For illustration purposes, a real-world credit dataset is used to test the effectiveness of the C-VLSSVC model.
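The class-weighted modification of the LSSVC regularization parameter can be sketched as below. This is a generic least-squares SVM dual system with per-class error weights (effective regularization C*w_i per sample), an interpretation of the C-variable idea rather than the authors' exact formulation:

```python
import numpy as np

def train_cvlssvc(X, y, C=10.0, class_weights=(1.0, 1.0), gamma=0.5):
    """Class-weighted least-squares SVM with an RBF kernel.
    Solves the LSSVC dual linear system
        [0   y^T          ] [b    ]   [0]
        [y   Omega + D^-1 ] [alpha] = [1]
    where Omega_ij = y_i y_j K(x_i, x_j) and D_ii = C * w_i, with w_i
    the weight of sample i's class (important classes get larger
    effective C, hence smaller tolerated errors). y must be in {-1,+1}."""
    n = len(y)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                      # RBF kernel matrix
    omega = np.outer(y, y) * K
    w = np.where(y > 0, class_weights[0], class_weights[1])
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = omega + np.diag(1.0 / (C * w))   # class-dependent regularization
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    b, alpha = sol[0], sol[1:]

    def predict(Xnew):
        sq = ((Xnew[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        Kn = np.exp(-gamma * sq)
        return np.sign(Kn @ (alpha * y) + b)

    return predict
```

Setting `class_weights=(2.0, 1.0)` would, for instance, penalize errors on the +1 class (e.g. defaulters) twice as heavily, which is the kind of asymmetry that matters in credit scoring.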

  17. Developing a Risk Model to Target High-risk Preventive Interventions for Sexual Assault Victimization among Female U.S. Army Soldiers

    PubMed Central

    Street, Amy E.; Rosellini, Anthony J.; Ursano, Robert J.; Heeringa, Steven G.; Hill, Eric D.; Monahan, John; Naifeh, James A.; Petukhova, Maria V.; Reis, Ben Y.; Sampson, Nancy A.; Bliese, Paul D.; Stein, Murray B.; Zaslavsky, Alan M.; Kessler, Ronald C.

    2016-01-01

    Sexual violence victimization is a significant problem among female U.S. military personnel. Preventive interventions for high-risk individuals might reduce prevalence, but would require accurate targeting. We attempted to develop a targeting model for female Regular U.S. Army soldiers based on theoretically guided predictors abstracted from administrative data records. As administrative reports of sexual assault victimization are known to be incomplete, parallel machine learning models were developed to predict administratively recorded (in the population) and self-reported (in a representative survey) victimization. Capture-recapture methods were used to combine predictions across models. Key predictors included low status, crime involvement, and treated mental disorders. The area under the receiver operating characteristic curve was 0.83-0.88, and 33.7-63.2% of victimizations occurred among soldiers in the highest-risk ventile (5%). This high concentration of risk suggests that the models could be useful in targeting preventive interventions, although a final determination would require careful weighing of intervention costs, effectiveness, and competing risks. PMID:28154788
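Combining two incomplete sources of victimization counts is, at its simplest, a two-list capture-recapture problem. A sketch using the Chapman-corrected Lincoln-Petersen estimator (the counts are invented, and the paper's actual combination of model-based predictions is more involved than this two-list closed-population form):

```python
def capture_recapture_total(n_admin, n_survey, n_both):
    """Chapman-corrected Lincoln-Petersen estimate of the total number
    of events, given counts from two incomplete lists (administrative
    records and survey self-reports) and their overlap:
        N_hat = (n1 + 1)(n2 + 1) / (m + 1) - 1."""
    return (n_admin + 1) * (n_survey + 1) / (n_both + 1) - 1
```

The estimator assumes the two sources are independent given the overlap, which is one reason the paper layers it on top of parallel prediction models rather than raw counts.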

  18. A proportional hazards regression model for the subdistribution with right-censored and left-truncated competing risks data

    PubMed Central

    Zhang, Xu; Zhang, Mei-Jie; Fine, Jason

    2012-01-01

    With competing risks failure time data, one often needs to assess covariate effects on the cumulative incidence probabilities. Fine and Gray proposed a proportional hazards regression model to directly model the subdistribution of a competing risk. They developed an estimating procedure for right-censored competing risks data based on inverse probability of censoring weighting. Right-censored and left-truncated competing risks data sometimes occur in biomedical research. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with right-censored and left-truncated data. We adopt a new weighting technique to estimate the parameters in this model and derive the large-sample properties of the proposed estimators. To illustrate the application of the new method, we analyze failure time data for children with acute leukemia; in this example, the failure times for children who had bone marrow transplants were left-truncated. PMID:21557288
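The quantity that the subdistribution model targets, the cumulative incidence of one cause in the presence of competing risks, can be estimated nonparametrically as a sketch. This handles right censoring only; the paper's contribution, accommodating left truncation via a new weighting technique, is not reproduced here:

```python
def cumulative_incidence(times, events, cause, t_eval):
    """Nonparametric cumulative incidence function (CIF) for one cause:
    CIF(t) = sum over event times u <= t of S(u-) * d_cause(u) / n_at_risk(u),
    where S is overall event-free survival. events: 0 = censored,
    1, 2, ... = cause codes."""
    data = sorted(zip(times, events))
    n = len(data)
    surv = 1.0    # overall event-free survival just before the current time
    cif = 0.0
    i = 0
    while i < n and data[i][0] <= t_eval:
        t = data[i][0]
        at_risk = n - i
        d_cause = sum(1 for tt, e in data[i:] if tt == t and e == cause)
        d_any = sum(1 for tt, e in data[i:] if tt == t and e != 0)
        ties = sum(1 for tt, _ in data[i:] if tt == t)
        cif += surv * d_cause / at_risk
        surv *= 1.0 - d_any / at_risk
        i += ties
    return cif
```

Note that, unlike one minus a cause-specific Kaplan-Meier, this CIF correctly treats competing events as removing subjects from ever experiencing the cause of interest; the Fine-Gray model regresses covariates on the subdistribution hazard of this curve.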

  19. Potential usefulness of a topic model-based categorization of lung cancers as quantitative CT biomarkers for predicting the recurrence risk after curative resection

    NASA Astrophysics Data System (ADS)

    Kawata, Y.; Niki, N.; Ohmatsu, H.; Satake, M.; Kusumoto, M.; Tsuchida, T.; Aokage, K.; Eguchi, K.; Kaneko, M.; Moriyama, N.

    2014-03-01

    In this work, we investigate the potential usefulness of a topic model-based categorization of lung cancers as a quantitative CT biomarker for predicting the recurrence risk after curative resection. Elucidating the subcategorization of pulmonary nodule types in CT images is an important preliminary step towards developing nodule management strategies that are specific to each patient. We categorize lung cancers by analyzing volumetric distributions of CT values within lung cancers via a topic model such as latent Dirichlet allocation. By applying our scheme to 3D CT images of non-small-cell lung cancer (maximum lesion size of 3 cm), we demonstrate the potential usefulness of the topic model-based categorization of lung cancers as a quantitative CT biomarker.

  20. Age at exposure and attained age variations of cancer risk in the Japanese A-bomb and radiotherapy cohorts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, Uwe, E-mail: uwe.schneider@uzh.ch; Walsh, Linda

    Purpose: Phenomenological risk models for radiation-induced cancer are frequently applied to estimate the risk of radiation-induced cancers at radiotherapy doses. Such models often include the effect modification, of the main risk to radiation dose response, by age at exposure and attained age. The aim of this paper is to compare the patterns of risk effect modification by age between models obtained from the Japanese atomic-bomb (A-bomb) survivor data and models for cancer risks previously reported for radiotherapy patients. Patterns of risk effect modification by age from the epidemiological studies of radiotherapy patients were also used to refine and extend the risk effect modification by age obtained from the A-bomb survivor data, so that more universal models can be presented here. Methods: Simple log-linear and power functions of age for the risk effect modification applied in models of the A-bomb survivor data are compared to risks from epidemiological studies of second cancers after radiotherapy. These functions of age were also refined and fitted to the radiotherapy risks. The resulting age models provide a refined and extended functional dependence of risk on age at exposure and attained age, especially beyond 40 and 65 yr, respectively, and provide a better representation than the currently available simple age functions. Results: It was found that the A-bomb models predict risk similarly to the outcomes for testicular cancer survivors. The survivors of Hodgkin’s disease show steeper variations of risk with both age at exposure and attained age. The extended models predict the solid cancer risk increase as a function of age at exposure beyond 40 yr, and the risk decrease as a function of attained age beyond 65 yr, better than the simple models.
    Conclusions: The standard functions for risk effect modification by age, based on the A-bomb survivor data, predict second cancer risk in radiotherapy patients reasonably well for ages at exposure prior to 40 yr and attained ages before 55 yr. However, for larger ages, the refined and extended models can be applied to predict the risk as a function of age.
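The log-linear and power functions of age mentioned in the Methods follow the familiar BEIR VII-style form for solid-cancer excess relative risk. A sketch with illustrative (not fitted) parameter values:

```python
import math

def excess_relative_risk(dose_gy, age_at_exposure, attained_age,
                         beta=0.5, gamma=-0.3, eta=-1.4):
    """BEIR VII-style solid-cancer ERR with a log-linear age-at-exposure
    modifier and a power-law attained-age modifier:
        ERR = beta * D * exp(gamma * e_star) * (a / 60)**eta,
    where e_star = (min(e, 30) - 30) / 10. The parameter values here are
    illustrative placeholders, not the fitted values from the paper."""
    e_star = (min(age_at_exposure, 30) - 30) / 10
    return beta * dose_gy * math.exp(gamma * e_star) * (attained_age / 60.0) ** eta
```

The refinement the paper describes amounts to replacing these simple monotone modifiers with functions whose behaviour beyond age-at-exposure 40 and attained age 65 is constrained by the radiotherapy cohorts.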
