Science.gov

Sample records for proportional hazards models

  1. Proportional Hazards Models of Graduation

    ERIC Educational Resources Information Center

    Chimka, Justin R.; Reed-Rhoads, Teri; Barker, Kash

    2008-01-01

    Survival analysis is a statistical tool used to describe the duration between events. Many processes in medical research, engineering, and economics can be described using survival analysis techniques. This research involves studying engineering college student graduation using Cox proportional hazards models. Among male students with American…
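
    As a minimal illustration of the kind of Cox proportional hazards fit described above, the sketch below uses the Python lifelines package and its bundled Rossi recidivism dataset as a stand-in, since the student graduation data are not available here.

      from lifelines import CoxPHFitter
      from lifelines.datasets import load_rossi

      # Bundled recidivism data: 'week' is time to event, 'arrest' is the event indicator
      df = load_rossi()

      cph = CoxPHFitter()
      cph.fit(df, duration_col="week", event_col="arrest")
      cph.print_summary()   # hazard ratios with confidence intervals for each covariate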

  2. Sample size calculation for the proportional hazards cure model.

    PubMed

    Wang, Songfeng; Zhang, Jiajia; Lu, Wenbin

    2012-12-20

    In clinical trials with time-to-event endpoints, it is not uncommon to see a significant proportion of patients being cured (or long-term survivors), such as in trials for non-Hodgkin's lymphoma. The popularly used sample size formula derived under the proportional hazards (PH) model may not be appropriate for designing a survival trial with a cure fraction, because the PH model assumption may be violated. To account for a cure fraction, the PH cure model is widely used in practice, where a PH model is used for the survival times of uncured patients and a logistic regression model is used for the probability of patients being cured. In this paper, we develop a sample size formula on the basis of the PH cure model by investigating the asymptotic distributions of the standard weighted log-rank statistics under the null and local alternative hypotheses. The derived sample size formula under the PH cure model is more flexible because it can be used to test differences in the short-term survival and/or the cure fraction. Furthermore, we investigate, through numerical examples, the impact of accrual methods and the durations of the accrual and follow-up periods on sample size calculation. The results show that ignoring the cure rate in sample size calculation can lead to either underpowered or overpowered studies. We evaluate the performance of the proposed formula by simulation studies and provide an example to illustrate its application with the use of data from a melanoma trial. PMID:22786805
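
    For contrast with the cure-model formula developed in this paper, the sketch below implements the standard Schoenfeld-style sample size calculation under an ordinary PH model with no cure fraction, which is the formula the authors argue can mislead when a cure fraction is present. The inputs (hazard ratio, allocation fraction, event probability) are illustrative values, not from the paper.

      import numpy as np
      from scipy.stats import norm

      def ph_sample_size(hr, alpha=0.05, power=0.80, alloc=0.5, p_event=0.7):
          """Required events and total n under a standard PH model (no cure fraction)."""
          z_a = norm.ppf(1 - alpha / 2)
          z_b = norm.ppf(power)
          events = (z_a + z_b) ** 2 / (alloc * (1 - alloc) * np.log(hr) ** 2)
          return events, events / p_event

      events, n = ph_sample_size(hr=0.7)
      print(round(events), round(n))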

  3. A Mixture Proportional Hazards Model with Random Effects for Response Times in Tests

    ERIC Educational Resources Information Center

    Ranger, Jochen; Kuhn, Jörg-Tobias

    2016-01-01

    In this article, a new model for test response times is proposed that combines latent class analysis and the proportional hazards model with random effects in a similar vein as the mixture factor model. The model assumes the existence of different latent classes. In each latent class, the response times are distributed according to a…

  4. Proportional exponentiated link transformed hazards (ELTH) models for discrete time survival data with application

    PubMed Central

    Joeng, Hee-Koung; Chen, Ming-Hui; Kang, Sangwook

    2015-01-01

    Discrete survival data are routinely encountered in many fields of study including behavioral science, economics, epidemiology, medicine, and social science. In this paper, we develop a class of proportional exponentiated link transformed hazards (ELTH) models. We carry out a detailed examination of the role of links in fitting discrete survival data and estimating regression coefficients. Several interesting results are established regarding the choice of links and baseline hazards. We also characterize the conditions for improper survival functions and the conditions for existence of the maximum likelihood estimates under the proposed ELTH models. An extensive simulation study is conducted to examine the empirical performance of the parameter estimates under the Cox proportional hazards model when discrete survival times are treated as continuous, and of the model comparison criteria AIC and BIC in determining links and baseline hazards. A SEER breast cancer dataset is analyzed in detail to further demonstrate the proposed methodology. PMID:25772374
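
    As a rough, simplified illustration of the discrete-time hazard idea underlying this model class, the sketch below fits a person-period binary regression with a logit link (one member of the family of links studied here) to simulated data; the ELTH models themselves generalize this through exponentiated link transformations.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n, max_t = 200, 6
      x = rng.normal(size=n)

      # Simulate discrete event times from a constant logistic hazard depending on x
      rows = []
      for i in range(n):
          for t in range(1, max_t + 1):
              h = 1.0 / (1.0 + np.exp(-(-2.0 + 0.5 * x[i])))
              y = int(rng.random() < h)
              rows.append({"t": t, "x": x[i], "y": y})
              if y:
                  break
      pp = pd.DataFrame(rows)

      # Person-period logistic regression: the binary outcome is 'event in this period'
      fit = sm.Logit(pp["y"], sm.add_constant(pp[["x"]])).fit(disp=0)
      print(fit.params)   # slope on x should be near the true value 0.5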

  5. Teen Court Referral, Sentencing, and Subsequent Recidivism: Two Proportional Hazards Models and a Little Speculation

    ERIC Educational Resources Information Center

    Rasmussen, Andrew

    2004-01-01

    This study extends literature on recidivism after teen court to add system-level variables to demographic and sentence content as relevant covariates. Interviews with referral agents and survival analysis with proportional hazards regression supplement quantitative models that include demographic, sentencing, and case-processing variables in a…

  6. ELASTIC NET FOR COX'S PROPORTIONAL HAZARDS MODEL WITH A SOLUTION PATH ALGORITHM.

    PubMed

    Wu, Yichao

    2012-01-01

    For least squares regression, Efron et al. (2004) proposed an efficient solution path algorithm, the least angle regression (LAR). They showed that a slight modification of the LAR leads to the whole LASSO solution path. Both the LAR and LASSO solution paths are piecewise linear. Recently Wu (2011) extended the LAR to generalized linear models and the quasi-likelihood method. In this work we extend the LAR further to handle Cox's proportional hazards model. The goal is to develop a solution path algorithm for the elastic net penalty (Zou and Hastie (2005)) in Cox's proportional hazards model. This goal is achieved in two steps. First we extend the LAR to optimizing the log partial likelihood plus a fixed small ridge term. Then we define a path modification, which leads to the solution path of the elastic net regularized log partial likelihood. Our solution path is exact and piecewise determined by ordinary differential equation systems. PMID:23226932
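
    The exact solution path algorithm developed here is not part of standard Python packages, but an elastic-net penalized Cox partial likelihood can be fit at fixed penalty values with lifelines, as in the hedged sketch below; looping over a grid of penalties gives a crude approximation to a path.

      from lifelines import CoxPHFitter
      from lifelines.datasets import load_rossi

      df = load_rossi()

      # l1_ratio mixes the LASSO (1.0) and ridge (0.0) components of the penalty
      for lam in [0.01, 0.1, 1.0]:
          cph = CoxPHFitter(penalizer=lam, l1_ratio=0.5)
          cph.fit(df, duration_col="week", event_col="arrest")
          print(lam, cph.params_.round(3).to_dict())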

  7. A semi-parametric generalization of the Cox proportional hazards regression model: Inference and Applications.

    PubMed

    Devarajan, Karthik; Ebrahimi, Nader

    2011-01-01

    The assumption of proportional hazards (PH) fundamental to the Cox PH model sometimes may not hold in practice. In this paper, we propose a generalization of the Cox PH model in terms of the cumulative hazard function taking a form similar to the Cox PH model, with the extension that the baseline cumulative hazard function is raised to a power function. Our model allows for interaction between covariates and the baseline hazard and it also includes, for the two sample problem, the case of two Weibull distributions and two extreme value distributions differing in both scale and shape parameters. The partial likelihood approach cannot be applied here to estimate the model parameters. We use the full likelihood approach via a cubic B-spline approximation for the baseline hazard to estimate the model parameters. A semi-automatic procedure for knot selection based on Akaike's Information Criterion is developed. We illustrate the applicability of our approach using real-life data. PMID:21076652

  8. A semi-parametric generalization of the Cox proportional hazards regression model: Inference and Applications

    PubMed Central

    Devarajan, Karthik; Ebrahimi, Nader

    2010-01-01

    The assumption of proportional hazards (PH) fundamental to the Cox PH model sometimes may not hold in practice. In this paper, we propose a generalization of the Cox PH model in terms of the cumulative hazard function taking a form similar to the Cox PH model, with the extension that the baseline cumulative hazard function is raised to a power function. Our model allows for interaction between covariates and the baseline hazard and it also includes, for the two sample problem, the case of two Weibull distributions and two extreme value distributions differing in both scale and shape parameters. The partial likelihood approach cannot be applied here to estimate the model parameters. We use the full likelihood approach via a cubic B-spline approximation for the baseline hazard to estimate the model parameters. A semi-automatic procedure for knot selection based on Akaike's Information Criterion is developed. We illustrate the applicability of our approach using real-life data. PMID:21076652

  9. On Estimation of Covariate-Specific Residual Time Quantiles under the Proportional Hazards Model

    PubMed Central

    Crouch, Luis Alexander; May, Susanne; Chen, Ying Qing

    2015-01-01

    Estimation and inference in time-to-event analysis typically focus on hazard functions and their ratios under the Cox proportional hazards model. These hazard functions, while popular in the statistical literature, are not always easily or intuitively communicated in clinical practice, such as in the settings of patient counseling or resource planning. Expressing and comparing quantiles of event times may allow for easier understanding. In this article we focus on residual time, i.e., the remaining time-to-event at an arbitrary time t given that the event has yet to occur by t. In particular, we develop estimation and inference procedures for covariate-specific quantiles of the residual time under the Cox model. Our methods and theory are assessed by simulations, and demonstrated in analysis of two real data sets. PMID:26058825
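
    Numerically, the covariate-specific residual time quantile studied here is the value u solving S(t0 + u | Z) / S(t0 | Z) = 1 - q. The sketch below finds it by interpolation on a survival curve; the Weibull curve is only a placeholder for the covariate-specific survival function that would come from a fitted Cox model.

      import numpy as np

      def residual_time_quantile(times, surv, t0, q):
          """Smallest u with S(t0 + u) / S(t0) <= 1 - q, given a decreasing survival curve."""
          s_t0 = np.interp(t0, times, surv)
          target = (1 - q) * s_t0
          idx = np.searchsorted(-surv, -target)   # first index where surv <= target
          return times[min(idx, len(times) - 1)] - t0

      times = np.linspace(0, 20, 2001)
      surv = np.exp(-(times / 10.0) ** 1.5)       # placeholder Weibull survival curve

      print(residual_time_quantile(times, surv, t0=5.0, q=0.5))   # median residual time at t = 5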

  10. Cox Proportional Hazards Models for Modeling the Time to Onset of Decompression Sickness in Hypobaric Environments

    NASA Technical Reports Server (NTRS)

    Thompson, Laura A.; Chhikara, Raj S.; Conkin, Johnny

    2003-01-01

    In this paper we fit Cox proportional hazards models to a subset of data from the Hypobaric Decompression Sickness Databank. The data bank contains records on the time to decompression sickness (DCS) and venous gas emboli (VGE) for over 130,000 person-exposures to high altitude in chamber tests. The subset we use contains 1,321 records, with 87% censoring, and has the most recent experimental tests on DCS made available from Johnson Space Center. We build on previous analyses of this data set by considering more expanded models and more detailed model assessments specific to the Cox model. Our model - which is stratified on the quartiles of the final ambient pressure at altitude - includes the final ambient pressure at altitude as a nonlinear continuous predictor, the computed tissue partial pressure of nitrogen at altitude, and whether exercise was done at altitude. We conduct various assessments of our model, many of which were recently developed in the statistical literature, and conclude where the model needs improvement. We consider the addition of frailties to the stratified Cox model but find no significant gain over a model without frailties. Finally, we validate some of the models that we fit.
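
    A hedged sketch of the kind of stratified fit described above, using lifelines: stratifying on quartiles of a variable gives each quartile its own baseline hazard while sharing the regression coefficients. The file and column names are hypothetical placeholders, not the actual Hypobaric Decompression Sickness Databank fields.

      import pandas as pd
      from lifelines import CoxPHFitter

      # Hypothetical columns: 'time' to DCS or censoring, 'dcs' event flag,
      # 'p_final' final ambient pressure, 'ptn2' tissue N2 pressure, 'exercise' flag
      df = pd.read_csv("dcs_subset.csv")   # placeholder file name
      df["p_quartile"] = pd.qcut(df["p_final"], 4, labels=False)

      cph = CoxPHFitter()
      cph.fit(df[["time", "dcs", "ptn2", "exercise", "p_quartile"]],
              duration_col="time", event_col="dcs", strata=["p_quartile"])
      cph.print_summary()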

  11. Proportional-hazards models for improving the analysis of light-water-reactor-component failure data

    SciTech Connect

    Booker, J.B.; Johnson, M.E.; Easterling, R.G.

    1981-01-01

    The reliability of a power plant component may depend on a variety of factors (or covariates). If a single regression model can be specified to relate these factors to the failure rate, then all available data can be used to estimate and test for the effects of these covariates. One such model is a proportional hazards function that is specified as a product of two terms: a nominal hazard rate that is a function of time and a second term that is a function of the covariates. The purpose of this paper is to adapt two such models to LWR valve failure rate analysis, to compare the results, and to discuss the strengths and weaknesses of these applications.
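
    The product form referred to here is the standard proportional hazards decomposition: the hazard for a component with covariate vector z is lambda(t; z) = lambda0(t) * exp(beta'z), where lambda0(t) is the nominal (baseline) hazard rate depending only on time and exp(beta'z) carries the covariate effects.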

  12. Measures to assess the prognostic ability of the stratified Cox proportional hazards model.

    PubMed

    2009-02-01

    Many measures have been proposed to summarize the prognostic ability of the Cox proportional hazards (CPH) survival model, although none is universally accepted for general use. By contrast, little work has been done to summarize the prognostic ability of the stratified CPH model; such measures would be useful in analyses of individual participant data from multiple studies, data from multi-centre studies, and in single study analysis where stratification is used to avoid making assumptions of proportional hazards. We have chosen three measures developed for the unstratified CPH model (Schemper and Henderson's V, Harrell's C-index and Royston and Sauerbrei's D), adapted them for use with the stratified CPH model and demonstrated how their values can be represented over time. Although each of these measures is promising in principle, we found the measure of explained variation V very difficult to apply when data are combined from several studies with differing durations of participant follow-up. The two other measures considered, D and the C-index, were more applicable under such circumstances. We illustrate the methods using individual participant data from several prospective epidemiological studies of chronic disease outcomes. PMID:18833567
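
    Of the three measures considered, Harrell's C-index is the most readily computed with standard tools; the hedged sketch below shows the unstratified version using lifelines (the stratified adaptations described in this paper require extra bookkeeping not shown here).

      from lifelines import CoxPHFitter
      from lifelines.datasets import load_rossi
      from lifelines.utils import concordance_index

      df = load_rossi()
      cph = CoxPHFitter().fit(df, duration_col="week", event_col="arrest")

      # concordance_index expects scores where larger values mean longer survival,
      # so the predicted partial hazard is negated
      risk = cph.predict_partial_hazard(df)
      print(round(concordance_index(df["week"], -risk, df["arrest"]), 3))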

  13. Relaxing the independent censoring assumption in the Cox proportional hazards model using multiple imputation

    PubMed Central

    Jackson, Dan; White, Ian R; Seaman, Shaun; Evans, Hannah; Baisley, Kathy; Carpenter, James

    2014-01-01

    The Cox proportional hazards model is frequently used in medical statistics. The standard methods for fitting this model rely on the assumption of independent censoring. Although this is sometimes plausible, we often wish to explore how robust our inferences are as this untestable assumption is relaxed. We describe how this can be carried out in a way that makes the assumptions accessible to all those involved in a research project. Estimation proceeds via multiple imputation, where censored failure times are imputed under user-specified departures from independent censoring. A novel aspect of our method is the use of bootstrapping to generate proper imputations from the Cox model. We illustrate our approach using data from an HIV-prevention trial and discuss how it can be readily adapted and applied in other settings. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd. PMID:25060703

  14. Relaxing the independent censoring assumption in the Cox proportional hazards model using multiple imputation.

    PubMed

    Jackson, Dan; White, Ian R; Seaman, Shaun; Evans, Hannah; Baisley, Kathy; Carpenter, James

    2014-11-30

    The Cox proportional hazards model is frequently used in medical statistics. The standard methods for fitting this model rely on the assumption of independent censoring. Although this is sometimes plausible, we often wish to explore how robust our inferences are as this untestable assumption is relaxed. We describe how this can be carried out in a way that makes the assumptions accessible to all those involved in a research project. Estimation proceeds via multiple imputation, where censored failure times are imputed under user-specified departures from independent censoring. A novel aspect of our method is the use of bootstrapping to generate proper imputations from the Cox model. We illustrate our approach using data from an HIV-prevention trial and discuss how it can be readily adapted and applied in other settings. PMID:25060703

  15. On penalized likelihood estimation for a non-proportional hazards regression model.

    PubMed

    Devarajan, Karthik; Ebrahimi, Nader

    2013-07-01

    In this paper, a semi-parametric generalization of the Cox model that permits crossing hazard curves is described. A theoretical framework for estimation in this model is developed based on penalized likelihood methods. It is shown that the optimal solution to the baseline hazard, baseline cumulative hazard and their ratio are hyperbolic splines with knots at the distinct failure times. PMID:24791034

  16. REGULARIZATION FOR COX’S PROPORTIONAL HAZARDS MODEL WITH NP-DIMENSIONALITY*

    PubMed Central

    Fan, Jianqing; Jiang, Jiancheng

    2011-01-01

    High-throughput genetic sequencing arrays with thousands of measurements per sample, together with a great amount of related censored clinical data, have increased the need for better measurement-specific model selection. In this paper we establish strong oracle properties of non-concave penalized methods for non-polynomial (NP) dimensional data with censoring in the framework of Cox’s proportional hazards model. A class of folded-concave penalties is employed, and both LASSO and SCAD are discussed specifically. We address the question of under which dimensionality and correlation restrictions an oracle estimator can be constructed. It is demonstrated that non-concave penalties lead to a significant reduction of the “irrepresentable condition” needed for LASSO model selection consistency. A large deviation result for martingales, of interest in its own right, is developed for characterizing the strong oracle property. Moreover, the non-concave regularized estimator is shown to achieve asymptotically the information bound of the oracle estimator. A coordinate-wise algorithm is developed for finding the grid of solution paths for penalized hazard regression problems, and its performance is evaluated on simulated and gene association study examples. PMID:23066171

  17. Sparse estimation of Cox proportional hazards models via approximated information criteria.

    PubMed

    Su, Xiaogang; Wijayasinghe, Chalani S; Fan, Juanjuan; Zhang, Ying

    2016-09-01

    We propose a new sparse estimation method for Cox (1972) proportional hazards models by optimizing an approximated information criterion. The main idea involves approximation of the ℓ0 norm with a continuous or smooth unit dent function. The proposed method bridges the best subset selection and regularization by borrowing strength from both. It mimics the best subset selection using a penalized likelihood approach yet with no need of a tuning parameter. We further reformulate the problem with a reparameterization step so that it reduces to one unconstrained nonconvex yet smooth programming problem, which can be solved efficiently as in computing the maximum partial likelihood estimator (MPLE). Furthermore, the reparameterization tactic yields an additional advantage in terms of circumventing postselection inference. The oracle property of the proposed method is established. Both simulated experiments and empirical examples are provided for assessment and illustration. PMID:26873398

  18. A Bayesian proportional hazards regression model with non-ignorably missing time-varying covariates

    PubMed Central

    Bradshaw, Patrick T.; Ibrahim, Joseph G.; Gammon, Marilie D.

    2010-01-01

    Missing covariate data is common in observational studies of time to an event, especially when covariates are repeatedly measured over time. Failure to account for the missing data can lead to bias or loss of efficiency, especially when the data are non-ignorably missing. Previous work has focused on the case of fixed covariates rather than those that are repeatedly measured over the follow-up period, so here we present a selection model that allows for proportional hazards regression with time-varying covariates when some covariates may be non-ignorably missing. We develop a fully Bayesian model and obtain posterior estimates of the parameters via the Gibbs sampler in WinBUGS. We illustrate our model with an analysis of post-diagnosis weight change and survival after breast cancer diagnosis in the Long Island Breast Cancer Study Project (LIBCSP) follow-up study. Our results indicate that post-diagnosis weight gain is associated with lower all-cause and breast cancer specific survival among women diagnosed with new primary breast cancer. Our sensitivity analysis showed only slight differences between models with different assumptions on the missing data mechanism yet the complete case analysis yielded markedly different results. PMID:20960582

  19. Mediation Analysis with Survival Outcomes: Accelerated Failure Time vs. Proportional Hazards Models

    PubMed Central

    Gelfand, Lois A.; MacKinnon, David P.; DeRubeis, Robert J.; Baraldi, Amanda N.

    2016-01-01

    Objective: Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. Method: We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. Results: AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome—underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. Conclusions: When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results. PMID:27065906
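
    The comparison here uses SAS procedures PHREG and LIFEREG; as a rough Python analogue (only the survival-model piece, not the full mediation analysis), a semi-parametric Cox PH fit and a fully parametric Weibull AFT fit can be placed side by side with lifelines, as in this hedged sketch.

      from lifelines import CoxPHFitter, WeibullAFTFitter
      from lifelines.datasets import load_rossi

      df = load_rossi()

      cox = CoxPHFitter().fit(df, duration_col="week", event_col="arrest")
      aft = WeibullAFTFitter().fit(df, duration_col="week", event_col="arrest")

      # Cox coefficients are log hazard ratios; AFT coefficients are on the log-time scale,
      # so a protective covariate typically has opposite signs in the two summaries
      cox.print_summary()
      aft.print_summary()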

  20. Mortality and socio-economic differences in Denmark: a competing risks proportional hazard model.

    PubMed

    Munch, Jakob Roland; Svarer, Michael

    2005-03-01

    This paper explores how mortality is related to such socio-economic factors as education, occupation, skill level and income for the years 1992-1997 using an extensive sample of the Danish population. We employ a competing risks proportional hazard model to allow for different causes of death. This method is important as some factors have unequal (and sometimes opposite) influence on the cause-specific mortality rates. We find that the often-found inverse correlation between socio-economic status and mortality is to a large degree absent among Danish women who die of cancer. In addition, for men the negative correlation between socio-economic status and mortality prevails for some diseases, but for women we find that factors such as being married, income, wealth and education are not significantly associated with higher life expectancy. Marriage increases the likelihood of dying from cancer for women, early retirement prolongs survival for men, and homeownership increases life expectancy in general. PMID:15722260

  1. An accelerated life test model for solid lubricated bearings based on dependence analysis and proportional hazard effect

    NASA Astrophysics Data System (ADS)

    Zhang, Chao; Wang, Shaoping; Bai, Guanghan

    2014-02-01

    Solid lubricated bearings are important mechanical components in space applications, and accelerated life tests (ALT) of them are widely conducted. An ALT model is needed to infer the lifetime of solid lubricated bearings from ALT data. Previous accelerated life test models for solid lubricated bearings are mainly statistical models, whereas physical models imply an understanding of the failure mechanism and are preferred whenever possible. This paper proposes a physical model, called the copula dependent proportional hazards model. A solid lubricated bearing is considered as a system consisting of several dependent items, and a Clayton copula function is used to describe the dependence. A proportional hazard effect is also incorporated in the model. An ALT of a solid lubricated bearing is carried out, and the results show that this model is effective.
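
    As a hedged sketch of the dependence structure named here, the Clayton copula can be applied to the component survival probabilities to obtain a joint (system) survival probability; the exponential marginals and the dependence parameter below are placeholders, not values from the paper.

      import numpy as np

      def clayton_joint_survival(s1, s2, theta):
          """Clayton copula applied to two marginal survival probabilities (theta > 0)."""
          return (s1 ** (-theta) + s2 ** (-theta) - 1.0) ** (-1.0 / theta)

      t = np.linspace(0, 10, 101)
      s1 = np.exp(-0.10 * t)   # placeholder survival of component 1
      s2 = np.exp(-0.15 * t)   # placeholder survival of component 2

      s_indep = s1 * s2                                   # independent components
      s_dep = clayton_joint_survival(s1, s2, theta=2.0)   # positively dependent components
      print(s_indep[50], s_dep[50])   # dependence raises joint survival relative to independence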

  2. Three-Component Cure Rate Model for Non-Proportional Hazards Alternative in the Design of Randomized Clinical Trials

    PubMed Central

    Kim, Haesook Teresa; Gray, Robert

    2013-01-01

    BACKGROUND Cure rate models have been extensively studied and widely used in time-to-event data in cancer clinical trials. PURPOSE Although cure rate models based on the generalized exponential distribution have been developed, they have not been used in the design of randomized cancer clinical trials, which instead have relied exclusively on a two-component exponential cure rate model with a proportional hazards alternative. In some studies, the efficacy of the experimental treatment is expected to emerge some time after randomization. Since this does not conform to a proportional hazards alternative, such studies require a more flexible model to describe the alternative hypothesis. METHODS In this article, we report the study design of a phase III clinical trial of acute myeloid leukemia using a three-component exponential cure rate model to reflect the alternative hypothesis. A newly developed power calculation program that does not require the proportional hazards assumption was used. RESULTS Using a custom-made three-component cure rate model as an alternative hypothesis, the proposed sample size was 409, compared with a sample size of 209 under the assumption of an exponential distribution and 228 under the proportional hazards alternative. A simulation study was performed to present the degree of power loss when the alternative hypothesis is not appropriately specified. LIMITATIONS The power calculation program used in this study is for a single analysis and does not account for group sequential tests in phase III trials. However, the loss in power is small and this was handled by inflating the sample size by 5%. CONCLUSION Misspecification of the alternative hypothesis can result in a seriously underpowered study. We report examples of clinical trials that required a custom-made alternative hypothesis to reflect a later indication of experimental treatment efficacy. The proposed three-component cure rate model could be very useful for specifying non-proportional
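
    The building block behind this design is the mixture cure model, in which the population survival is S(t) = pi + (1 - pi) * Su(t) for cure fraction pi and survival Su(t) of the uncured; the three-component model in the article adds a further latent component. The hedged sketch below uses illustrative numbers to show why a pure cure-fraction effect is a non-proportional-hazards alternative.

      import numpy as np

      def mixture_cure_survival(t, cure_frac, rate):
          """Population survival under a two-component exponential cure model."""
          return cure_frac + (1.0 - cure_frac) * np.exp(-rate * t)

      t = np.linspace(0, 10, 101)
      s_control = mixture_cure_survival(t, cure_frac=0.30, rate=0.40)
      s_treat = mixture_cure_survival(t, cure_frac=0.45, rate=0.40)

      # Population hazard = density contributed by the uncured divided by population survival
      h_control = 0.70 * 0.40 * np.exp(-0.40 * t[1:]) / s_control[1:]
      h_treat = 0.55 * 0.40 * np.exp(-0.40 * t[1:]) / s_treat[1:]
      hr = h_treat / h_control
      print(hr[0], hr[-1])   # the hazard ratio drifts over follow-up: hazards are not proportional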

  3. Testing Goodness-of-Fit for the Proportional Hazards Model based on Nested Case-Control Data

    PubMed Central

    Lu, Wenbin; Liu, Mengling; Chen, Yi-Hau

    2014-01-01

    Nested case-control sampling is a popular design for large epidemiological cohort studies due to its cost effectiveness. A number of methods have been developed for the estimation of the proportional hazards model with nested case-control data; however, the evaluation of modeling assumptions has received less attention. In this paper, we propose a class of goodness-of-fit test statistics for testing the proportional hazards assumption based on nested case-control data. The test statistics are constructed based on asymptotically mean-zero processes derived from Samuelsen's maximum pseudo-likelihood estimation method. In addition, we develop an innovative resampling scheme to approximate the asymptotic distribution of the test statistics while accounting for the dependent sampling scheme of the nested case-control design. Numerical studies are conducted to evaluate the performance of our proposed approach, and an application to the Wilms' Tumor Study is given to illustrate the methodology. PMID:25298193

  4. Testing goodness-of-fit for the proportional hazards model based on nested case-control data.

    PubMed

    Lu, Wenbin; Liu, Mengling; Chen, Yi-Hau

    2014-12-01

    Nested case-control sampling is a popular design for large epidemiological cohort studies due to its cost effectiveness. A number of methods have been developed for the estimation of the proportional hazards model with nested case-control data; however, the evaluation of modeling assumptions has received less attention. In this article, we propose a class of goodness-of-fit test statistics for testing the proportional hazards assumption based on nested case-control data. The test statistics are constructed based on asymptotically mean-zero processes derived from Samuelsen's maximum pseudo-likelihood estimation method. In addition, we develop an innovative resampling scheme to approximate the asymptotic distribution of the test statistics while accounting for the dependent sampling scheme of the nested case-control design. Numerical studies are conducted to evaluate the performance of our proposed approach, and an application to the Wilms' Tumor Study is given to illustrate the methodology. PMID:25298193

  5. Estimating treatment effect in a proportional hazards model in randomized clinical trials with all-or-nothing compliance.

    PubMed

    Li, Shuli; Gray, Robert J

    2016-09-01

    We consider methods for estimating the treatment effect and/or the covariate by treatment interaction effect in a randomized clinical trial under noncompliance with time-to-event outcome. As in Cuzick et al. (2007), assuming that the patient population consists of three (possibly latent) subgroups based on treatment preference: the ambivalent group, the insisters, and the refusers, we estimate the effects among the ambivalent group. The parameters have causal interpretations under standard assumptions. The article contains two main contributions. First, we propose a weighted per-protocol (Wtd PP) estimator through incorporating time-varying weights in a proportional hazards model. In the second part of the article, under the model considered in Cuzick et al. (2007), we propose an EM algorithm to maximize a full likelihood (FL) as well as the pseudo likelihood (PL) considered in Cuzick et al. (2007). The E step of the algorithm involves computing the conditional expectation of a linear function of the latent membership, and the main advantage of the EM algorithm is that the risk parameters can be updated by fitting a weighted Cox model using standard software and the baseline hazard can be updated using closed-form solutions. Simulations show that the EM algorithm is computationally much more efficient than directly maximizing the observed likelihood. The main advantage of the Wtd PP approach is that it is more robust to model misspecifications among the insisters and refusers since the outcome model does not impose distributional assumptions among these two groups. PMID:26799700

  6. Effect of type traits on functional longevity of Czech Holstein cows estimated from a Cox proportional hazards model.

    PubMed

    Zavadilová, L; Němcová, E; Stípková, M

    2011-08-01

    Relationships between conformation traits and functional longevity in Holstein cows were evaluated using survival analysis. Functional longevity was defined as the number of days between the first calving and culling; that is, length of productive life. The data set consisted of 116,369 Holstein cows that first calved from 2003 to 2008. All cows used in the analysis were scored for conformation between d 30 and d 210 of their first lactation. The data included 48% censored records. Analyses were done separately for 20 linear descriptive type traits, 6 composite traits, and height at sacrum measured in centimeters. Cox proportional hazard models were fitted to analyze data. The hazard function was described as the product of a baseline hazard function and the time-independent effects of age at first calving and sire (random), and the time-dependent effects of stage of lactation and lactation number, herd, year and season, herd size, and 305-d milk production. The strongest relationship between a composite trait and functional longevity was for dairy form, followed by udder and final score. Among the descriptive type traits, the strongest relationships with longevity were found for body condition score, angularity, traits related to udder attachment, and udder depth. Foot and leg traits showed substantially lower effect on functional longevity, and the effect of foot angle was minimal. Functional longevity declined with decreased body condition score of cows. Cows with deep udders had significantly lower functional survival compared with cows with shallow udders. In addition, weak central ligament was associated with significant reduction of cow longevity. For dairy form and angularity, cows classified as very good were the worst with respect to longevity, whereas cows classified as poor were the best. An intermediate optimum was evident for rear legs rear view and rear legs set (side view), whereas cows with sickled legs had lower longevity than cows with straighter

  7. Comparison between linear and proportional hazard models for the analysis of age at first lambing in the Ripollesa breed.

    PubMed

    Casellas, J

    2016-03-01

    Age at first lambing (AFL) plays a key role in the reproductive performance of sheep flocks, although there are no genetic selection programs accounting for this trait in the sheep industry. This could be due to the non-Gaussian distribution pattern of AFL data, which must be properly accounted for by the analytical model. In this manuscript, two different parameterizations were implemented to analyze AFL in the Ripollesa sheep breed, that is, the skew-Gaussian mixed linear model (sGML) and the piecewise Weibull proportional hazards model (PWPH). Data were available from 10 235 ewes born between 1972 and 2013 in 14 purebred Ripollesa flocks located in the north-east region of Spain. On average, ewes gave their first lambing shortly after their first year and a half of life (590.9 days), and within-flock averages ranged between 523.4 days and 696.6 days. Model fit was compared using the deviance information criterion (DIC; the smaller the DIC statistic, the better the model fit). Model sGML was clearly penalized (DIC=200 059), whereas model PWPH provided smaller estimates and reached the minimum DIC when one cut point was added to the initial Weibull model (DIC=132 545). The pure Weibull baseline and parameterizations with two or more cut points were discarded due to larger DIC estimates (>134 200). The only systematic effect influencing AFL was the season of birth, where summer- and fall-born ewes showed a remarkable shortening of their AFL, whereas neither birth type nor birth weight had a relevant impact on this reproductive trait. On the other hand, heritability on the original scale derived from model PWPH was high, with a point estimate of 0.114 and a highest posterior density region ranging from 0.079 to 0.143. In conclusion, Gaussian-related mixed linear models should be avoided when analyzing AFL, whereas model PWPH must be viewed as a better alternative with superior goodness of fit; moreover, the additive genetic background underlying this

  8. Inferences on relative failure rates in stratified mark-specific proportional hazards models with missing marks, with application to HIV vaccine efficacy trials

    PubMed Central

    Gilbert, Peter B.; Sun, Yanqing

    2014-01-01

    This article develops hypothesis testing procedures for the stratified mark-specific proportional hazards model in the presence of missing marks. The motivating application is preventive HIV vaccine efficacy trials, where the mark is the genetic distance of an infecting HIV sequence to an HIV sequence represented inside the vaccine. The test statistics are constructed based on two-stage efficient estimators, which utilize auxiliary predictors of the missing marks. The asymptotic properties and finite-sample performances of the testing procedures are investigated, demonstrating double-robustness and effectiveness of the predictive auxiliaries to recover efficiency. The methods are applied to the RV144 vaccine trial. PMID:25641990

  9. Evaluation of a contract breeding management program in selected Ohio dairy herds with event-time analysis I. Cox proportional hazards models.

    PubMed

    Meadows, Cheyney; Rajala-Schultz, Päivi J; Frazer, Grant S; Meiring, Richard W; Hoblet, Kent H

    2006-12-18

    An observational study was conducted in order to assess the impact of a contract breeding program on the reproductive performance in a selected group of Ohio dairies using event-time analysis. The contract breeding program was offered by a breeding co-operative and featured tail chalking and daily evaluation of cows for insemination by co-operative technicians. Dairy employees no longer handled estrus detection activities. Between early 2002 and mid-2004, test-day records related to production and reproduction were obtained for 16,453 lactations representing 11,398 cows in a non-random sample of 31 dairies identified as well-managed client herds of the breeding co-operative. Of the 31 herds, 15 were using the contract breeding at the start of the data acquisition period, having started in the previous 2 years. The remaining 16 herds managed their own breeding program and used the co-operative for semen purchase. Cox proportional hazards modeling techniques were used to estimate the association of the contract breeding, as well as the effect of other significant predictors, with the hazard of pregnancy. Two separate Cox models were developed and compared: one that only considered fixed covariates and a second that included both fixed and time-varying covariates. Estimates of effects were expressed as the hazard ratio (HR) for pregnancy. Results of the fixed covariates model indicated that, controlling for breed, herd size, use of ovulation synchronization protocols in the herd, whether somatic cell score exceeded 4.5 prior to pregnancy or censoring, parity, calving season, and maximum test-day milk prior to pregnancy or censoring, the contract breeding program was associated with an increased hazard of pregnancy (HR=1.315; 95% CI 1.261-1.371). The results of the time-varying covariates model, which controlled for breed, herd size, use of ovulation synchronization protocols, somatic cell score above 4.5, parity, calving season, and testing season also found that the

  10. Using Swiss Webster mice to model Fetal Alcohol Spectrum Disorders (FASD): An analysis of multilevel time-to-event data through mixed-effects Cox proportional hazards models.

    PubMed

    Chi, Peter; Aras, Radha; Martin, Katie; Favero, Carlita

    2016-05-15

    Fetal Alcohol Spectrum Disorders (FASD) collectively describes the constellation of effects resulting from human alcohol consumption during pregnancy. Even with public awareness, the incidence of FASD is estimated to be upwards of 5% in the general population and is becoming a global health problem. The physical, cognitive, and behavioral impairments of FASD are recapitulated in animal models. Recently rodent models utilizing voluntary drinking paradigms have been developed that accurately reflect moderate consumption, which makes up the majority of FASD cases. The range in severity of FASD characteristics reflects the frequency, dose, developmental timing, and individual susceptibility to alcohol exposure. As most rodent models of FASD use C57BL/6 mice, there is a need to expand the stocks of mice studied in order to more fully understand the complex neurobiology of this disorder. To that end, we allowed pregnant Swiss Webster mice to voluntarily drink ethanol via the drinking in the dark (DID) paradigm throughout their gestation period. Ethanol exposure did not alter gestational outcomes as determined by no significant differences in maternal weight gain, maternal liquid consumption, litter size, or pup weight at birth or weaning. Despite seemingly normal gestation, ethanol-exposed offspring exhibit significantly altered timing to achieve developmental milestones (surface righting, cliff aversion, and open field traversal), as analyzed through mixed-effects Cox proportional hazards models. These results confirm Swiss Webster mice as a viable option to study the incidence and causes of ethanol-induced neurobehavioral alterations during development. Future studies in our laboratory will investigate the brain regions and molecules responsible for these behavioral changes. PMID:26765502

  11. The Effect of Ignoring Statistical Interactions in Regression Analyses Conducted in Epidemiologic Studies: An Example with Survival Analysis Using Cox Proportional Hazards Regression Model

    PubMed Central

    Vatcheva, KP; Lee, M; McCormick, JB; Rahbar, MH

    2016-01-01

    Objective To demonstrate the adverse impact of ignoring statistical interactions in regression models used in epidemiologic studies. Study design and setting Based on different scenarios involving known values for the coefficient of the interaction term in Cox regression models, we generated 1000 samples of size 600 each. The simulated samples and a real-life data set from the Cameron County Hispanic Cohort were used to evaluate the effect of ignoring statistical interactions in these models. Results Compared to correctly specified Cox regression models with interaction terms, misspecified models without interaction terms resulted in up to 8.95-fold bias in estimated regression coefficients. When data were generated from a purely additive Cox proportional hazards regression model, including the interaction between the two covariates resulted in only 2% estimated bias in the main-effect regression coefficient estimates and did not alter the main finding of no significant interaction. Conclusions When the effects are synergistic, the failure to account for an interaction effect could lead to bias and misinterpretation of the results, and in some instances to incorrect policy decisions. Best practices in regression analysis must include identification of interactions, including in the analysis of data from epidemiologic studies.
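
    A hedged sketch of the two model specifications compared in the simulations: a Cox model with main effects only versus one that adds a hand-built product term. The file and column names are hypothetical placeholders.

      import pandas as pd
      from lifelines import CoxPHFitter

      # Hypothetical data: duration 'T', event flag 'E', covariates 'x1' and 'x2'
      df = pd.read_csv("cohort.csv")        # placeholder file name
      df["x1_x2"] = df["x1"] * df["x2"]     # interaction (product) term

      main_only = CoxPHFitter().fit(df[["T", "E", "x1", "x2"]],
                                    duration_col="T", event_col="E")
      with_int = CoxPHFitter().fit(df[["T", "E", "x1", "x2", "x1_x2"]],
                                   duration_col="T", event_col="E")

      # A large shift in the main-effect coefficients between the two fits is the
      # kind of bias the simulation study quantifies
      print(main_only.params_)
      print(with_int.params_)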

  12. Addressing Loss of Efficiency Due to Misclassification Error in Enriched Clinical Trials for the Evaluation of Targeted Therapies Based on the Cox Proportional Hazards Model

    PubMed Central

    Tsai, Chen-An; Lee, Kuan-Ting; Liu, Jen-pei

    2016-01-01

    A key feature of precision medicine is that it takes individual variability at the genetic or molecular level into account in determining the best treatment for patients diagnosed with diseases detected by recently developed novel biotechnologies. The enrichment design is an efficient design that enrolls only the patients testing positive for specific molecular targets and randomly assigns them to the targeted treatment or the concurrent control. However, there is no diagnostic device with perfect accuracy and precision for detecting molecular targets. In particular, the positive predictive value (PPV) can be quite low for rare diseases with low prevalence. Under the enrichment design, some patients testing positive for specific molecular targets may not actually have the molecular targets. As a result, the efficacy of the targeted therapy may be underestimated in the patients that actually do have the molecular targets. To address the loss of efficiency due to misclassification error, we apply the discrete mixture modeling for time-to-event data proposed by Eng and Hanlon [8] to develop an inferential procedure, based on the Cox proportional hazards model, for the treatment effect of the targeted therapy in the true-positive patients with the molecular targets. Our proposed procedure incorporates both the inaccuracy of diagnostic devices and the uncertainty of estimated accuracy measures. We employ the expectation-maximization algorithm in conjunction with the bootstrap technique for estimation of the hazard ratio and its estimated variance. We report the results of simulation studies which empirically investigated the performance of the proposed method. Our proposed method is illustrated by a numerical example. PMID:27120450

  13. On graphical tests for proportionality of hazards in two samples.

    PubMed

    Sahoo, Shyamsundar; Sengupta, Debasis

    2016-03-15

    In this paper, we present a class of graphical tests of the proportional hazards hypothesis for two-sample censored survival data. The proposed tests are improvements over some existing tests based on asymptotic confidence bands of certain functions of the estimated cumulative hazard functions. The new methods are based on the comparison of unrestricted estimates of the said functions and their restricted versions under the hypothesis. They combine the rigour of analytical tests with the descriptive value of plots. Monte Carlo simulations suggest that the proposed asymptotic procedures have reasonable small sample properties. The power is much higher than existing graphical tests and comparable with existing analytical tests. The method is then illustrated through the analysis of a data set on bone marrow transplantation for Leukemia patients. PMID:26522814
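
    The classical graphical diagnostic these tests refine is the log(-log) plot: under proportional hazards, log(-log S(t)) curves estimated separately for the two samples should be roughly parallel. A hedged sketch with Kaplan-Meier estimates from a lifelines example dataset:

      import numpy as np
      import matplotlib.pyplot as plt
      from lifelines import KaplanMeierFitter
      from lifelines.datasets import load_rossi

      df = load_rossi()
      fig, ax = plt.subplots()

      for label, grp in df.groupby("fin"):   # two groups: financial aid no/yes
          kmf = KaplanMeierFitter().fit(grp["week"], grp["arrest"], label=str(label))
          s = kmf.survival_function_.iloc[:, 0]
          s = s[(s > 0) & (s < 1)]
          ax.plot(np.log(s.index.values), np.log(-np.log(s.values)), label=str(label))

      ax.set_xlabel("log(t)")
      ax.set_ylabel("log(-log S(t))")
      ax.legend()
      plt.show()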

  14. A multilevel excess hazard model to estimate net survival on hierarchical data allowing for non-linear and non-proportional effects of covariates.

    PubMed

    Charvat, Hadrien; Remontet, Laurent; Bossard, Nadine; Roche, Laurent; Dejardin, Olivier; Rachet, Bernard; Launoy, Guy; Belot, Aurélien

    2016-08-15

    The excess hazard regression model is an approach developed for the analysis of cancer registry data to estimate net survival, that is, the survival of cancer patients that would be observed if cancer was the only cause of death. Cancer registry data typically possess a hierarchical structure: individuals from the same geographical unit share common characteristics such as proximity to a large hospital that may influence access to and quality of health care, so that their survival times might be correlated. As a consequence, correct statistical inference regarding the estimation of net survival and the effect of covariates should take this hierarchical structure into account. It becomes particularly important as many studies in cancer epidemiology aim at studying the effect on the excess mortality hazard of variables, such as deprivation indexes, often available only at the ecological level rather than at the individual level. We developed here an approach to fit a flexible excess hazard model including a random effect to describe the unobserved heterogeneity existing between different clusters of individuals, and with the possibility to estimate non-linear and time-dependent effects of covariates. We demonstrated the overall good performance of the proposed approach in a simulation study that assessed the impact on parameter estimates of the number of clusters, their size and their level of unbalance. We then used this multilevel model to describe the effect of a deprivation index defined at the geographical level on the excess mortality hazard of patients diagnosed with cancer of the oral cavity. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26924122

  15. Estimating proportions of materials using mixture models

    NASA Technical Reports Server (NTRS)

    Heydorn, R. P.; Basu, R.

    1983-01-01

    An approach to proportion estimation based on the notion of a mixture model, appropriate parametric forms for a mixture model that appear to fit observed remotely sensed data, methods for estimating the parameters in these models, methods for labelling and proportion determination from the mixture model, and methods which use the mixture model estimates as auxiliary variable values in some proportion estimation schemes are addressed.

  16. Proportional hazards regression in epidemiologic follow-up studies: an intuitive consideration of primary time scale.

    PubMed

    Cologne, John; Hsu, Wan-Ling; Abbott, Robert D; Ohishi, Waka; Grant, Eric J; Fujiwara, Saeko; Cullings, Harry M

    2012-07-01

    In epidemiologic cohort studies of chronic diseases, such as heart disease or cancer, confounding by age can bias the estimated effects of risk factors under study. With Cox proportional-hazards regression modeling in such studies, it would generally be recommended that chronological age be handled nonparametrically as the primary time scale. However, studies involving baseline measurements of biomarkers or other factors frequently use follow-up time since measurement as the primary time scale, with no explicit justification. The effects of age are adjusted for by modeling age at entry as a parametric covariate. Parametric adjustment raises the question of model adequacy, in that it assumes a known functional relationship between age and disease, whereas using age as the primary time scale does not. We illustrate this graphically and show intuitively why the parametric approach to age adjustment using follow-up time as the primary time scale provides a poor approximation to age-specific incidence. Adequate parametric adjustment for age could require extensive modeling, which is wasteful, given the simplicity of using age as the primary time scale. Furthermore, the underlying hazard with follow-up time based on arbitrary timing of study initiation may have no inherent meaning in terms of risk. Given the potential for biased risk estimates, age should be considered as the preferred time scale for proportional-hazards regression with epidemiologic follow-up data when confounding by age is a concern. PMID:22517300
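
    A hedged sketch of the two specifications contrasted above, using lifelines and hypothetical column names: (a) follow-up time as the primary time scale with baseline age as a parametric covariate, and (b) age as the primary time scale, encoded as one (entry age, exit age] risk interval per subject with a counting-process style fitter.

      import pandas as pd
      from lifelines import CoxPHFitter, CoxTimeVaryingFitter

      # Hypothetical cohort data: 'followup', 'event', 'entry_age', 'exit_age', 'biomarker'
      df = pd.read_csv("cohort.csv")   # placeholder file name

      # (a) Follow-up time scale, age at entry adjusted as a covariate
      cph = CoxPHFitter().fit(df[["followup", "event", "entry_age", "biomarker"]],
                              duration_col="followup", event_col="event")

      # (b) Age as the time scale: subjects enter the risk set at entry_age, leave at exit_age
      df["id"] = range(len(df))
      ctv = CoxTimeVaryingFitter().fit(df[["id", "entry_age", "exit_age", "event", "biomarker"]],
                                       id_col="id", event_col="event",
                                       start_col="entry_age", stop_col="exit_age")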

  17. Evaluation of a two-part regression calibration to adjust for dietary exposure measurement error in the Cox proportional hazards model: A simulation study.

    PubMed

    Agogo, George O; van der Voet, Hilko; Van't Veer, Pieter; van Eeuwijk, Fred A; Boshuizen, Hendriek C

    2016-07-01

    Dietary questionnaires are prone to measurement error, which biases the perceived association between dietary intake and risk of disease. Short-term measurements are required to adjust for the bias in the association. For foods that are not consumed daily, the short-term measurements are often characterized by excess zeroes. Via a simulation study, the performance of a two-part calibration model that was developed for a single-replicate study design was assessed by mimicking leafy vegetable intake reports from the multicenter European Prospective Investigation into Cancer and Nutrition (EPIC) study. In part I of the fitted two-part calibration model, a logistic distribution was assumed; in part II, a gamma distribution was assumed. The model was assessed with respect to the magnitude of the correlation between the consumption probability and the consumed amount (hereafter, cross-part correlation), the number and form of covariates in the calibration model, the percentage of zero response values, and the magnitude of the measurement error in the dietary intake. From the simulation study results, transforming the dietary variable in the regression calibration to an appropriate scale was found to be the most important factor for the model performance. Reducing the number of covariates in the model could be beneficial, but was not critical in large-sample studies. The performance was remarkably robust when fitting a one-part rather than a two-part model. The model performance was minimally affected by the cross-part correlation. PMID:27003183

  18. Sample Size and Power for a Logrank Test and Cox Proportional Hazards Model with Multiple Groups and Strata, or a Quantitative Covariate with Multiple Strata

    PubMed Central

    Lachin, John M.

    2013-01-01

    General expressions are described for the evaluation of sample size and power for the K group Mantel-logrank test or the Cox PH model score test. Under an exponential model, the method of Lachin and Foulkes [1] for the 2 group case is extended to the K ≥ 2 group case using the non-centrality parameter of the K – 1 df chi-square test. Similar results are also shown to apply to the K group score test in a Cox PH model. Lachin and Foulkes [1] employed a truncated exponential distribution to provide for a non-linear rate of enrollment. Expressions for the mean time of enrollment and the expected follow-up time in the presence of exponential losses-to-follow-up are presented. When used with the expression for the non-centrality parameter of the test, equations are derived for the evaluation of sample size and power under specific designs with R years of recruitment and T years total duration. Sample size and power are also described for a stratified-adjusted K group test and for the assessment of a group by stratum interaction. Similar computations are described for a stratified-adjusted analysis of a quantitative covariate and a test of a stratum by covariate interaction in the Cox PH model. PMID:23670965
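
    Structurally, the power calculation described above reduces to evaluating a non-central chi-square distribution with K - 1 degrees of freedom at the chi-square critical value; the hedged sketch below shows that final step, with the non-centrality parameter supplied as a given number rather than derived from the design expressions in the paper.

      from scipy.stats import chi2, ncx2

      def kgroup_power(noncentrality, k, alpha=0.05):
          """Power of a K-group (K - 1 df) chi-square test for a given non-centrality."""
          df = k - 1
          crit = chi2.ppf(1 - alpha, df)
          return ncx2.sf(crit, df, noncentrality)

      print(round(kgroup_power(12.0, k=3), 3))   # illustrative non-centrality of 12 with 3 groups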

  19. Patient-specific meta-analysis for risk assessment using multivariate proportional hazards regression

    PubMed Central

    Crager, Michael R.; Tang, Gong

    2015-01-01

    We propose a method for assessing an individual patient’s risk of a future clinical event using clinical trial or cohort data and Cox proportional hazards regression, combining the information from several studies using meta-analysis techniques. The method combines patient-specific estimates of the log cumulative hazard across studies, weighting by the relative precision of the estimates, using either fixed- or random-effects meta-analysis calculations. Risk assessment can be done for any future patient using a few key summary statistics determined once and for all from each study. Generalizations of the method to logistic regression and linear models are immediate. We evaluate the methods using simulation studies and illustrate their application using real data. PMID:26664111
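
    A minimal sketch of the fixed-effect (inverse-variance) combination step described above, applied to hypothetical per-study estimates of a patient's log cumulative hazard; in practice these estimates and their standard errors would come from the per-study Cox model summaries.

      import numpy as np

      def fixed_effect_combine(estimates, std_errors):
          """Inverse-variance weighted pooling of per-study estimates."""
          estimates, std_errors = np.asarray(estimates), np.asarray(std_errors)
          w = 1.0 / std_errors ** 2
          pooled = np.sum(w * estimates) / np.sum(w)
          return pooled, np.sqrt(1.0 / np.sum(w))

      # Hypothetical patient-specific log cumulative hazards from three studies
      log_cum_hazard = [-1.9, -2.3, -2.0]
      se = [0.30, 0.45, 0.25]
      est, est_se = fixed_effect_combine(log_cum_hazard, se)
      risk = 1.0 - np.exp(-np.exp(est))   # implied event probability at the chosen horizon
      print(round(est, 3), round(est_se, 3), round(risk, 3))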

  20. Progress in studying scintillator proportionality: Phenomenological model

    SciTech Connect

    Bizarri, Gregory; Cherepy, Nerine; Choong, Woon-Seng; Hull, Giulia; Moses, William; Payne, Stephen; Singh, Jai; Valentine, John; Vasilev, Andrey; Williams, Richard

    2009-04-30

    We present a model to describe the origin of the non-proportional dependence of scintillator light yield on the energy of an ionizing particle. The non-proportionality is discussed in terms of energy relaxation channels and their linear and non-linear dependences on the deposited energy. In this approach, the scintillation response is described as a function of the deposited energy and the kinetic rates of each relaxation channel. This mathematical framework allows both a qualitative interpretation and a quantitative fitting representation of the scintillation non-proportionality response as a function of the kinetic rates. This method was successfully applied to thallium-doped sodium iodide measured with SLYNCI, a new facility using the Compton coincidence technique. Finally, attention is given to the physical meaning of the dominant relaxation channels, and to the potential causes responsible for the scintillation non-proportionality. We find that thallium-doped sodium iodide behaves as if non-proportionality is due to competition between radiative recombinations and non-radiative Auger processes.

  1. NASA CONNECT: Proportionality: Modeling the Future

    NASA Technical Reports Server (NTRS)

    2000-01-01

    'Proportionality: Modeling the Future' is the sixth of seven programs in the 1999-2000 NASA CONNECT series. Produced by NASA Langley Research Center's Office of Education, NASA CONNECT is an award-winning series of instructional programs designed to enhance the teaching of math, science and technology concepts in grades 5-8. NASA CONNECT establishes the 'connection' between the mathematics, science, and technology concepts taught in the classroom and NASA research. Each program in the series supports the national mathematics, science, and technology standards; includes a resource-rich teacher guide; and uses a classroom experiment and web-based activity to complement and enhance the math, science, and technology concepts presented in the program. NASA CONNECT is FREE and the programs in the series are in the public domain. Visit our web site and register. http://connect.larc.nasa.gov In 'Proportionality: Modeling the Future', students will examine how patterns, measurement, ratios, and proportions are used in the research, development, and production of airplanes.

  2. Boron-10 Lined Proportional Counter Model Validation

    SciTech Connect

    Lintereur, Azaree T.; Siciliano, Edward R.; Kouzes, Richard T.

    2012-06-30

    The Department of Energy Office of Nuclear Safeguards (NA-241) is supporting the project “Coincidence Counting With Boron-Based Alternative Neutron Detection Technology” at Pacific Northwest National Laboratory (PNNL) for the development of an alternative neutron coincidence counter. The goal of this project is to design, build and demonstrate a boron-lined proportional tube-based alternative system in the configuration of a coincidence counter. This report discusses the validation studies performed to establish the degree of accuracy of the computer modeling methods currently used to simulate the response of boron-lined tubes. This is the precursor to developing models for the uranium neutron coincidence collar under Task 2 of this project.

  3. Validation of a heteroscedastic hazards regression model.

    PubMed

    Wu, Hong-Dar Isaac; Hsieh, Fushing; Chen, Chen-Hsin

    2002-03-01

    A Cox-type regression model accommodating heteroscedasticity, with a power factor of the baseline cumulative hazard, is investigated for analyzing data with crossing hazards behavior. Since the partial likelihood approach cannot eliminate the baseline hazard, an overidentified estimating equation (OEE) approach is introduced in the estimation procedure. Its by-product, a model-checking statistic, is presented to test for the overall adequacy of the heteroscedastic model. Further, under the heteroscedastic model setting, we propose two statistics to test the proportional hazards assumption. Implementation of this model is illustrated in a data analysis of a cancer clinical trial. PMID:11878222
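
    A routine check of the proportional hazards assumption, using the scaled-Schoenfeld-residual test available in the lifelines package rather than the OEE-based statistics proposed above, might look like the following sketch; the data file and column names are placeholders:

        import pandas as pd
        from lifelines import CoxPHFitter
        from lifelines.statistics import proportional_hazard_test

        df = pd.read_csv("trial.csv")                 # placeholder: time, event, covariates
        cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")

        # Score test based on scaled Schoenfeld residuals, one row per covariate
        result = proportional_hazard_test(cph, df, time_transform="rank")
        print(result.summary)                         # small p-values flag non-proportionality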

  4. Proportional Reasoning of Preservice Elementary Education Majors: An Epistemic Model of the Proportional Reasoning Construct.

    ERIC Educational Resources Information Center

    Fleener, M. Jayne

    Current research and learning theory suggest that a hierarchy of proportional reasoning exists that can be tested. Using G. Vergnaud's four complexity variables (structure, content, numerical characteristics, and presentation) and T. E. Kieren's model of rational number knowledge building, an epistemic model of proportional reasoning was…

  5. Using instrumental variables to estimate a Cox's proportional hazards regression subject to additive confounding

    PubMed Central

    Tosteson, Tor D.; Morden, Nancy E.; Stukel, Therese A.; O'Malley, A. James

    2014-01-01

    The estimation of treatment effects is one of the primary goals of statistics in medicine. Estimation based on observational studies is subject to confounding. Statistical methods for controlling bias due to confounding include regression adjustment, propensity scores and inverse probability weighted estimators. These methods require that all confounders are recorded in the data. The method of instrumental variables (IVs) can eliminate bias in observational studies even in the absence of information on confounders. We propose a method for integrating IVs within the framework of Cox's proportional hazards model and demonstrate the conditions under which it recovers the causal effect of treatment. The methodology is based on the approximate orthogonality of an instrument with unobserved confounders among those at risk. We derive an estimator as the solution to an estimating equation that resembles the score equation of the partial likelihood in much the same way as the traditional IV estimator resembles the normal equations. To justify this IV estimator for a Cox model we perform simulations to evaluate its operating characteristics. Finally, we apply the estimator to an observational study of the effect of coronary catheterization on survival. PMID:25506259

  6. Identifying and modeling safety hazards

    SciTech Connect

    Daniels, Jesse; Bahill, Terry; Werner, Paul W.

    2000-03-29

    The hazard model described in this paper is designed to accept data over the Internet from distributed databases. A hazard object template is used to ensure that all necessary descriptors are collected for each object. Three methods for combining the data are compared and contrasted. Three methods are used for handling the three types of interactions between the hazard objects.

  7. Populational Growth Models Proportional to Beta Densities with Allee Effect

    NASA Astrophysics Data System (ADS)

    Aleixo, Sandra M.; Rocha, J. Leonel; Pestana, Dinis D.

    2009-05-01

    We consider population growth models with Allee effect, proportional to beta densities with shape parameters p and 2, where the dynamical complexity is related to the Malthusian parameter r. For p>2, these models exhibit population dynamics with a natural Allee effect. However, in the case of 1<p≤2, the models do not include this effect. In order to enforce it, we present some alternative models and investigate their dynamics, presenting some important results.

  8. Empirical study of correlated survival times for recurrent events with proportional hazards margins and the effect of correlation and censoring

    PubMed Central

    2013-01-01

    Background: In longitudinal studies where subjects experience recurrent incidents over a period of time, such as respiratory infections, fever or diarrhea, statistical methods are required to take into account the within-subject correlation. Methods: For repeated events data with censored failure, the independent increment (AG), marginal (WLW) and conditional (PWP) models are three multiple-failure models that generalize Cox's proportional hazards model. In this paper, we review the efficiency, accuracy and robustness of all three models under simulated scenarios with varying degrees of within-subject correlation, censoring levels, maximum number of possible recurrences and sample size. We also study the methods' performance on a real dataset from a cohort study with bronchial obstruction. Results: We find substantial differences between methods, and there is no single optimal method. AG and PWP seem to be preferable to WLW for low correlation levels, but the situation reverses for high correlations. Conclusions: All methods are stable in the presence of censoring, worsen with increasing recurrence levels, and share a bias problem which, among other consequences, makes asymptotic normal confidence intervals not fully reliable, although they are well developed theoretically. PMID:23883000
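
    A sketch of the counting-process (start, stop] layout on which the AG model operates, fitted here with the lifelines CoxTimeVaryingFitter; the tiny data set and column names are invented for illustration. The PWP model would additionally stratify on the event number, and WLW would fit separate marginal models per recurrence with a robust variance.

        import pandas as pd
        from lifelines import CoxTimeVaryingFitter

        # One row per at-risk interval per subject: (start, stop], event indicator, covariate
        rows = [
            # id, start, stop, event, trt
            (1, 0,  5, 1, 1), (1, 5, 12, 1, 1), (1, 12, 20, 0, 1),
            (2, 0,  8, 1, 0), (2, 8, 15, 0, 0),
            (3, 0, 20, 0, 1),
        ]
        df = pd.DataFrame(rows, columns=["id", "start", "stop", "event", "trt"])

        # Andersen-Gill: common baseline hazard, intervals treated as independent increments
        ctv = CoxTimeVaryingFitter()
        ctv.fit(df, id_col="id", start_col="start", stop_col="stop", event_col="event")
        ctv.print_summary()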

  9. Experiments to Determine Whether Recursive Partitioning (CART) or an Artificial Neural Network Overcomes Theoretical Limitations of Cox Proportional Hazards Regression

    NASA Technical Reports Server (NTRS)

    Kattan, Michael W.; Hess, Kenneth R.

    1998-01-01

    New computationally intensive tools for medical survival analyses include recursive partitioning (also called CART) and artificial neural networks. A challenge that remains is to better understand the behavior of these techniques in an effort to know when they will be effective tools. Theoretically, they may overcome limitations of the traditional multivariable survival technique, the Cox proportional hazards regression model. Experiments were designed to test whether the new tools would, in practice, overcome these limitations. Two datasets in which theory suggests CART and the neural network should outperform the Cox model were selected. The first was a published leukemia dataset manipulated to have a strong interaction that CART should detect. The second was a published cirrhosis dataset with pronounced nonlinear effects that a neural network should fit. Repeated sampling of 50 training and testing subsets was applied to each technique. The concordance index C was calculated as a measure of predictive accuracy by each technique on the testing dataset. In the interaction dataset, CART outperformed Cox (P less than 0.05) with a C improvement of 0.1 (95% CI, 0.08 to 0.12). In the nonlinear dataset, the neural network outperformed the Cox model (P less than 0.05), but by a very slight amount (0.015). As predicted by theory, CART and the neural network were able to overcome limitations of the Cox model. Experiments like these are important to increase our understanding of when one of these new techniques will outperform the standard Cox model. Further research is necessary to predict which technique will do best a priori and to assess the magnitude of superiority.
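
    The concordance index used as the accuracy measure above can be computed directly from pairwise comparisons; a minimal sketch with censoring handled in the usual Harrell fashion (only pairs whose ordering is unambiguous are counted) follows, with invented example values:

        import numpy as np

        def concordance_index(time, event, risk_score):
            """Harrell's C: among usable pairs, the fraction where the subject with the
            higher risk score fails earlier. Ties in the risk score count as 1/2."""
            time, event, risk = map(np.asarray, (time, event, risk_score))
            concordant, usable = 0.0, 0
            n = len(time)
            for i in range(n):
                for j in range(n):
                    # pair is usable if subject i is observed to fail before subject j's time
                    if event[i] == 1 and time[i] < time[j]:
                        usable += 1
                        if risk[i] > risk[j]:
                            concordant += 1.0
                        elif risk[i] == risk[j]:
                            concordant += 0.5
            return concordant / usable

        # Example: higher risk scores should correspond to shorter survival
        c = concordance_index([2, 4, 6, 8], [1, 1, 0, 1], [0.9, 0.7, 0.5, 0.1])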

  10. Computer Model Locates Environmental Hazards

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Catherine Huybrechts Burton founded San Francisco-based Endpoint Environmental (2E) LLC in 2005 while she was a student intern and project manager at Ames Research Center with NASA's DEVELOP program. The 2E team created the Tire Identification from Reflectance model, which algorithmically processes satellite images using turnkey technology to retain only the darkest parts of an image. This model allows 2E to locate piles of rubber tires, which often are stockpiled illegally and cause hazardous environmental conditions and fires.

  11. Proportional and scale change models to project failures of mechanical components with applications to space station

    NASA Technical Reports Server (NTRS)

    Taneja, Vidya S.

    1996-01-01

    In this paper we develop the mathematical theory of proportional and scale change models to perform reliability analysis. The results obtained will be applied to the Reaction Control System (RCS) thruster valves on an orbiter. With the advent of extended EVAs associated with PROX OPS (ISSA & MIR) and docking, the loss of a thruster valve now takes on an expanded safety significance. Previous studies assume a homogeneous population of components with each component having the same failure rate. However, as various components experience different stresses and are exposed to different environments, their failure rates change with time. In this paper we model the reliability of the thruster valves by treating them as a censored repairable system. The model for each valve takes the form of a nonhomogeneous process with an intensity function that is treated either as a proportional hazards model or as a scale change random effects hazard model. Each component has an associated z, an independent realization of the random variable Z from a distribution G(z). This unobserved quantity z can be used to describe heterogeneity systematically. Methods for estimating the model parameters from censored data are developed for the various models. Available field data (from previously flown flights) come from non-renewable systems. The estimated failure rate using such data will need to be modified for renewable systems such as thruster valves.
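
    A small simulation sketch of the kind of model described above: a nonhomogeneous Poisson process with a power-law baseline intensity scaled by a valve-specific frailty z drawn from a gamma distribution. The parameter values and the thinning-based simulator are illustrative only, not the estimators developed in the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def simulate_nhpp(t_end, intensity, lam_max):
            """Simulate event times on [0, t_end] for intensity(t) <= lam_max by thinning."""
            times, t = [], 0.0
            while True:
                t += rng.exponential(1.0 / lam_max)        # candidate from homogeneous process
                if t > t_end:
                    return np.array(times)
                if rng.uniform() < intensity(t) / lam_max:  # accept with prob intensity/lam_max
                    times.append(t)

        beta, theta = 1.5, 100.0                            # illustrative power-law parameters
        z = rng.gamma(shape=2.0, scale=0.5)                 # unobserved frailty for one valve
        intensity = lambda t: z * (beta / theta) * (t / theta) ** (beta - 1)
        lam_max = intensity(500.0)                          # intensity is increasing for beta > 1
        failures = simulate_nhpp(500.0, intensity, lam_max)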

  12. Models of volcanic eruption hazards

    SciTech Connect

    Wohletz, K.H.

    1992-01-01

    Volcanic eruptions pose an ever-present but poorly constrained hazard to life and property for geothermal installations in volcanic areas. Because eruptions occur sporadically and may limit field access, quantitative and systematic field studies of eruptions are difficult to complete. Circumventing this difficulty, laboratory models and numerical simulations are pivotal in building our understanding of eruptions. For example, the results of fuel-coolant interaction experiments show that magma-water interaction controls many eruption styles. Applying these results, increasing numbers of field studies now document and interpret the role of external water in eruptions. Similarly, numerical simulations solve the fundamental physics of high-speed fluid flow and give quantitative predictions that elucidate the complexities of pyroclastic flows and surges. A primary goal of these models is to guide geologists in searching for critical field relationships and making their interpretations. Coupled with field work, modeling is beginning to allow more quantitative and predictive volcanic hazard assessments.

  13. Unified constitutive modeling for proportional and nonproportional cyclic plasticity responses

    NASA Astrophysics Data System (ADS)

    Krishna, Shree

    Several features of cyclic plasticity (e.g., cyclic hardening/softening, ratcheting, and relaxation) and their dependence on strain range, nonproportionality of loading, time, and temperature determine the stress-strain responses of materials under cyclic loading. Numerous efforts have been made in the past decades to characterize and model these responses. Many of these responses can be simulated reasonably by the existing constitutive models, but the same models would fail in simulating the structural responses, local stress-strain or global deformation. One of the reasons for this deficiency is that the constitutive models are not robust enough to simulate the cyclic plasticity responses when they interact with each other. This deficiency can be understood better or resolved by developing and validating constitutive models against a broad set of experimental responses, with two or more of the responses interacting with each other. This dissertation develops a unified constitutive model by studying the cyclic plasticity features in an integrated manner and validating the model by simulating a broad set of proportional and nonproportional cyclic plasticity responses. The study demonstrates the drawbacks of the existing nonlinear kinematic hardening model originally developed by Chaboche and then develops and incorporates novel ideas into the model for improving its cyclic response simulations. The Chaboche model is modified by incorporating strain-range-dependent cyclic hardening/softening through the kinematic hardening rule parameters, in addition to the conventional method of using only the isotropic hardening parameters. The nonproportional loading memory parameters of Tanaka and of Benallal and Marquis are incorporated to study the influence of nonproportionality. The model is assessed by simulating hysteresis loop shape, cyclic hardening-softening, cross-effect, cyclic relaxation, subsequent cyclic softening, and finally a series of ratcheting responses under

  14. Parametric mixture models to evaluate and summarize hazard ratios in the presence of competing risks with time-dependent hazards and delayed entry

    PubMed Central

    Lau, Bryan; Cole, Stephen R.; Gange, Stephen J.

    2010-01-01

    In the analysis of survival data, there are often competing events that preclude an event of interest from occurring. Regression analysis with competing risks is typically undertaken using a cause-specific proportional hazards model. However, modern alternative methods exist for the analysis of the subdistribution hazard with a corresponding subdistribution proportional hazards model. In this paper, we introduce a flexible parametric mixture model as a unifying method to obtain estimates of the cause-specific and subdistribution hazards and hazard ratio functions. We describe how these estimates can be summarized over time to give a single number that is comparable to the hazard ratio that is obtained from a corresponding cause-specific or subdistribution proportional hazards model. An application to the Women’s Interagency HIV Study is provided to investigate injection drug use and the time to either the initiation of effective antiretroviral therapy, or clinical disease progression as a competing event. PMID:21337360
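
    For comparison with the parametric mixture approach above, the cause-specific cumulative incidence function can also be estimated nonparametrically; a minimal one-sample sketch (Aalen-Johansen form, with invented event times and causes) follows:

        import numpy as np

        def cumulative_incidence(time, cause):
            """Nonparametric CIF for each cause; cause == 0 denotes censoring.
            CIF_k(t) = sum over event times u <= t of S(u-) * d_k(u) / n(u)."""
            time, cause = np.asarray(time), np.asarray(cause)
            grid = np.unique(time[cause > 0])
            causes = np.unique(cause[cause > 0])
            surv, cif = 1.0, {k: [] for k in causes}
            totals = {k: 0.0 for k in causes}
            for u in grid:
                at_risk = np.sum(time >= u)
                for k in causes:
                    d_k = np.sum((time == u) & (cause == k))
                    totals[k] += surv * d_k / at_risk
                    cif[k].append(totals[k])
                d_all = np.sum((time == u) & (cause > 0))
                surv *= 1.0 - d_all / at_risk              # overall survival just after u
            return grid, cif

        t = [2, 3, 3, 5, 7, 8, 10, 12]
        c = [1, 2, 0, 1, 1, 0, 2, 1]                        # 1 = event of interest, 2 = competing, 0 = censored
        grid, cif = cumulative_incidence(t, c)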

  15. Spatial extended hazard model with application to prostate cancer survival.

    PubMed

    Li, Li; Hanson, Timothy; Zhang, Jiajia

    2015-06-01

    This article develops a Bayesian semiparametric approach to the extended hazard model, with generalization to high-dimensional spatially grouped data. County-level spatial correlation is accommodated marginally through the normal transformation model of Li and Lin (2006, Journal of the American Statistical Association 101, 591-603), using a correlation structure implied by an intrinsic conditionally autoregressive prior. Efficient Markov chain Monte Carlo algorithms are developed, especially applicable to fitting very large, highly censored areal survival data sets. Per-variable tests for proportional hazards, accelerated failure time, and accelerated hazards are efficiently carried out with and without spatial correlation through Bayes factors. The resulting reduced, interpretable spatial models can fit significantly better than a standard additive Cox model with spatial frailties. PMID:25521422

  16. The Identification and Validation Process of Proportional Reasoning Attributes: An Application of a Proportional Reasoning Modeling Framework

    ERIC Educational Resources Information Center

    Tjoe, Hartono; de la Torre, Jimmy

    2014-01-01

    In this paper, we discuss the process of identifying and validating students' abilities to think proportionally. More specifically, we describe the methodology we used to identify these proportional reasoning attributes, beginning with the selection and review of relevant literature on proportional reasoning. We then continue with the…

  17. Degrees of Freedom in Modeling: Taking Certainty out of Proportion

    ERIC Educational Resources Information Center

    Peled, Irit; Bassan-Cincinatus, Ronit

    2005-01-01

    In its empirical part this paper establishes a general weak understanding of the process of applying a mathematical model. This is also evident in the way teachers regard the application of alternative sharing in their own problem solving and in relating to children's answers. The theoretical part analyses problems that are considered as…

  18. Natural phenomena hazards modeling project: extreme wind/tornado hazard models for Department of Energy sites

    SciTech Connect

    Coats, D.W.

    1984-02-01

    Lawrence Livermore National Laboratory (LLNL) has developed wind hazard models for the Office of Nuclear Safety (ONS), Department of Energy (DOE). The work is part of a three-phase effort aimed at establishing uniform building design criteria for seismic and wind hazards at DOE sites throughout the United States. In Phase 1, LLNL gathered information on the sites and their critical facilities, including nuclear reactors, fuel-reprocessing plants, high-level waste storage and treatment facilities, and special nuclear material facilities. In Phase 2, development of seismic and wind hazard models was initiated. These hazard models express the annual probability that the site will experience an earthquake or wind speed greater than some specified magnitude. This report summarizes the final wind/tornado hazard models recommended for each site and the methodology used to develop these models. 19 references, 29 figures, 9 tables.
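
    A minimal sketch of the kind of hazard curve such models express, fitting a Gumbel (Type I extreme value) distribution to hypothetical annual maximum wind speeds and reading off the annual exceedance probability for a design wind speed; the data and the choice of distribution are illustrative, not taken from the LLNL models.

        import numpy as np
        from scipy import stats

        # Hypothetical annual maximum wind speeds (m/s) at a site
        annual_max = np.array([28, 31, 25, 34, 29, 38, 27, 33, 30, 36, 26, 32])

        loc, scale = stats.gumbel_r.fit(annual_max)             # fit Type I extreme value distribution
        design_speed = 45.0
        p_exceed = stats.gumbel_r.sf(design_speed, loc, scale)  # annual exceedance probability
        return_period = 1.0 / p_exceed                          # mean recurrence interval in years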

  19. Facilitating Children's Proportional Reasoning: A Model of Reasoning Processes and Effects of Intervention on Strategy Change.

    ERIC Educational Resources Information Center

    Fujimura, Nobuyuki

    2001-01-01

    One hundred forty fourth-graders were asked to solve proportion problems about juice-mixing situations both before and after an intervention that used a manipulative model or other materials in three experiments. Results indicate that different approaches appear to be necessary to facilitate children's proportional reasoning, depending on the reasoning…

  20. Analysis of error-prone survival data under additive hazards models: measurement error effects and adjustments.

    PubMed

    Yan, Ying; Yi, Grace Y

    2016-07-01

    Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights into measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite-sample performance of our methods. PMID:26328545
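
    A minimal sketch of the regression calibration idea mentioned above, for a classical additive error model W = X + U with two replicate measurements per subject used to estimate the error variance; the data are invented, and the calibrated covariate would then be carried into the hazard model fit.

        import numpy as np

        def regression_calibration(w1, w2):
            """Replace an error-prone covariate by E[X | W] under W = X + U, U ~ N(0, s2_u).
            Two replicates per subject (w1, w2) identify the error variance."""
            w1, w2 = np.asarray(w1, float), np.asarray(w2, float)
            w_bar = (w1 + w2) / 2.0
            s2_u = np.var(w1 - w2, ddof=1) / 2.0          # error variance from replicate differences
            s2_w = np.var(w_bar, ddof=1)                  # variance of the averaged measurement
            s2_x = max(s2_w - s2_u / 2.0, 1e-12)          # implied variance of the true covariate
            mu = w_bar.mean()
            reliability = s2_x / (s2_x + s2_u / 2.0)
            return mu + reliability * (w_bar - mu)        # calibrated covariate E[X | W-bar]

        w1 = np.array([2.1, 3.4, 1.8, 4.0, 2.9])          # first replicate measurement
        w2 = np.array([2.4, 3.1, 2.0, 3.7, 3.2])          # second replicate measurement
        x_cal = regression_calibration(w1, w2)            # shrinks noisy values toward the mean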

  1. Modeling and Hazard Analysis Using STPA

    NASA Astrophysics Data System (ADS)

    Ishimatsu, Takuto; Leveson, Nancy; Thomas, John; Katahira, Masa; Miyamoto, Yuko; Nakao, Haruka

    2010-09-01

    A joint research project between MIT and JAXA/JAMSS is investigating the application of a new hazard analysis to the system and software in the HTV. Traditional hazard analysis focuses on component failures, but software does not fail in this way. Software most often contributes to accidents by commanding the spacecraft into an unsafe state (e.g., turning off the descent engines prematurely) or by not issuing required commands. That makes the standard hazard analysis techniques of limited usefulness on software-intensive systems, which describes most spacecraft built today. STPA is a new hazard analysis technique based on systems theory rather than reliability theory. It treats safety as a control problem rather than a failure problem. The goal of STPA, which is to create a set of scenarios that can lead to a hazard, is the same as that of FTA, but STPA includes a broader set of potential scenarios, including those in which no failures occur but problems arise due to unsafe and unintended interactions among the system components. STPA also provides more guidance to the analysts than traditional fault tree analysis does. Functional control diagrams are used to guide the analysis. In addition, JAXA uses a model-based system engineering development environment (created originally by Leveson and called SpecTRM) which also assists in the hazard analysis. One of the advantages of STPA is that it can be applied early in the system engineering and development process in a safety-driven design process, where hazard analysis drives the design decisions rather than waiting until reviews identify problems that are then costly or difficult to fix. It can also be applied in an after-the-fact analysis and hazard assessment, which is what we did in this case study. This paper describes the experimental application of STPA to the JAXA HTV in order to determine the feasibility and usefulness of the new hazard analysis technique. Because the HTV was originally developed using fault tree analysis

  2. Wind shear modeling for aircraft hazard definition

    NASA Technical Reports Server (NTRS)

    Frost, W.; Camp, D. W.; Wang, S. T.

    1978-01-01

    Mathematical models of wind profiles were developed for use in fast-time and manned flight simulation studies aimed at defining and eliminating wind shear hazards. A set of wind profiles and associated wind shear characteristics for stable and neutral boundary layers, thunderstorms, and frontal winds potentially encounterable by aircraft in the terminal area is given. Engineering models of wind shear for direct hazard analysis are presented in mathematical formulae, graphs, tables, and computer lookup routines. The wind profile data utilized to establish the models are described as to location, how they were obtained, time of observation, and number of data points up to 500 m. Recommendations, engineering interpretations, and guidelines for use of the data are given, and the range of applicability of the wind shear models is described.

  3. Fitting Proportional Odds Models to Educational Data in Ordinal Logistic Regression Using Stata, SAS and SPSS

    ERIC Educational Resources Information Center

    Liu, Xing

    2008-01-01

    The proportional odds (PO) model, which is also called the cumulative odds model (Agresti, 1996, 2002; Armstrong & Sloan, 1989; Long, 1997; Long & Freese, 2006; McCullagh, 1980; McCullagh & Nelder, 1989; Powers & Xie, 2000; O'Connell, 2006), is one of the most commonly used models for the analysis of ordinal categorical data and comes from the class…
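
    A Python counterpart of the Stata/SAS/SPSS fits discussed above can be sketched with the OrderedModel class in recent versions of statsmodels; the data file, outcome categories, and predictor names below are placeholders.

        import pandas as pd
        from statsmodels.miscmodels.ordinal_model import OrderedModel

        df = pd.read_csv("students.csv")                       # placeholder file and columns
        df["proficiency"] = df["proficiency"].astype(
            pd.CategoricalDtype(["low", "medium", "high"], ordered=True))

        # Proportional odds (cumulative logit) model: one set of slopes, ordered cutpoints
        model = OrderedModel(df["proficiency"], df[["ses", "hours_studied"]], distr="logit")
        result = model.fit(method="bfgs")
        print(result.summary())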

  4. Decision-Tree Models of Categorization Response Times, Choice Proportions, and Typicality Judgments

    ERIC Educational Resources Information Center

    Lafond, Daniel; Lacouture, Yves; Cohen, Andrew L.

    2009-01-01

    The authors present 3 decision-tree models of categorization adapted from T. Trabasso, H. Rollins, and E. Shaughnessy (1971) and use them to provide a quantitative account of categorization response times, choice proportions, and typicality judgments at the individual-participant level. In Experiment 1, the decision-tree models were fit to…

  5. Frequencies as proportions: Using a teaching model based on Pirie and Kieren's model of mathematical understanding

    NASA Astrophysics Data System (ADS)

    Wright, Vince

    2014-03-01

    Pirie and Kieren (1989, For the Learning of Mathematics, 9(3), 7-11; 1992, Journal of Mathematical Behavior, 11, 243-257; 1994a, Educational Studies in Mathematics, 26, 61-86; 1994b, For the Learning of Mathematics, 14(1), 39-43) created a model (P-K) that describes a dynamic and recursive process by which learners develop their mathematical understanding. The model was adapted to create the teaching model used in the New Zealand Numeracy Development Projects (Ministry of Education, 2007). A case study of a 3-week sequence of instruction with a group of eight 12- and 13-year-old students provided the data. The teacher/researcher used folding back to materials and images and progressing from materials to imaging to number properties to assist students to develop their understanding of frequencies as proportions. The data show that successful implementation of the model is dependent on the teacher noticing and responding to the layers of understanding demonstrated by the students and the careful selection of materials, problems and situations. It supports the use of the model as a useful part of teachers' instructional strategies and the importance of pedagogical content knowledge to the quality of the way the model is used.

  6. Fuzzy portfolio model with fuzzy-input return rates and fuzzy-output proportions

    NASA Astrophysics Data System (ADS)

    Tsaur, Ruey-Chyn

    2015-02-01

    In the finance market, a short-term investment strategy is usually applied in portfolio selection in order to reduce investment risk; however, the economy is uncertain and the investment period is short. Further, an investor has incomplete information for selecting a portfolio with crisp proportions for each chosen security. In this paper we present a new method of constructing a fuzzy portfolio model for the parameters of fuzzy-input return rates and fuzzy-output proportions, based on possibilistic mean-standard deviation models. Furthermore, we consider both excess and shortage of investment in different economic periods by using a fuzzy constraint for the sum of the fuzzy proportions, and we also account for the risks of securities investment and the vagueness of incomplete information during periods of economic depression in the portfolio selection. Finally, we present a numerical example of a portfolio selection problem to illustrate the proposed model, and a sensitivity analysis is performed based on the results.

  7. Phylogenetic Tree Reconstruction Accuracy and Model Fit when Proportions of Variable Sites Change across the Tree

    PubMed Central

    Grievink, Liat Shavit; Penny, David; Hendy, Michael D.; Holland, Barbara R.

    2010-01-01

    Commonly used phylogenetic models assume a homogeneous process through time in all parts of the tree. However, it is known that these models can be too simplistic as they do not account for nonhomogeneous lineage-specific properties. In particular, it is now widely recognized that as constraints on sequences evolve, the proportion and positions of variable sites can vary between lineages, causing heterotachy. The extent to which this model misspecification affects tree reconstruction is still unknown. Here, we evaluate the effect of changes in the proportions and positions of variable sites on model fit and tree estimation. We consider 5 current models of nucleotide sequence evolution in a Bayesian Markov chain Monte Carlo framework as well as maximum parsimony (MP). We show that for a tree with 4 lineages where 2 nonsister taxa undergo a change in the proportion of variable sites, tree reconstruction under the best-fitting model, which is chosen using a relative test, often results in the wrong tree. In this case, we found that an absolute test of model fit is a better predictor of tree estimation accuracy. We also found further evidence that MP is not immune to heterotachy. In addition, we show that increased sampling of taxa that have undergone a change in proportion and positions of variable sites is critical for accurate tree reconstruction. PMID:20525636

  8. Likelihood approaches for proportional likelihood ratio model with right-censored data.

    PubMed

    Zhu, Hong

    2014-06-30

    Regression methods for survival data with right censoring have been extensively studied under semiparametric transformation models such as the Cox regression model and the proportional odds model. However, their practical application could be limited because of possible violation of model assumption or lack of ready interpretation for the regression coefficients in some cases. As an alternative, in this paper, the proportional likelihood ratio model introduced by Luo and Tsai is extended to flexibly model the relationship between survival outcome and covariates. This model has a natural connection with many important semiparametric models such as generalized linear model and density ratio model and is closely related to biased sampling problems. Compared with the semiparametric transformation model, the proportional likelihood ratio model is appealing and practical in many ways because of its model flexibility and quite direct clinical interpretation. We present two likelihood approaches for the estimation and inference on the target regression parameters under independent and dependent censoring assumptions. Based on a conditional likelihood approach using uncensored failure times, a numerically simple estimation procedure is developed by maximizing a pairwise pseudo-likelihood. We also develop a full likelihood approach, and the most efficient maximum likelihood estimator is obtained by a profile likelihood. Simulation studies are conducted to assess the finite-sample properties of the proposed estimators and compare the efficiency of the two likelihood approaches. An application to survival data for bone marrow transplantation patients of acute leukemia is provided to illustrate the proposed method and other approaches for handling non-proportionality. The relative merits of these methods are discussed in concluding remarks. PMID:24500821

  9. Application of a hazard-based visual predictive check to evaluate parametric hazard models.

    PubMed

    Huh, Yeamin; Hutmacher, Matthew M

    2016-02-01

    Parametric models used in time-to-event analyses are typically evaluated by survival-based visual predictive checks (VPC): Kaplan-Meier survival curves for the observed data are compared with those estimated using model-simulated data. Because the derivative of the log of the survival curve is related to the hazard (the typical quantity modeled in parametric analysis), isolating, interpreting, and correcting deficiencies in the hazard model by inspection of survival-based VPCs is indirect and thus more difficult. The purpose of this study is to assess the performance of nonparametric estimators of the hazard function to evaluate their viability as VPC diagnostics. Histogram-based and kernel-smoothing estimators were evaluated in terms of bias in estimating the hazard for Weibull and bathtub-shape hazard scenarios. After the evaluation of bias, these nonparametric estimators were assessed as a method for VPC evaluation of the hazard model. The results showed that the nonparametric hazard estimators performed reasonably at the sample sizes studied, with greater bias near the boundaries (time equal to 0 and the last observation), as expected. Flexible bandwidth and boundary correction methods reduced these biases. All the nonparametric estimators indicated a misfit of the Weibull model when the true hazard was a bathtub shape. Overall, hazard-based VPC plots enabled more direct interpretation of the VPC results compared to survival-based VPC plots. PMID:26563504
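
    A minimal sketch of the kernel-smoothing idea evaluated above: smooth the Nelson-Aalen increments with an Epanechnikov kernel to obtain a nonparametric hazard estimate. Bandwidth choice and boundary correction are deliberately naive here, and the example data are invented.

        import numpy as np

        def kernel_hazard(time, event, grid, bandwidth):
            """Smooth Nelson-Aalen increments dH(t_i) = d_i / n_i with an Epanechnikov kernel."""
            time, event = np.asarray(time, float), np.asarray(event, int)
            event_times = np.unique(time[event == 1])
            increments = np.array([np.sum((time == u) & (event == 1)) / np.sum(time >= u)
                                   for u in event_times])
            hazard = np.zeros_like(grid, dtype=float)
            for i, t in enumerate(grid):
                u = (t - event_times) / bandwidth
                kern = np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0)  # Epanechnikov kernel
                hazard[i] = np.sum(kern * increments) / bandwidth
            return hazard

        t_grid = np.linspace(0, 10, 101)
        haz = kernel_hazard([1, 2, 2, 4, 5, 7, 9], [1, 1, 0, 1, 1, 0, 1], t_grid, bandwidth=2.0)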

  10. Modelling the determinants of 2000 m rowing ergometer performance: a proportional, curvilinear allometric approach.

    PubMed

    Nevill, A M; Allen, S V; Ingham, S A

    2011-02-01

    Previous studies have investigated the determinants of indoor rowing using correlations and linear regression. However, the power demands of ergometer rowing are proportional to the cube of the flywheel's (and boat's) speed. A rower's speed, therefore, should be proportional to the cube root (0.33) of the power expended. Hence, the purpose of the present study was to explore the relationship between 2000 m indoor rowing speed and various measures of power of 76 elite rowers using proportional, curvilinear allometric models. The best single predictor of 2000 m rowing ergometer performance was power at VO2max, (WVO2max)^0.28, which explained R² = 95.3% of the variance in rowing speed. The model realistically describes the greater increment in power required to improve a rower's performance by the same amount at higher speeds compared with that at slower speeds. Furthermore, the fitted exponent, 0.28 (95% confidence interval 0.226-0.334), encompasses 0.33, supporting the assumption that rowing speed is proportional to the cube root of the power expended. Despite an R² = 95.3%, the initial model was unable to explain "sex" and "weight-class" differences in rowing performances. By incorporating anaerobic as well as aerobic determinants, the resulting curvilinear allometric model was common to all rowers, irrespective of sex and weight class. PMID:19883389
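
    The proportional, curvilinear allometric form above reduces to a straight line on log-log axes, so the exponent can be estimated by ordinary least squares; a sketch with invented rower data follows.

        import numpy as np

        # Hypothetical 2000 m ergometer speeds (m/s) and power at VO2max (W)
        speed = np.array([4.6, 4.8, 5.0, 5.2, 5.4, 5.6])
        power = np.array([300, 340, 390, 440, 500, 560])

        # speed = a * power^b  =>  log(speed) = log(a) + b * log(power)
        b, log_a = np.polyfit(np.log(power), np.log(speed), 1)
        print(f"estimated exponent b = {b:.2f}")   # close to the theoretical 1/3 if the cube law holds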

  11. Regression model estimation of early season crop proportions: North Dakota, some preliminary results

    NASA Technical Reports Server (NTRS)

    Lin, K. K. (Principal Investigator)

    1982-01-01

    To estimate crop proportions early in the season, an approach is proposed based on: use of a regression-based prediction equation to obtain an a priori estimate for specific major crop groups; modification of this estimate using current-year LANDSAT and weather data; and a breakdown of the major crop groups into specific crops by regression models. Results from the development and evaluation of appropriate regression models for the first portion of the proposed approach are presented. The results show that the model predicts 1980 crop proportions very well at both the county and crop reporting district levels. In terms of planted acreage, the model underpredicted the 1980 published planted acreage at the county level by 9.1 percent. It predicted almost exactly the 1980 published planted acreage at the crop reporting district level, overpredicting by just 0.92 percent.

  12. Proportional Reasoning.

    ERIC Educational Resources Information Center

    Miller, Jane Lincoln; Fey, James T.

    2000-01-01

    Explores strategies to encourage students' understanding of proportional reasoning. Conducts a study to compare the proportional reasoning of students studying one of the new standards-based curricula with that of students from a control group. (ASK)

  13. Satellite image collection modeling for large area hazard emergency response

    NASA Astrophysics Data System (ADS)

    Liu, Shufan; Hodgson, Michael E.

    2016-08-01

    Timely collection of critical hazard information is the key to intelligent and effective hazard emergency response decisions. Satellite remote sensing imagery provides an effective way to collect critical information. Natural hazards, however, often have large impact areas - larger than a single satellite scene. Additionally, the hazard impact area may be discontinuous, particularly in flooding or tornado hazard events. In this paper, a spatial optimization model is proposed to solve the large-area satellite image acquisition planning problem in the context of hazard emergency response. In the model, a large hazard impact area is represented as multiple polygons, and image collection priorities for different portions of the impact area are addressed. The optimization problem is solved with an exact algorithm. Application results demonstrate that the proposed method can address the satellite image acquisition planning problem. A spatial decision support system supporting the optimization model was developed. Several examples of image acquisition problems are used to demonstrate the complexity of the problem and derive optimized solutions.

  14. A model for the secondary scintillation pulse shape from a gas proportional scintillation counter

    NASA Astrophysics Data System (ADS)

    Kazkaz, K.; Joshi, T. H.

    2016-03-01

    Proportional scintillation counters (PSCs), both single- and dual-phase, can measure the scintillation (S1) and ionization (S2) channels from particle interactions within the detector volume. The signal obtained from these detectors depends first on the physics of the medium (the initial scintillation and ionization) and second on how the physics of the detector manipulates the resulting photons and liberated electrons. In this paper we develop a model of the detector physics that incorporates event topology, detector geometry, electric field configuration, purity, optical properties of components, and wavelength shifters. We present an analytic form of the model, which allows for general study of detector design and operation, and a Monte Carlo model which enables a more detailed exploration of S2 events. This model may be used to study systematic effects in current detectors such as energy and position reconstruction, pulse shape discrimination, event topology, dead time calculations, purity, and electric field uniformity. We present a comparison of this model with experimental data collected with an argon gas proportional scintillation counter (GPSC), operated at 20 °C and 1 bar, and irradiated with an internal, collimated 55Fe source. Additionally, we discuss how the model may be incorporated in Monte Carlo simulations of both GPSCs and dual-phase detectors, increasing the reliability of the simulation results and allowing for tests of the experimental data analysis algorithms.

  15. Recent Progress in Modelling the RXTE Proportional Counter Array Instrumental Background

    NASA Astrophysics Data System (ADS)

    Jahoda, K.; Strohmayer, T. E.; Smith, D. A.; Stark, M. J.

    1999-04-01

    We present recent progress in the modelling of the instrumental background for the RXTE Proportional Counter Array. Unmodelled systematic errors for faint sources are now <= 0.2 ct/sec/3 PCU in the 2-10 keV band for data selected from the front layer. We present the status of our search for additional correlations. We also present extensions of the times and conditions under which the L7 model is applicable: to early mission times (prior to April 1996) and to sources as bright as ~ 3000 count/sec/detector (comparable to the Crab).

  16. Hazardous gas model evaluation with field observations

    NASA Astrophysics Data System (ADS)

    Hanna, S. R.; Chang, J. C.; Strimaitis, D. G.

    Fifteen hazardous gas models were evaluated using data from eight field experiments. The models include seven publicly available models (AFTOX, DEGADIS, HEGADAS, HGSYSTEM, INPUFF, OB/DG and SLAB), six proprietary models (AIRTOX, CHARM, FOCUS, GASTAR, PHAST and TRACE), and two "benchmark" analytical models (the Gaussian Plume Model and the analytical approximations to the Britter and McQuaid Workbook nomograms). The field data were divided into three groups: continuous dense gas releases (Burro LNG, Coyote LNG, Desert Tortoise NH3-gas and aerosols, Goldfish HF-gas and aerosols, and Maplin Sands LNG), continuous passive gas releases (Prairie Grass and Hanford), and instantaneous dense gas releases (Thorney Island freon). The dense gas models that produced the most consistent predictions of plume centerline concentrations across the dense gas data sets are the Britter and McQuaid, CHARM, GASTAR, HEGADAS, HGSYSTEM, PHAST, SLAB and TRACE models, with relative mean biases of about ±30% or less and magnitudes of relative scatter that are about equal to the mean. The dense gas models tended to overpredict the plume widths and underpredict the plume depths by about a factor of two. All models except GASTAR, TRACE, and the area source version of DEGADIS perform fairly well with the continuous passive gas data sets. Some sensitivity studies were also carried out. It was found that three of the more widely used publicly-available dense gas models (DEGADIS, HGSYSTEM and SLAB) predicted increases in concentration of about 70% as roughness length decreased by an order of magnitude for the Desert Tortoise and Goldfish field studies. It was also found that none of the dense gas models that were considered came close to simulating the observed factor of two increase in peak concentrations as averaging time decreased from several minutes to 1 s. Because of their assumption that a concentrated dense gas core existed that was unaffected by variations in averaging time, the dense gas

  17. Lahar Hazard Modeling at Tungurahua Volcano, Ecuador

    NASA Astrophysics Data System (ADS)

    Sorensen, O. E.; Rose, W. I.; Jaya, D.

    2003-04-01

    LAHARZ, which delineates lahar-hazard zones using a digital elevation model (DEM), was used to construct a hazard map for the volcano. The 10 meter resolution DEM was constructed for Tungurahua Volcano using scanned topographic lines obtained from the GIS Department at the Escuela Politécnica Nacional, Quito, Ecuador. The steep topographic gradients and rapid downcutting of most rivers draining the edifice prevent the deposition of lahars on the lower flanks of Tungurahua. Modeling confirms the high degree of flow channelization in the deep Tungurahua canyons. Inundation zones observed and shown by LAHARZ at Baños allow identification of zones within the city that would provide safety from even the largest-magnitude lahar expected.

  18. Minimum risk route model for hazardous materials

    SciTech Connect

    Ashtakala, B.; Eno, L.A.

    1996-09-01

    The objective of this study is to determine the minimum risk route for transporting a specific hazardous material (HM) between a point of origin and a point of destination (O-D pair) in the study area which minimizes risk to population and environment. The southern part of Quebec is chosen as the study area and major cities are identified as points of origin and destination on the highway network. Three classes of HM, namely chlorine gas, liquefied petroleum gas (LPG), and sulfuric acid, are chosen. A minimum risk route model has been developed to determine minimum risk routes between an O-D pair by using population or environment risk units as link impedances. The risk units for each link are computed by taking into consideration the probability of an accident and its consequences on that link. The results show that between the same O-D pair, the minimum risk routes are different for various HM. The concept of risk dissipation from origin to destination on the minimum risk route has been developed and dissipation curves are included.
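
    A minimal sketch of the routing step described above, treating per-link risk units as edge weights and finding the minimum-risk path with Dijkstra's algorithm via networkx; the toy network and risk values are invented, not taken from the study.

        import networkx as nx

        G = nx.Graph()
        # (origin, destination, risk units) for each highway link -- illustrative values only
        links = [("Montreal", "Laval", 4.0), ("Montreal", "Longueuil", 2.5),
                 ("Laval", "Trois-Rivieres", 6.5), ("Longueuil", "Trois-Rivieres", 7.5),
                 ("Trois-Rivieres", "Quebec", 5.0), ("Longueuil", "Quebec", 14.0)]
        G.add_weighted_edges_from(links, weight="risk")

        route = nx.shortest_path(G, "Montreal", "Quebec", weight="risk")          # minimum-risk route
        total_risk = nx.shortest_path_length(G, "Montreal", "Quebec", weight="risk")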

  19. Incident Duration Modeling Using Flexible Parametric Hazard-Based Models

    PubMed Central

    2014-01-01

    Assessing and prioritizing the duration and effects of traffic incidents on major roads present significant challenges for road network managers. This study examines the effect of numerous factors associated with various types of incidents on their duration and proposes an incident duration prediction model. Several parametric accelerated failure time hazard-based models were examined, including Weibull, log-logistic, log-normal, and generalized gamma, as well as all of these models with gamma heterogeneity, and flexible parametric hazard-based models with degrees of freedom ranging from one to ten, by analyzing a traffic incident dataset obtained from the Incident Reporting and Dispatching System in Beijing in 2008. Results show that different factors significantly affect different incident time phases, whose best-fitting distributions were diverse. Given the best hazard-based models for each incident time phase, the prediction results can be reasonable for most incidents. The results of this study can aid traffic incident management agencies not only in implementing strategies that would reduce incident duration, and thus reduce congestion, secondary incidents, and the associated human and economic losses, but also in effectively predicting incident duration time. PMID:25530753
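
    A sketch of fitting and comparing several parametric accelerated failure time forms, in the spirit of the comparison above, using the lifelines package; the incident data file and column names are placeholders, and the generalized gamma and flexible spline forms would require other tooling.

        import pandas as pd
        from lifelines import WeibullAFTFitter, LogNormalAFTFitter, LogLogisticAFTFitter

        df = pd.read_csv("incidents.csv")      # placeholder: duration, observed flag, covariates

        fitters = {"weibull": WeibullAFTFitter(),
                   "lognormal": LogNormalAFTFitter(),
                   "loglogistic": LogLogisticAFTFitter()}

        for name, aft in fitters.items():
            aft.fit(df, duration_col="duration", event_col="observed")
            print(name, aft.AIC_)              # pick the form with the lowest AIC per incident phase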

  1. Operation Reliability Assessment for Cutting Tools by Applying a Proportional Covariate Model to Condition Monitoring Information

    PubMed Central

    Cai, Gaigai; Chen, Xuefeng; Li, Bing; Chen, Baojia; He, Zhengjia

    2012-01-01

    The reliability of cutting tools is critical to machining precision and production efficiency. The conventional statistic-based reliability assessment method aims at providing a general and overall estimation of reliability for a large population of identical units under given and fixed conditions. However, it has limited effectiveness in depicting the operational characteristics of a cutting tool. To overcome this limitation, this paper proposes an approach to assess the operation reliability of cutting tools. A proportional covariate model is introduced to construct the relationship between operation reliability and condition monitoring information. The wavelet packet transform and an improved distance evaluation technique are used to extract sensitive features from vibration signals, and a covariate function is constructed based on the proportional covariate model. Ultimately, the failure rate function of the cutting tool being assessed is calculated using the baseline covariate function obtained from a small sample of historical data. Experimental results and a comparative study show that the proposed method is effective for assessing the operation reliability of cutting tools. PMID:23201980

  2. Natural Phenomena Hazards Modeling Project. Extreme wind/tornado hazard models for Department of Energy sites. Revision 1

    SciTech Connect

    Coats, D.W.; Murray, R.C.

    1985-08-01

    Lawrence Livermore National Laboratory (LLNL) has developed seismic and wind hazard models for the Office of Nuclear Safety (ONS), Department of Energy (DOE). The work is part of a three-phase effort aimed at establishing uniform building design criteria for seismic and wind hazards at DOE sites throughout the United States. This report summarizes the final wind/tornado hazard models recommended for each site and the methodology used to develop these models. Final seismic hazard models have been published separately by TERA Corporation. In the final phase, it is anticipated that the DOE will use the hazard models to establish uniform criteria for the design and evaluation of critical facilities. 19 refs., 3 figs., 9 tabs.

  3. A model based on crowdsourcing for detecting natural hazards

    NASA Astrophysics Data System (ADS)

    Duan, J.; Ma, C.; Zhang, J.; Liu, S.; Liu, J.

    2015-12-01

    Remote sensing technology provides a new method for the detection, early warning, mitigation, and relief of natural hazards. Given the suddenness and the unpredictability of the location of natural hazards, as well as the actual demands of hazards work, this article proposes an evaluation model for remote sensing detection of natural hazards based on crowdsourcing. Firstly, using the crowdsourcing model and with the help of the Internet and the power of hundreds of millions of Internet users, this evaluation model provides visual interpretation of high-resolution remote sensing images of hazard areas and collects massive amounts of valuable disaster data; secondly, this evaluation model adopts a dynamic voting consistency strategy to evaluate the disaster data provided by the crowdsourcing workers; thirdly, this evaluation model pre-estimates the disaster severity with a disaster pre-evaluation model based on regional buffers; lastly, the evaluation model activates the corresponding expert system according to the forecast results. The idea of this model breaks the boundaries between geographic information professionals and the public, makes public participation and citizen science a reality, and improves the accuracy and timeliness of hazard assessment results.

  4. Restenosis and the proportional neointimal response to coronary artery injury: results in a porcine model.

    PubMed

    Schwartz, R S; Huber, K C; Murphy, J G; Edwards, W D; Camrud, A R; Vlietstra, R E; Holmes, D R

    1992-02-01

    Restenosis is a reparative response to arterial injury occurring with percutaneous coronary revascularization. However, the quantitative characteristics of the relation between vessel injury and the magnitude of restenotic response remain unknown. This study was thus performed to determine the relation between severity of vessel wall injury and the thickness of resulting neointimal proliferation in a porcine model of coronary restenosis. Twenty-six porcine coronary artery segments in 24 pigs were subjected to deep arterial injury with use of overexpanded, percutaneously delivered tantalum wire coils. The vessels were studied microscopically 4 weeks after coil implantation to measure the relation between the extent of injury and the resulting neointimal thickness. For each wire site, a histopathologic score proportional to injury depth and the neointimal thicknesses at that site were determined. Mean injury scores were compared with both mean neointimal thickness and planimetry-derived area percent lumen stenosis. The severity of vessel injury strongly correlated with neointimal thickness and percent diameter stenosis (p less than 0.001). Neointimal proliferation resulting from a given wire was related to injury severity in adjacent wires, suggesting an interaction among effects at injured sites. If the results in this model apply to human coronary arteries, restenosis may depend on the degree of vessel injury sustained during angioplasty. PMID:1732351

  5. 2015 USGS Seismic Hazard Model for Induced Seismicity

    NASA Astrophysics Data System (ADS)

    Petersen, M. D.; Mueller, C. S.; Moschetti, M. P.; Hoover, S. M.; Ellsworth, W. L.; Llenos, A. L.; Michael, A. J.

    2015-12-01

    Over the past several years, the seismicity rate has increased markedly in multiple areas of the central U.S. Studies have tied the majority of this increased activity to wastewater injection in deep wells and hydrocarbon production. These earthquakes are induced by human activities that change rapidly based on economic and policy decisions, making them difficult to forecast. Our 2014 USGS National Seismic Hazard Model and previous models are intended to provide the long-term hazard (2% probability of exceedance in 50 years) and are based on seismicity rates and patterns observed mostly from tectonic earthquakes. However, potentially induced earthquakes were identified in 14 regions that were not included in the earthquake catalog used for constructing the 2014 model. We recognized the importance of considering these induced earthquakes in a separate hazard analysis, and as a result in April 2015 we released preliminary models that explored the impact of this induced seismicity on the hazard. Several factors are important in determining the hazard from induced seismicity: period of the catalog that optimally forecasts the next year's activity, earthquake magnitude-rate distribution, earthquake location statistics, maximum magnitude, ground motion models, and industrial drivers such as injection rates. The industrial drivers are not currently available in a form that we can implement in a 1-year model. Hazard model inputs have been evaluated by a broad group of scientists and engineers to assess the range of acceptable models. Results indicate that next year's hazard is significantly higher by more than a factor of three in Oklahoma, Texas, and Colorado compared to the long-term 2014 hazard model. These results have raised concern about the impacts of induced earthquakes on the built environment and have led to many engineering and policy discussions about how to mitigate these effects for the more than 7 million people that live near areas of induced seismicity.

  6. The high proportion of late HIV diagnoses in the USA is likely to stay: findings from a mathematical model.

    PubMed

    Xia, Qiang; Kobrak, Paul; Wiewel, Ellen W; Torian, Lucia V

    2015-01-01

    A static model of undiagnosed and diagnosed HIV infections by year of infection and year of diagnosis was constructed to examine the impact of changes in HIV case-finding and HIV incidence on the proportion of late diagnoses. With no changes in HIV case-finding or incidence, the proportion of late diagnoses in the USA would remain stable at the 2010 level, 32.0%; with a 10% increase in HIV case-finding and no changes in HIV incidence, the estimated proportion of late diagnoses would steadily decrease to 28.1% in 2019; with a 5% annual increase in HIV incidence and no changes in case-finding, the proportion would decrease to 25.2% in 2019; with a 5% annual decrease in HIV incidence and no change in case-finding, the proportion would steadily increase to 33.2% in 2019; with a 10% increase in HIV case-finding, accompanied by a 5% annual decrease in HIV incidence, the proportion would decrease from 32.0% to 30.3% in 2011, and then steadily increase to 35.2% in 2019. In all five scenarios, the proportion of late diagnoses would remain stable after 2019. The stability of the proportion is explained by the definition of the measure itself, as both the numerator and denominator are affected by HIV case-finding making the measure less sensitive. For this reason, we should cautiously interpret the proportion of late diagnoses as a marker of the success or failure of expanding HIV testing programs. PMID:25244628

  7. A high-resolution global flood hazard model

    NASA Astrophysics Data System (ADS)

    Sampson, Christopher C.; Smith, Andrew M.; Bates, Paul B.; Neal, Jeffrey C.; Alfieri, Lorenzo; Freer, Jim E.

    2015-09-01

    Floods are a natural hazard that affect communities worldwide, but to date the vast majority of flood hazard research and mapping has been undertaken by wealthy developed nations. As populations and economies have grown across the developing world, so too has demand from governments, businesses, and NGOs for modeled flood hazard data in these data-scarce regions. We identify six key challenges faced when developing a flood hazard model that can be applied globally and present a framework methodology that leverages recent cross-disciplinary advances to tackle each challenge. The model produces return period flood hazard maps at ˜90 m resolution for the whole terrestrial land surface between 56°S and 60°N, and results are validated against high-resolution government flood hazard data sets from the UK and Canada. The global model is shown to capture between two thirds and three quarters of the area determined to be at risk in the benchmark data without generating excessive false positive predictions. When aggregated to ˜1 km, mean absolute error in flooded fraction falls to ˜5%. The full complexity global model contains an automatically parameterized subgrid channel network, and comparison to both a simplified 2-D only variant and an independently developed pan-European model shows the explicit inclusion of channels to be a critical contributor to improved model performance. While careful processing of existing global terrain data sets enables reasonable model performance in urban areas, adoption of forthcoming next-generation global terrain data sets will offer the best prospect for a step-change improvement in model performance.

  8. Natural Phenomena Hazards Modeling Project: Flood hazard models for Department of Energy sites

    SciTech Connect

    Savy, J.B.; Murray, R.C.

    1988-05-01

    For eight sites, the evaluation of flood hazards was considered in two steps. First, a screening assessment was performed to determine whether flood hazards may impact DOE operations. The screening analysis consisted of a preliminary flood hazard assessment that provides an initial estimate of the site design basis. The second step involves a review of the vulnerability of on-site facilities by the site manager; based on the results of the preliminary flood hazard assessment and a review of site operations, the manager can decide whether flood hazards should be considered a part of the design basis. The scope of the preliminary flood hazard analysis was restricted to evaluating the flood hazards that may exist in proximity to a site. The analysis does not involve an assessment of the potential of encroachment of flooding at specific on-site locations. Furthermore, the screening analysis does not consider localized flooding at a site due to precipitation (i.e., local run-off, storm sewer capacity, roof drainage). These issues were reserved for consideration by the DOE site manager. 9 refs., 18 figs.

  9. Modeling the proportion of cut slopes rock on forest roads using artificial neural network and ordinal linear regression.

    PubMed

    Babapour, R; Naghdi, R; Ghajar, I; Ghodsi, R

    2015-07-01

    The rock proportion of subsoil directly influences the cost of embankment in forest road construction. Developing a reliable framework for estimating the rock ratio prior to road planning could therefore lead to lighter excavation and lower-cost operations. Rock proportion was predicted using an Artificial Neural Network (ANN) implemented in MATLAB and ordinal logistic regression (OLR) with five link functions, based on rock type and terrain slope properties. In addition to bedrock and slope maps, more than 100 samples of rock proportion, observed by geologists, were collected from the available bedrock in every slope class. Four predictive models were developed for rock proportion, employing the independent variables and applying both the selected probit link function of the OLR and Layer Recurrent and feed-forward back-propagation neural networks; for the ANN, different numbers of neurons were considered for the hidden layer(s). Goodness-of-fit measures showed that the ANN models produced better results than OLR, with R^2 = 0.72 and root mean square error = 0.42. Furthermore, to show the applicability of the proposed approach and to illustrate the variability of rock proportion resulting from model application, the optimum models were applied to a mountainous forest where a forest road network had been constructed in the past. PMID:26092244
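
    A rough sketch of the neural-network route compared above, using scikit-learn's MLPRegressor instead of the MATLAB networks used in the study. The features (coded rock type, slope) and the data are synthetic placeholders, not the field observations.

    ```python
    # Fit a small feed-forward network to predict rock proportion from rock type and slope,
    # then report R^2 and RMSE on the training data (illustration only).
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(1)
    n = 120
    rock_type = rng.integers(0, 4, n)                 # coded bedrock class
    slope = rng.uniform(5, 45, n)                     # terrain slope (degrees)
    rock_prop = np.clip(0.1 * rock_type + 0.01 * slope + rng.normal(0, 0.08, n), 0, 1)

    X = np.column_stack([rock_type, slope])
    ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, rock_prop)
    pred = ann.predict(X)

    rmse = float(np.sqrt(np.mean((rock_prop - pred) ** 2)))
    print("R2 =", round(r2_score(rock_prop, pred), 2), "RMSE =", round(rmse, 2))
    ```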

  10. Checking Fine and Gray Subdistribution Hazards Model with Cumulative Sums of Residuals

    PubMed Central

    Li, Jianing; Scheike, Thomas H.; Zhang, Mei-Jie

    2015-01-01

    Recently, Fine and Gray (1999) proposed a semi-parametric proportional regression model for the subdistribution hazard function, which has been used extensively for analyzing competing risks data. However, a lack of model adequacy could lead to severe bias in parameter estimation, and relatively little work has addressed checking the model assumptions. In this paper, we present a class of analytical methods and graphical approaches for checking the assumptions of Fine and Gray’s model. The proposed goodness-of-fit test procedures are based on the cumulative sums of residuals, which validate the model in three aspects: (1) proportionality of the hazard ratio, (2) the linear functional form, and (3) the link function. For each assumption test, we provide a p-value and a visual plot against the null hypothesis using a simulation-based approach. We also consider an omnibus test for overall evaluation against any model misspecification. The proposed tests perform well in simulation studies and are illustrated with two real data examples. PMID:25421251
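
    For reference, the proportional subdistribution hazards model whose assumptions are being checked can be written in standard notation (not copied from the paper) as:

    ```latex
    \lambda_1(t \mid Z) = \lambda_{10}(t)\,\exp(\beta^{\top} Z),
    \qquad
    \lambda_1(t) = -\frac{d}{dt}\log\{1 - F_1(t)\},
    ```

    where F_1(t) = Pr(T <= t, cause 1) is the cumulative incidence function for the event of interest; the cumulative-sum-of-residuals procedures probe the proportionality, functional form, and link assumptions of this specification.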

  11. Agent-based Modeling with MATSim for Hazards Evacuation Planning

    NASA Astrophysics Data System (ADS)

    Jones, J. M.; Ng, P.; Henry, K.; Peters, J.; Wood, N. J.

    2015-12-01

    Hazard evacuation planning requires robust modeling tools and techniques, such as least cost distance or agent-based modeling, to gain an understanding of a community's potential to reach safety before event (e.g. tsunami) arrival. Least cost distance modeling provides a static view of the evacuation landscape with an estimate of travel times to safety from each location in the hazard space. With this information, practitioners can assess a community's overall ability for timely evacuation. More information may be needed if evacuee congestion creates bottlenecks in the flow patterns. Dynamic movement patterns are best explored with agent-based models that simulate movement of and interaction between individual agents as evacuees through the hazard space, reacting to potential congestion areas along the evacuation route. The multi-agent transport simulation model MATSim is an agent-based modeling framework that can be applied to hazard evacuation planning. Developed jointly by universities in Switzerland and Germany, MATSim is open-source software written in Java and freely available for modification or enhancement. We successfully used MATSim to illustrate tsunami evacuation challenges in two island communities in California, USA, that are impacted by limited escape routes. However, working with MATSim's data preparation, simulation, and visualization modules in an integrated development environment requires a significant investment of time to develop the software expertise to link the modules and run a simulation. To facilitate our evacuation research, we packaged the MATSim modules into a single application tailored to the needs of the hazards community. By exposing the modeling parameters of interest to researchers in an intuitive user interface and hiding the software complexities, we bring agent-based modeling closer to practitioners and provide access to the powerful visual and analytic information that this modeling can provide.

  12. Toward Building a New Seismic Hazard Model for Mainland China

    NASA Astrophysics Data System (ADS)

    Rong, Y.; Xu, X.; Chen, G.; Cheng, J.; Magistrale, H.; Shen, Z.

    2015-12-01

    At present, the only publicly available seismic hazard model for mainland China was generated by Global Seismic Hazard Assessment Program in 1999. We are building a new seismic hazard model by integrating historical earthquake catalogs, geological faults, geodetic GPS data, and geology maps. To build the model, we construct an Mw-based homogeneous historical earthquake catalog spanning from 780 B.C. to present, create fault models from active fault data using the methodology recommended by Global Earthquake Model (GEM), and derive a strain rate map based on the most complete GPS measurements and a new strain derivation algorithm. We divide China and the surrounding regions into about 20 large seismic source zones based on seismotectonics. For each zone, we use the tapered Gutenberg-Richter (TGR) relationship to model the seismicity rates. We estimate the TGR a- and b-values from the historical earthquake data, and constrain corner magnitude using the seismic moment rate derived from the strain rate. From the TGR distributions, 10,000 to 100,000 years of synthetic earthquakes are simulated. Then, we distribute small and medium earthquakes according to locations and magnitudes of historical earthquakes. Some large earthquakes are distributed on active faults based on characteristics of the faults, including slip rate, fault length and width, and paleoseismic data, and the rest to the background based on the distributions of historical earthquakes and strain rate. We evaluate available ground motion prediction equations (GMPE) by comparison to observed ground motions. To apply appropriate GMPEs, we divide the region into active and stable tectonics. The seismic hazard will be calculated using the OpenQuake software developed by GEM. To account for site amplifications, we construct a site condition map based on geology maps. The resulting new seismic hazard map can be used for seismic risk analysis and management, and business and land-use planning.
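
    The tapered Gutenberg-Richter relation mentioned above is commonly written, for seismic moment M above a threshold M_t, in the following standard form (not quoted from the abstract):

    ```latex
    \Pr(\text{moment} > M) = \left(\frac{M_t}{M}\right)^{\beta}
    \exp\!\left(\frac{M_t - M}{M_c}\right), \qquad M \ge M_t,
    ```

    where beta is related to the catalog b-value (beta is approximately 2b/3) and the corner moment M_c tapers the tail; in the workflow described, the a- and b-values come from the historical catalog while the corner magnitude is constrained by the moment rate derived from the strain rate.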

  13. Comparison of Proportional and On/Off Solar Collector Loop Control Strategies Using a Dynamic Collector Model

    SciTech Connect

    Schiller, S. R.; Warren, M. L.; Auslander, D. M.

    1980-01-01

    Common control strategies used to regulate the flow of liquid through flat-plate solar collectors are discussed and evaluated using a dynamic collector model. Performance of all strategies is compared using different set points, flow rates, insolation levels and patterns (clear and cloudy days), and ambient temperature conditions. The unique characteristic of the dynamic collector model is that it includes effects of collector capacitance. In general, capacitance has a minimal effect on long term collector performance; however, short term temperature response and the energy-storage capability of collector capacitance are shown to play significant roles in comparing on/off and proportional controllers. Inclusion of these effects has produced considerably more realistic simulations than any generated by steady-state models. Simulations indicate relative advantages and disadvantages of both types of controllers, conditions under which each performs better, and the importance of pump cycling and controller set points on total energy collection. Results show that the turn-on set point is not always a critical factor in energy collection since collectors store energy while they warm up and during cycling; and, that proportional flow controllers provide improved energy collection only during periods of interrupted or very low insolation when the maximum possible energy collection is relatively low. Although proportional controllers initiate flow at lower insolation levels than on/off controllers, proportional controllers produce lower flow rates and higher average collector temperatures resulting in slightly lower instantaneous collection efficiencies.
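
    The toy simulation below compares on/off and proportional flow control on a single lumped-capacitance collector, in the spirit of the comparison described above. All parameters (area, capacitance, gains, set points, insolation profile) are illustrative placeholders, not those of the report's dynamic collector model.

    ```python
    # Lumped-capacitance collector: dT/dt = [A(eta0*G - U(T - Ta)) - m_dot*cp*(T - Tin)] / C.
    # Two controllers set m_dot: on/off with hysteresis, and proportional with saturation.
    import numpy as np

    A, C = 4.0, 2.0e4            # collector area (m^2), thermal capacitance (J/K)
    eta0, U = 0.75, 6.0          # optical efficiency, loss coefficient (W/m^2/K)
    cp, T_in, T_amb = 4186.0, 40.0, 15.0
    m_max = 0.05                 # maximum flow rate (kg/s)
    dt, hours = 10.0, 6.0
    t = np.arange(0, hours * 3600, dt)
    G = 600.0 * np.clip(np.sin(np.pi * t / t[-1]), 0, None)   # simple insolation profile

    def simulate(controller):
        T, energy, flow_on = T_in, 0.0, False
        for g in G:
            m_dot, flow_on = controller(T, flow_on)
            q_gain = A * (eta0 * g - U * (T - T_amb))
            q_load = m_dot * cp * (T - T_in)
            T += dt * (q_gain - q_load) / C
            energy += q_load * dt
        return energy / 3.6e6    # collected energy in kWh

    def on_off(T, flow_on):      # hysteresis: turn on above +8 K, off below +2 K
        if T - T_in > 8.0:
            flow_on = True
        elif T - T_in < 2.0:
            flow_on = False
        return (m_max if flow_on else 0.0), flow_on

    def proportional(T, flow_on):   # flow proportional to collector-inlet temperature difference
        return float(np.clip(0.01 * (T - T_in), 0.0, m_max)), flow_on

    print("on/off:", round(simulate(on_off), 2), "kWh")
    print("proportional:", round(simulate(proportional), 2), "kWh")
    ```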

  14. Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling

    SciTech Connect

    Li Yupeng Deutsch, Clayton V.

    2012-06-15

    In geostatistics, most stochastic algorithms for the simulation of categorical variables such as facies or rock types require a conditional probability distribution. The multivariate probability distribution of all the grouped locations, including the unsampled location, permits direct calculation of the conditional probability based on its definition. In this article, the iterative proportion fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iterative modification of an initial estimate of the multivariate probability, using lower order bivariate probabilities as constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. This algorithm can be extended to higher order marginal probability constraints, as used in multiple point statistics. The theoretical framework is developed and illustrated with an estimation and simulation example.
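
    A minimal sketch of iterative proportional fitting on a small bivariate probability table: an initial estimate is rescaled alternately so its row and column sums match imposed marginal probabilities. The table size and marginal values below are illustrative, not taken from a real facies model.

    ```python
    # Alternate row and column rescaling until the table reproduces the target marginals.
    import numpy as np

    p = np.full((3, 3), 1.0 / 9.0)            # initial joint probability estimate
    row_marg = np.array([0.5, 0.3, 0.2])      # target marginal proportions (e.g. facies along a well)
    col_marg = np.array([0.4, 0.4, 0.2])

    for _ in range(100):
        p *= (row_marg / p.sum(axis=1))[:, None]   # match row sums
        p *= (col_marg / p.sum(axis=0))[None, :]   # match column sums

    print(np.round(p, 3))
    print(p.sum(axis=1), p.sum(axis=0))
    ```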

  15. Three multimedia models used at hazardous and radioactive waste sites

    SciTech Connect

    1996-01-01

    The report provides an approach for evaluating and critically reviewing the capabilities of multimedia models. The study focused on three specific models: MEPAS version 3.0, MMSOILS Version 2.2, and PRESTO-EPA-CPG Version 2.0. The approach to model review advocated in the study is directed to technical staff responsible for identifying, selecting and applying multimedia models for use at sites containing radioactive and hazardous materials. In the report, restrictions associated with the selection and application of multimedia models for sites contaminated with radioactive and mixed wastes are highlighted.

  16. System hazards in managing laboratory test requests and results in primary care: medical protection database analysis and conceptual model

    PubMed Central

    Bowie, Paul; Price, Julie; Hepworth, Neil; Dinwoodie, Mark; McKay, John

    2015-01-01

    Objectives To analyse a medical protection organisation's database to identify hazards related to general practice systems for ordering laboratory tests, managing test results and communicating test result outcomes to patients. To integrate these data with other published evidence sources to inform design of a systems-based conceptual model of related hazards. Design A retrospective database analysis. Setting General practices in the UK and Ireland. Participants 778 UK and Ireland general practices participating in a medical protection organisation's clinical risk self-assessment (CRSA) programme from January 2008 to December 2014. Main outcome measures Proportion of practices with system risks; categorisation of identified hazards; most frequently occurring hazards; development of a conceptual model of hazards; and potential impacts on health, well-being and organisational performance. Results CRSA visits were undertaken to 778 UK and Ireland general practices, and a range of system hazards was recorded across the laboratory test ordering and results management systems in 647 practices (83.2%). A total of 45 discrete hazard categories were identified, with a mean of 3.6 per practice (SD=1.94). The most frequently occurring hazard was the inadequate process for matching test requests and results received (n=350, 54.1%). Of the 1604 instances where hazards were recorded, the most frequent was at the ‘postanalytical test stage’ (n=702, 43.8%), followed closely by ‘communication outcomes issues’ (n=628, 39.1%). Conclusions Based on arguably the largest data set currently available on the subject matter, our study findings shed new light on the scale and nature of hazards related to test results handling systems, which can inform future efforts to research and improve the design and reliability of these systems. PMID:26614621

  17. Simulation meets reality: Chemical hazard models in real world use

    SciTech Connect

    Newsom, D.E.

    1992-01-01

    In 1989, the US Department of Transportation (DOT), the Federal Emergency Management Agency (FEMA), and the US Environmental Protection Agency (EPA) released a set of models for the analysis of chemical hazards on personal computers. The models, known collectively as ARCHIE (Automated Resource for Chemical Hazard Incident Evaluation), have been distributed free of charge to thousands of emergency planners and analysts in state governments, Local Emergency Planning Committees (LEPCs), and industry. Under DOT and FEMA sponsorship, Argonne National Laboratory (ANL) conducted workshops in 1990 and 1991 to train federal, state, and local government and industry personnel, both end users and other trainers, in the use of the models. As a result of these distribution and training efforts, ARCHIE has received substantial use by state, local, and industrial emergency management personnel.

  18. Rockfall hazard analysis using LiDAR and spatial modeling

    NASA Astrophysics Data System (ADS)

    Lan, Hengxing; Martin, C. Derek; Zhou, Chenghu; Lim, Chang Ho

    2010-05-01

    Rockfalls have been significant geohazards along the Canadian Class 1 Railways (CN Rail and CP Rail) since their construction in the late 1800s. These rockfalls cause damage to infrastructure, interruption of business, and environmental impacts, and their occurrence varies both spatially and temporally. The proactive management of these rockfall hazards requires enabling technologies. This paper discusses a hazard assessment strategy for rockfalls along a section of a Canadian railway using LiDAR and spatial modeling. LiDAR provides accurate topographical information of the source area of rockfalls and along their paths. Spatial modeling was conducted using Rockfall Analyst, a three dimensional extension to GIS, to determine the characteristics of the rockfalls in terms of travel distance, velocity and energy. Historical rockfall records were used to calibrate the physical characteristics of the rockfall processes. The results based on a high-resolution digital elevation model from a LiDAR dataset were compared with those based on a coarse digital elevation model. A comprehensive methodology for rockfall hazard assessment is proposed which takes into account the characteristics of source areas, the physical processes of rockfalls and the spatial attribution of their frequency and energy.

  19. Natural Phenomena Hazards Modeling Project: Preliminary flood hazards estimates for screening Department of Energy sites, Albuquerque Operations Office

    SciTech Connect

    McCann, M.W. Jr.; Boissonnade, A.C.

    1988-05-01

    As part of an ongoing program, Lawrence Livermore National Laboratory (LLNL) is directing the Natural Phenomena Hazards Modeling Project (NPHMP) on behalf of the Department of Energy (DOE). A major part of this effort is the development of probabilistic definitions of natural phenomena hazards: seismic, wind, and flood. In this report, the first phase of the evaluation of flood hazards at DOE sites is described. Unlike seismic and wind events, floods may not present a significant threat to the operations of all DOE sites. For example, at some sites physical circumstances may exist that effectively preclude the occurrence of flooding. As a result, consideration of flood hazards may not be required as part of the site design basis. In this case it is not necessary to perform a detailed flood hazard study at all DOE sites, such as those conducted for the other natural phenomena hazards (seismic and wind). The scope of the preliminary flood hazard analysis is restricted to evaluating the flood hazards that may exist in proximity to a site. The analysis does not involve an assessment of the potential encroachment of flooding on-site at individual facility locations. However, the preliminary flood hazard assessment does not consider localized flooding at a site due to precipitation (i.e., local run-off, storm sewer capacity, roof drainage). These issues are reserved for consideration by the DOE site manager. 11 refs., 84 figs., 61 tabs.

  20. New class of Johnson SB distributions and its associated regression model for rates and proportions.

    PubMed

    Lemonte, Artur J; Bazán, Jorge L

    2016-07-01

    By starting from the Johnson SB distribution pioneered by Johnson, we propose a broad class of distributions with bounded support on the basis of the symmetric family of distributions. The new class of distributions provides a rich source of alternative distributions for analyzing univariate bounded data. A comprehensive account of the mathematical properties of the new family is provided. We briefly discuss estimation of the model parameters of the new class of distributions based on two estimation methods. Additionally, a new regression model is introduced by considering the distribution proposed in this article, which is useful for situations where the response is restricted to the standard unit interval and the regression structure involves regressors and unknown parameters. The regression model allows both location and dispersion effects to be modeled. We define two residuals for the proposed regression model to assess departures from model assumptions as well as to detect outlying observations, and discuss some influence methods such as local influence and generalized leverage. Finally, an application to real data is presented to show the usefulness of the new regression model. PMID:26659998
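
    For orientation, the classical Johnson SB construction on the unit interval, which the proposed class generalizes, transforms the bounded variable to normality:

    ```latex
    Z = \gamma + \delta \log\!\left(\frac{Y}{1 - Y}\right) \sim N(0, 1), \qquad 0 < Y < 1,
    ```

    with shape parameters gamma and delta; the class proposed in the article replaces the normal law with a member of the symmetric family of distributions.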

  1. Disproportionate Proximity to Environmental Health Hazards: Methods, Models, and Measurement

    PubMed Central

    Maantay, Juliana A.; Brender, Jean D.

    2011-01-01

    We sought to provide a historical overview of methods, models, and data used in the environmental justice (EJ) research literature to measure proximity to environmental hazards and potential exposure to their adverse health effects. We explored how the assessment of disproportionate proximity and exposure has evolved from comparing the prevalence of minority or low-income residents in geographic entities hosting pollution sources and discrete buffer zones to more refined techniques that use continuous distances, pollutant fate-and-transport models, and estimates of health risk from toxic exposure. We also reviewed analytical techniques used to determine the characteristics of people residing in areas potentially exposed to environmental hazards and emerging geostatistical techniques that are more appropriate for EJ analysis than conventional statistical methods. We concluded by providing several recommendations regarding future research and data needs for EJ assessment that would lead to more reliable results and policy solutions. PMID:21836113

  2. Assessment and Indirect Adjustment for Confounding by Smoking in Cohort Studies Using Relative Hazards Models

    PubMed Central

    Richardson, David B.; Laurier, Dominique; Schubauer-Berigan, Mary K.; Tchetgen, Eric Tchetgen; Cole, Stephen R.

    2014-01-01

    Workers' smoking histories are not measured in many occupational cohort studies. Here we discuss the use of negative control outcomes to detect and adjust for confounding in analyses that lack information on smoking. We clarify the assumptions necessary to detect confounding by smoking and the additional assumptions necessary to indirectly adjust for such bias. We illustrate these methods using data from 2 studies of radiation and lung cancer: the Colorado Plateau cohort study (1950–2005) of underground uranium miners (in which smoking was measured) and a French cohort study (1950–2004) of nuclear industry workers (in which smoking was unmeasured). A cause-specific relative hazards model is proposed for estimation of indirectly adjusted associations. Among the miners, the proposed method suggests no confounding by smoking of the association between radon and lung cancer—a conclusion supported by adjustment for measured smoking. Among the nuclear workers, the proposed method suggests substantial confounding by smoking of the association between radiation and lung cancer. Indirect adjustment for confounding by smoking resulted in an 18% decrease in the adjusted estimated hazard ratio, yet this cannot be verified because smoking was unmeasured. Assumptions underlying this method are described, and a cause-specific proportional hazards model that allows easy implementation using standard software is presented. PMID:25245043

  3. Recent Experiences in Aftershock Hazard Modelling in New Zealand

    NASA Astrophysics Data System (ADS)

    Gerstenberger, M.; Rhoades, D. A.; McVerry, G.; Christophersen, A.; Bannister, S. C.; Fry, B.; Potter, S.

    2014-12-01

    The occurrence of several sequences of earthquakes in New Zealand in the last few years has meant that GNS Science has gained significant recent experience in aftershock hazard and forecasting. First was the Canterbury sequence of events which began in 2010 and included the destructive Christchurch earthquake of February, 2011. This sequence is occurring in what was a moderate-to-low hazard region of the National Seismic Hazard Model (NSHM): the model on which the building design standards are based. With the expectation that the sequence would produce a 50-year hazard estimate in exceedance of the existing building standard, we developed a time-dependent model that combined short-term (STEP & ETAS) and longer-term (EEPAS) clustering with time-independent models. This forecast was combined with the NSHM to produce a forecast of the hazard for the next 50 years. This has been used to revise building design standards for the region and has contributed to planning of the rebuilding of Christchurch in multiple aspects. An important contribution to this model comes from the inclusion of EEPAS, which allows for clustering on the scale of decades. EEPAS is based on three empirical regressions that relate the magnitudes, times of occurrence, and locations of major earthquakes to regional precursory scale increases in the magnitude and rate of occurrence of minor earthquakes. A second important contribution comes from the long-term rate to which seismicity is expected to return in 50-years. With little seismicity in the region in historical times, a controlling factor in the rate is whether-or-not it is based on a declustered catalog. This epistemic uncertainty in the model was allowed for by using forecasts from both declustered and non-declustered catalogs. With two additional moderate sequences in the capital region of New Zealand in the last year, we have continued to refine our forecasting techniques, including the use of potential scenarios based on the aftershock
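
    As background, the temporal ETAS component referred to above models the earthquake rate as a background rate plus Omori-type contributions from every prior event; a standard form (parameter values not taken from the New Zealand model) is:

    ```latex
    \lambda(t) = \mu + \sum_{t_i < t} \frac{K\, e^{\alpha (M_i - M_0)}}{(t - t_i + c)^{p}},
    ```

    where mu is the background rate, M_0 a reference magnitude, and K, alpha, c, and p are fitted clustering parameters.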

  4. Variable selection in subdistribution hazard frailty models with competing risks data

    PubMed Central

    Do Ha, Il; Lee, Minjung; Oh, Seungyoung; Jeong, Jong-Hyeon; Sylvester, Richard; Lee, Youngjo

    2014-01-01

    The proportional subdistribution hazards model (i.e. Fine-Gray model) has been widely used for analyzing univariate competing risks data. Recently, this model has been extended to clustered competing risks data via frailty. To the best of our knowledge, however, there has been no literature on variable selection method for such competing risks frailty models. In this paper, we propose a simple but unified procedure via a penalized h-likelihood (HL) for variable selection of fixed effects in a general class of subdistribution hazard frailty models, in which random effects may be shared or correlated. We consider three penalty functions (LASSO, SCAD and HL) in our variable selection procedure. We show that the proposed method can be easily implemented using a slight modification to existing h-likelihood estimation approaches. Numerical studies demonstrate that the proposed procedure using the HL penalty performs well, providing a higher probability of choosing the true model than LASSO and SCAD methods without losing prediction accuracy. The usefulness of the new method is illustrated using two actual data sets from multi-center clinical trials. PMID:25042872

  5. Simple model relating recombination rates and non-proportional light yield in scintillators

    SciTech Connect

    Moses, William W.; Bizarri, Gregory; Singh, Jai; Vasil'ev, Andrey N.; Williams, Richard T.

    2008-09-24

    We present a phenomenological approach to derive an approximate expression for the local light yield along a track as a function of the rate constants of different kinetic orders of radiative and quenching processes for excitons and electron-hole pairs excited by an incident γ-ray in a scintillating crystal. For excitons, the radiative and quenching processes considered are linear and binary, and for electron-hole pairs a ternary (Auger type) quenching process is also taken into account. The local light yield (Y_L) in photons per MeV is plotted as a function of the deposited energy, -dE/dx (keV/cm), at any point x along the track length. This model formulation achieves a certain simplicity by using two coupled rate equations. We discuss the approximations that are involved. There are a sufficient number of parameters in this model to fit local light yield profiles needed for qualitative comparison with experiment.
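
    A generic, uncoupled version of the two rate equations described above (the paper's actual formulation couples the exciton and electron-hole populations, with its own rate constants) can be written as:

    ```latex
    \frac{dn_{ex}}{dt} = -A_1 n_{ex} - A_2 n_{ex}^2,
    \qquad
    \frac{dn_{eh}}{dt} = -B_1 n_{eh} - B_2 n_{eh}^2 - B_3 n_{eh}^3,
    ```

    where the first-order terms collect linear radiative and quenching channels, the quadratic terms binary interactions, and the cubic term Auger-type quenching; the local light yield then follows from the fraction of carriers that decay radiatively at each excitation density.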

  6. Adjusting multistate capture-recapture models for misclassification bias: manatee breeding proportions

    USGS Publications Warehouse

    Kendall, W.L.; Hines, J.E.; Nichols, J.D.

    2003-01-01

    Matrix population models are important tools for research and management of populations. Estimating the parameters of these models is an important step in applying them to real populations. Multistate capture-recapture methods have provided a useful means for estimating survival and parameters of transition between locations or life history states but have mostly relied on the assumption that the state occupied by each detected animal is known with certainty. Nevertheless, in some cases animals can be misclassified. Using multiple capture sessions within each period of interest, we developed a method that adjusts estimates of transition probabilities for bias due to misclassification. We applied this method to 10 years of sighting data for a population of Florida manatees (Trichechus manatus latirostris) in order to estimate the annual probability of transition from nonbreeding to breeding status. Some sighted females were unequivocally classified as breeders because they were clearly accompanied by a first-year calf. The remainder were classified, sometimes erroneously, as nonbreeders because an attendant first-year calf was not observed or was classified as more than one year old. We estimated a conditional breeding probability of 0.31 ± 0.04 (estimate ± 1 SE) when we ignored misclassification bias, and 0.61 ± 0.09 when we accounted for misclassification.

  7. Modelling public risk evaluation of natural hazards: a conceptual approach

    NASA Astrophysics Data System (ADS)

    Plattner, Th.

    2005-04-01

    In recent years, the treatment of natural hazards in Switzerland has shifted away from being hazard-oriented towards a risk-based approach. Decreasing societal acceptance of risk, accompanied by increasing marginal costs of protective measures and decreasing financial resources, creates an optimization problem. Therefore, the new focus lies on the mitigation of the hazard's risk in accordance with economic, ecological, and social considerations. This way of proceeding requires an approach in which not only the technological, engineering, or scientific aspects of defining the hazard or computing the risk are considered, but also public concerns about the acceptance of these risks. These aspects of a modern risk approach enable a comprehensive assessment of the (risk) situation and, thus, sound risk management decisions. In Switzerland, however, the competent authorities suffer from a lack of decision criteria, as they don't know what risk level the public is willing to accept. Consequently, there exists a need for the authorities to know what society thinks about risks. A formalized model that allows at least a crude simulation of the public risk evaluation could therefore be a useful tool to support effective and efficient risk mitigation measures. This paper presents a conceptual approach to such an evaluation model using perception-affecting factors (PAF), evaluation criteria (EC), and several factors without any immediate relation to the risk itself, but rather to the evaluating person. Finally, the decision about the acceptance Acc_i of a certain risk i is made by comparing the perceived risk R_i,perc with the acceptable risk R_i,acc.

  8. Development of hazard-compatible building fragility and vulnerability models

    USGS Publications Warehouse

    Karaca, E.; Luco, N.

    2008-01-01

    We present a methodology for transforming the structural and non-structural fragility functions in HAZUS into a format that is compatible with conventional seismic hazard analysis information. The methodology makes use of the building capacity (or pushover) curves and related building parameters provided in HAZUS. Instead of the capacity spectrum method applied in HAZUS, building response is estimated by inelastic response history analysis of corresponding single-degree-of-freedom systems under a large number of earthquake records. Statistics of the building response are used with the damage state definitions from HAZUS to derive fragility models conditioned on spectral acceleration values. Using the developed fragility models for structural and nonstructural building components, with corresponding damage state loss ratios from HAZUS, we also derive building vulnerability models relating spectral acceleration to repair costs. Whereas in HAZUS the structural and nonstructural damage states are treated as if they are independent, our vulnerability models are derived assuming "complete" nonstructural damage whenever the structural damage state is complete. We show the effects of considering this dependence on the final vulnerability models. The use of spectral acceleration (at selected vibration periods) as the ground motion intensity parameter, coupled with the careful treatment of uncertainty, makes the new fragility and vulnerability models compatible with conventional seismic hazard curves and hence useful for extensions to probabilistic damage and loss assessment.
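
    A small sketch of the kind of fragility model described above: the probability of reaching or exceeding a damage state, conditioned on spectral acceleration, expressed as a lognormal CDF. The medians and dispersions below are placeholders, not HAZUS-derived values.

    ```python
    # Lognormal fragility curves: P(damage state >= ds | Sa) = Phi(ln(Sa/median)/beta).
    import numpy as np
    from scipy.stats import norm

    def fragility(sa, median, beta):
        """Probability of reaching or exceeding a damage state given spectral acceleration."""
        return norm.cdf(np.log(sa / median) / beta)

    sa = np.array([0.1, 0.3, 0.6, 1.0])        # spectral acceleration (g)
    for ds, (median, beta) in {"moderate": (0.3, 0.6), "complete": (0.9, 0.6)}.items():
        print(ds, np.round(fragility(sa, median, beta), 3))
    ```

    Multiplying such exceedance probabilities by damage-state loss ratios and summing gives the vulnerability (repair cost) curve as a function of spectral acceleration, which is the step the abstract describes.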

  9. FInal Report: First Principles Modeling of Mechanisms Underlying Scintillator Non-Proportionality

    SciTech Connect

    Aberg, Daniel; Sadigh, Babak; Zhou, Fei

    2015-01-01

    This final report presents work carried out on the project “First Principles Modeling of Mechanisms Underlying Scintillator Non-Proportionality” at Lawrence Livermore National Laboratory during 2013-2015. The scope of the work was to further the physical understanding of the microscopic mechanisms behind scintillator nonproportionality that effectively limits the achievable detector resolution. Thereby, crucial quantitative data for these processes as input to large-scale simulation codes has been provided. In particular, this project was divided into three tasks: (i) Quantum mechanical rates of non-radiative quenching, (ii) The thermodynamics of point defects and dopants, and (iii) Formation and migration of self-trapped polarons. The progress and results of each of these subtasks are detailed.

  10. Modelling and calculations of the response of tissue equivalent proportional counter to charged particles.

    PubMed

    Nikjoo, H; Uehara, S; Pinsky, L; Cucinotta, Francis A

    2007-01-01

    Space activities in earth orbit or in deep space pose challenges to the estimation of risk factors for both astronauts and instrumentation. In space, risk from exposure to ionising radiation is one of the main factors limiting manned space exploration. Therefore, characterising the radiation environment in terms of the types of radiations and the quantity of radiation that the astronauts are exposed to is of critical importance in planning space missions. In this paper, calculations of the response of TEPC to protons and carbon ions are reported. The calculations have been carried out using Monte Carlo track structure simulation codes for the walled and the wall-less TEPC counters. The model simulates nonhomogeneous tracks in the sensitive volume of the counter and accounts for direct and indirect events. Calculated frequency- and dose-averaged lineal energies for 0.3 MeV-1 GeV protons are presented and compared with the experimental data. Calculations of quality factors (QF) were made using individual track histories. Additionally, calculations of the absolute frequencies of energy depositions in cylindrical targets, 100 nm in height by 100 nm in diameter, randomly positioned and oriented in water irradiated with 1 Gy of protons of energy 0.3-100 MeV, are presented. The distributions show the clustering properties of protons of different energies in a 100 nm by 100 nm cylinder. PMID:17513858
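
    The microdosimetric quantities referred to above follow the standard definitions (definitions only, not values from the paper):

    ```latex
    y = \frac{\varepsilon}{\bar{l}}, \qquad
    \bar{y}_F = \int y\, f(y)\, dy, \qquad
    \bar{y}_D = \int y\, d(y)\, dy \quad \text{with} \quad d(y) = \frac{y\, f(y)}{\bar{y}_F},
    ```

    where epsilon is the energy imparted in the sensitive volume, l-bar is its mean chord length, and f(y) and d(y) are the frequency and dose distributions of lineal energy.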

  11. Flood hazard maps from SAR data and global hydrodynamic models

    NASA Astrophysics Data System (ADS)

    Giustarini, Laura; Chini, Marci; Hostache, Renaud; Matgen, Patrick; Pappenberger, Florian; Bally, Phillippe

    2015-04-01

    With flood consequences likely to amplify because of growing population and ongoing accumulation of assets in flood-prone areas, global flood hazard and risk maps are greatly needed for improving flood preparedness at large scale. At the same time, with the rapidly growing archives of SAR images of floods, there is a high potential of making use of these images for global and regional flood management. In this framework, an original method is presented to integrate global flood inundation modeling and microwave remote sensing. It takes advantage of the combination of the time and space continuity of a global inundation model with the high spatial resolution of satellite observations. The availability of model simulations over a long time period offers the opportunity to estimate flood non-exceedance probabilities in a robust way. The probabilities can later be attributed to historical satellite observations. SAR-derived flood extent maps with their associated non-exceedance probabilities are then combined to generate flood hazard maps with a spatial resolution equal to that of the satellite images, which is most of the time higher than that of a global inundation model. The method can be applied to any area of interest in the world, provided that a sufficient number of relevant remote sensing images are available. We applied the method on the Severn River (UK) and on the Zambezi River (Mozambique), where large archives of Envisat flood images can be exploited. The global ECMWF flood inundation model is considered for computing the statistics of extreme events. A comparison with flood hazard maps estimated with in situ measured discharge is carried out. An additional analysis has been performed on the Severn River, using high resolution SAR data from the COSMO-SkyMed SAR constellation, acquired for a single flood event (one flood map per day between 27/11/2012 and 4/12/2012). The results showed that it is vital to observe the peak of the flood. However, a single

  12. Statistical modeling of ground motion relations for seismic hazard analysis

    NASA Astrophysics Data System (ADS)

    Raschke, Mathias

    2013-10-01

    We introduce a new approach for ground motion relations (GMR) in probabilistic seismic hazard analysis (PSHA), influenced by the extreme value theory of mathematical statistics. Therein, we understand a GMR as a random function. We derive mathematically the principle of area equivalence, wherein two alternative GMRs have an equivalent influence on the hazard if these GMRs have equivalent area functions. This includes local biases. An interpretation of the difference between these GMRs (an actual and a modeled one) as a random component leads to a general overestimation of residual variance and hazard. Besides this, we discuss important aspects of classical approaches and discover discrepancies with the state of the art of stochastics and statistics (model selection and significance, tests of distribution assumptions, extreme value statistics). We especially criticize the assumption of log-normally distributed residuals of maxima such as the peak ground acceleration (PGA). The natural distribution of its individual random component (equivalent to exp(ε0) of Joyner and Boore, Bull Seism Soc Am 83(2):469-487, 1993) is the generalized extreme value. We show by numerical investigations that the actual distribution can be hidden and that a wrong distribution assumption can influence the PSHA as negatively as neglecting area equivalence does. Finally, we suggest an estimation concept for GMRs of PSHA with a regression-free variance estimation of the individual random component. We demonstrate the advantages of event-specific GMRs by analyzing data sets from the PEER strong motion database and estimate event-specific GMRs. Therein, the majority of the best models are based on an anisotropic point source approach. The residual variance of logarithmized PGA is significantly smaller than in previous models. We validate the estimations for the event with the largest sample by empirical area functions, which indicate the appropriate modeling of the GMR by an anisotropic

  13. Conveying Lava Flow Hazards Through Interactive Computer Models

    NASA Astrophysics Data System (ADS)

    Thomas, D.; Edwards, H. K.; Harnish, E. P.

    2007-12-01

    As part of an Information Sciences senior class project, a software package providing an interactive version of the FLOWGO model was developed for the Island of Hawaii. The software is intended for use in an ongoing public outreach and hazards awareness program that educates the public about lava flow hazards on the island. The design parameters for the model allow an unsophisticated user to initiate a lava flow anywhere on the island and allow it to flow down-slope to the shoreline while displaying a timer to show the rate of advance of the flow. The user is also able to modify a range of input parameters including eruption rate, the temperature of the lava at the vent, and the crystal fraction present in the lava at the source. The flow trajectories are computed using a 30 m digital elevation model for the island, and the rate of advance of the flow is estimated using the average slope angle and the computed viscosity of the lava as it cools in either a channel (high heat loss) or lava tube (low heat loss). Even though the FLOWGO model is not intended to, and cannot, accurately predict the rate of advance of a tube-fed or channel-fed flow, the relative rates of flow advance for steep or flat-lying terrain convey critically important hazard information to the public: communities located on the steeply sloping western flanks of Mauna Loa may have no more than a few hours to evacuate in the face of a threatened flow from Mauna Loa's southwest rift, whereas communities on the more gently sloping eastern flanks of Mauna Loa and Kilauea may have weeks to months to prepare for evacuation. Further, the model can also show the effects of the loss of critical infrastructure, with consequent impacts on access into and out of communities and loss of electrical supply and communications, as a result of lava flow emplacement. The interactive model has been well received in an outreach setting and typically generates greater involvement by the participants than has been the case with static maps.

  14. A multimodal location and routing model for hazardous materials transportation.

    PubMed

    Xie, Yuanchang; Lu, Wei; Wang, Wen; Quadrifoglio, Luca

    2012-08-15

    The recent US Commodity Flow Survey data suggest that transporting hazardous materials (HAZMAT) often involves multiple modes, especially for long-distance transportation. However, not much research has been conducted on HAZMAT location and routing on a multimodal transportation network. Most existing HAZMAT location and routing studies focus exclusively on single mode (either highways or railways). Motivated by the lack of research on multimodal HAZMAT location and routing and the fact that there is an increasing demand for it, this research proposes a multimodal HAZMAT model that simultaneously optimizes the locations of transfer yards and transportation routes. The developed model is applied to two case studies of different network sizes to demonstrate its applicability. The results are analyzed and suggestions for future research are provided. PMID:22633882

  15. Natural hazard resilient cities: the case of a SSMS model

    NASA Astrophysics Data System (ADS)

    Santos-Reyes, Jaime

    2010-05-01

    Modern society is characterised by complexity; i.e. technical systems are highly complex and highly interdependent. The nature of the interdependence amongst these systems has become an issue of increasing importance in recent years. Moreover, these systems face a number of threats, ranging from technical and human to natural. For example, natural hazards (earthquakes, floods, heavy snow, etc.) can cause significant problems and disruption to normal life. On the other hand, modern society depends on highly interdependent infrastructures such as transport (rail, road, air, etc.), telecommunications, power and water supply, etc. Furthermore, in many cases there is no single owner, operator, and regulator of such systems. Any disruption in any of the interconnected systems may cause a domino effect. The domino effect may occur at the local, regional, or national level; or, in some cases, it may extend across international borders. Given the above, it may be argued that society is less resilient to such events, and therefore there is a need to have a system in place able to maintain risk within an acceptable range, whatever that might be. This paper presents the modelling process of the interdependencies amongst "critical infrastructures" (i.e. transport, telecommunications, power & water supply, etc.) for a typical city. The approach has been the application of the developed Systemic Safety Management System (SSMS) model. The main conclusion is that the SSMS model has the potential to be used to model interdependencies amongst the so-called "critical infrastructures". It is hoped that the approach presented in this paper may help to gain a better understanding of the interdependence amongst these systems and may contribute to a society that is more resilient when disrupted by natural hazards.

  16. Lava flow hazard at Nyiragongo volcano, D.R.C.. 1. Model calibration and hazard mapping

    NASA Astrophysics Data System (ADS)

    Favalli, Massimiliano; Chirico, Giuseppe D.; Papale, Paolo; Pareschi, Maria Teresa; Boschi, Enzo

    2009-05-01

    The 2002 eruption of Nyiragongo volcano constitutes the most outstanding case ever of lava flow in a big town. It also represents one of the very rare cases of direct casualties from lava flows, which had high velocities of up to tens of kilometers per hour. As in the 1977 eruption, which is the only other eccentric eruption of the volcano in more than 100 years, lava flows were emitted from several vents along a N-S system of fractures extending for more than 10 km, from which they propagated mostly towards Lake Kivu and Goma, a town of about 500,000 inhabitants. We assessed the lava flow hazard on the entire volcano and in the towns of Goma (D.R.C.) and Gisenyi (Rwanda) through numerical simulations of probable lava flow paths. Lava flow paths are computed based on the steepest descent principle, modified by stochastically perturbing the topography to take into account the capability of lava flows to override topographic obstacles, fill topographic depressions, and spread over the topography. Code calibration and the definition of the expected lava flow length and vent opening probability distributions were done based on the 1977 and 2002 eruptions. The final lava flow hazard map shows that the eastern sector of Goma devastated in 2002 represents the area of highest hazard on the flanks of the volcano. The second highest hazard sector in Goma is the area of propagation of the western lava flow in 2002. The town of Gisenyi is subject to moderate to high hazard due to its proximity to the alignment of fractures active in 1977 and 2002. In a companion paper (Chirico et al., Bull Volcanol, in this issue, 2008) we use numerical simulations to investigate the possibility of reducing lava flow hazard through the construction of protective barriers, and formulate a proposal for the future development of the town of Goma.
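
    The sketch below illustrates the stochastic steepest-descent idea described above: many descent paths are traced on a DEM whose elevations are randomly perturbed, and the visit counts approximate a relative inundation probability. The DEM, noise level, and vent location are synthetic placeholders, not the calibrated Nyiragongo inputs.

    ```python
    # Monte Carlo steepest-descent paths on a perturbed synthetic DEM.
    import numpy as np

    rng = np.random.default_rng(0)
    ny, nx = 60, 60
    yy, xx = np.mgrid[0:ny, 0:nx]
    dem = 100.0 - 0.8 * yy + 2.0 * np.sin(xx / 6.0)       # synthetic sloping topography

    counts = np.zeros_like(dem)
    n_runs, noise, max_steps = 500, 1.0, 300
    start = (2, nx // 2)                                   # hypothetical vent cell

    for _ in range(n_runs):
        z = dem + rng.normal(0.0, noise, dem.shape)        # perturb topography for this run
        i, j = start
        for _ in range(max_steps):
            counts[i, j] += 1
            window = z[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
            di, dj = np.unravel_index(np.argmin(window), window.shape)
            ni, nj = max(i - 1, 0) + di, max(j - 1, 0) + dj
            if (ni, nj) == (i, j):                         # local minimum: flow stops
                break
            i, j = ni, nj

    probability = counts / n_runs                          # relative likelihood of inundation
    ```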

  17. Estimation of the Proportion of Underachieving Students in Compulsory Secondary Education in Spain: An Application of the Rasch Model.

    PubMed

    Veas, Alejandro; Gilar, Raquel; Miñano, Pablo; Castejón, Juan-Luis

    2016-01-01

    There are very few studies in Spain that treat underachievement rigorously, and those that do are typically related to gifted students. The present study examined the proportion of underachieving students using the Rasch measurement model. A sample of 643 first-year high school students (mean age = 12.09; SD = 0.47) from 8 schools in the province of Alicante (Spain) completed the Battery of Differential and General Skills (Badyg), and these students' General Points Average (GPAs) were recovered by teachers. Dichotomous and Partial credit Rasch models were performed. After adjusting the measurement instruments, the individual underachievement index provided a total sample of 181 underachieving students, or 28.14% of the total sample across the ability levels. This study confirms that the Rasch measurement model can accurately estimate the construct validity of both the intelligence test and the academic grades for the calculation of underachieving students. Furthermore, the present study constitutes a pioneer framework for the estimation of the prevalence of underachievement in Spain. PMID:26973586
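
    For reference, the dichotomous Rasch model used here specifies the probability that person v answers item i correctly as (standard form):

    ```latex
    P(X_{vi} = 1 \mid \theta_v, b_i) = \frac{\exp(\theta_v - b_i)}{1 + \exp(\theta_v - b_i)},
    ```

    where theta_v is the person ability and b_i the item difficulty; the partial credit model extends this to items with ordered response categories.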

  18. Estimation of the Proportion of Underachieving Students in Compulsory Secondary Education in Spain: An Application of the Rasch Model

    PubMed Central

    Veas, Alejandro; Gilar, Raquel; Miñano, Pablo; Castejón, Juan-Luis

    2016-01-01

    There are very few studies in Spain that treat underachievement rigorously, and those that do are typically related to gifted students. The present study examined the proportion of underachieving students using the Rasch measurement model. A sample of 643 first-year high school students (mean age = 12.09; SD = 0.47) from 8 schools in the province of Alicante (Spain) completed the Battery of Differential and General Skills (Badyg), and these students' General Points Average (GPAs) were recovered by teachers. Dichotomous and Partial credit Rasch models were performed. After adjusting the measurement instruments, the individual underachievement index provided a total sample of 181 underachieving students, or 28.14% of the total sample across the ability levels. This study confirms that the Rasch measurement model can accurately estimate the construct validity of both the intelligence test and the academic grades for the calculation of underachieving students. Furthermore, the present study constitutes a pioneer framework for the estimation of the prevalence of underachievement in Spain. PMID:26973586

  19. Preliminary deformation model for National Seismic Hazard map of Indonesia

    SciTech Connect

    Meilano, Irwan; Gunawan, Endra; Sarsito, Dina; Prijatna, Kosasih; Abidin, Hasanuddin Z.; Susilo,; Efendi, Joni

    2015-04-24

    Preliminary deformation model for the Indonesia’s National Seismic Hazard (NSH) map is constructed as the block rotation and strain accumulation function at the elastic half-space. Deformation due to rigid body motion is estimated by rotating six tectonic blocks in Indonesia. The interseismic deformation due to subduction is estimated by assuming coupling on subduction interface while deformation at active fault is calculated by assuming each of the fault‘s segment slips beneath a locking depth or in combination with creeping in a shallower part. This research shows that rigid body motion dominates the deformation pattern with magnitude more than 15 mm/year, except in the narrow area near subduction zones and active faults where significant deformation reach to 25 mm/year.

  20. Some Proposed Modifications to the 1996 California Probabilistic Hazard Model

    NASA Astrophysics Data System (ADS)

    Cao, T.; Bryant, W. A.; Rowshandel, B.; Toppozada, T.; Reichle, M. S.; Petersen, M. D.; Frankel, A. D.

    2001-12-01

    The California Department of Conservation, Division of Mines and Geology, and the U.S. Geological Survey are working on the revision of the 1996 California probabilistic hazard model. Since the release of this hazard model, new seismological and geological studies and observations in this area have provided the basis for the revision. Important considerations for model modifications include the following: 1. using a new bilinear fault area-magnitude relation to replace the Wells and Coppersmith (1994) relation for M greater than or equal to 7.0; 2. using a Gaussian function to replace the Dirac delta function for the characteristic magnitude; 3. updating the earthquake catalog with the new M greater than or equal to 5.5 catalog from 1800 to 1999 by Toppozada et al. (2000) and the Berkeley and Caltech catalogs for 1996-2001; 4. balancing the moment release for some major A type faults; 5. adding the Abrahamson and Silva attenuation relation with a new hanging-wall term; 6. considering ratios between characteristic and Gutenberg-Richter magnitude-frequency distributions other than 50 percent and 50 percent; 7. using a Monte Carlo method to sample the logic tree to produce an uncertainty map of the coefficient of variation (COV); 8. separating background seismicity in the vicinity of faults from other areas for a different smoothing process or no smoothing at all, especially for the creeping section of the San Andreas fault and the Brawley seismic zone; 9. using near-fault variability of attenuation relations to mimic directivity; 10. modifying slip rates for the Concord-Green Valley, Sierra Madre, and Raymond faults, and adding or modifying blind thrust faults mainly in the Los Angeles Basin. These possible changes were selected with input received during several workshops that included participation of geologists and seismologists familiar with the area of concern. With the above revisions and other changes, we expect that the new model should not differ greatly from the

  1. A modeling framework for investment planning in interdependent infrastructures in multi-hazard environments.

    SciTech Connect

    Brown, Nathanael J. K.; Gearhart, Jared Lee; Jones, Dean A.; Nozick, Linda Karen; Prince, Michael

    2013-09-01

    Currently, much of protection planning is conducted separately for each infrastructure and hazard. Limited funding requires a balance of expenditures between terrorism and natural hazards based on potential impacts. This report documents the results of a Laboratory Directed Research & Development (LDRD) project that created a modeling framework for investment planning in interdependent infrastructures focused on multiple hazards, including terrorism. To develop this framework, three modeling elements were integrated: natural hazards, terrorism, and interdependent infrastructures. For natural hazards, a methodology was created for specifying events consistent with regional hazards. For terrorism, we modeled the terrorist's actions based on assumptions regarding their knowledge, goals, and target identification strategy. For infrastructures, we focused on predicting post-event performance due to specific terrorist attacks and natural hazard events, tempered by appropriate infrastructure investments. We demonstrate the utility of this framework with various examples, including protection of electric power, roadway, and hospital networks.

  2. Hydraulic modeling for lahar hazards at cascades volcanoes

    USGS Publications Warehouse

    Costa, J.E.

    1997-01-01

    The National Weather Service flood routing model DAMBRK is able to closely replicate field-documented stages of historic and prehistoric lahars from Mt. Rainier, Washington, and Mt. Hood, Oregon. Modeled times of travel of flow waves are generally consistent with documented lahar travel times from other volcanoes around the world. The model adequately replicates a range of lahars and debris flows, including the 230 million m3 Electron lahar from Mt. Rainier, as well as a 10 m3 debris flow generated in a large outdoor experimental flume. The model is used to simulate a hypothetical lahar with a volume of 50 million m3 down the East Fork Hood River from Mt. Hood, Oregon. Although a flow such as this is thought to be possible in the Hood River valley, no field evidence exists on which to base a hazards assessment. DAMBRK seems likely to be usable in many volcanic settings to estimate discharge, velocity, and inundation areas of lahars when input hydrographs and energy-loss coefficients can be reasonably estimated.

  3. MEASUREMENTS AND MODELS FOR HAZARDOUS CHEMICAL AND MIXED WASTES

    EPA Science Inventory

    Mixed hazardous and low-level radioactive wastes are in storage at DOE sites around the United States, awaiting treatment and disposal. These hazardous chemical wastes contain many components in multiple phases, presenting very difficult handling and treatment problems. These was...

  4. Landslide-Generated Tsunami Model for Quick Hazard Assessment

    NASA Astrophysics Data System (ADS)

    Franz, M.; Rudaz, B.; Locat, J.; Jaboyedoff, M.; Podladchikov, Y.

    2015-12-01

    Alpine regions are likely to be areas at risk with regard to landslide-induced tsunamis, because of the proximity between lakes and potential instabilities and due to the concentration of the population in valleys and on lake shores. In particular, dam lakes are often surrounded by steep slopes and frequently affect the stability of the banks. In order to assess this phenomenon comprehensively, together with the induced risks, we have developed a 2.5D numerical model which aims to simulate the propagation of the landslide, the generation and the propagation of the wave, and eventually the spread on the shores or the associated downstream flow. To perform this task, the process is done in three steps. Firstly, the geometry of the sliding mass is constructed using the Sloping Local Base Level (SLBL) concept. Secondly, the propagation of this volume is performed using a model based on viscous flow equations. Finally, the wave generation and its propagation are simulated using the shallow water equations stabilized by the Lax-Friedrichs scheme. The transition between wet and dry bed is performed by the combination of the two latter sets of equations. The proper behavior of our model is demonstrated by: (1) numerical tests from Toro (2001), and (2) comparison with a real event where the horizontal run-up distance is known (Nicolet landslide, Quebec, Canada). The model is of particular interest due to its ability to perform quickly the 2.5D geometric model of the landslide, the tsunami simulation and, consequently, the hazard assessment.
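
    A bare-bones 1D shallow-water solver using the Lax-Friedrichs scheme, the same stabilisation mentioned above, applied to a dam-break-like initial water column. It is a sketch of the numerical ingredient only, not the 2.5D model itself; the domain, initial condition, and CFL factor are arbitrary.

    ```python
    # 1D shallow water equations, U = [h, hu], advanced with the Lax-Friedrichs scheme.
    import numpy as np

    g, nx, L = 9.81, 200, 100.0
    dx = L / nx
    x = np.linspace(0, L, nx)
    h = np.where(x < 40.0, 2.0, 1.0)       # initial free-surface step (dam break)
    hu = np.zeros(nx)

    def flux(h, hu):
        u = hu / h
        return np.array([hu, hu * u + 0.5 * g * h * h])

    t, t_end = 0.0, 5.0
    while t < t_end:
        u = hu / h
        dt = 0.4 * dx / np.max(np.abs(u) + np.sqrt(g * h))   # CFL-limited time step
        F = flux(h, hu)
        h_new, hu_new = h.copy(), hu.copy()
        h_new[1:-1] = 0.5 * (h[2:] + h[:-2]) - dt / (2 * dx) * (F[0, 2:] - F[0, :-2])
        hu_new[1:-1] = 0.5 * (hu[2:] + hu[:-2]) - dt / (2 * dx) * (F[1, 2:] - F[1, :-2])
        h, hu = h_new, hu_new
        t += dt
    ```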

  5. Research collaboration, hazard modeling and dissemination in volcanology with Vhub

    NASA Astrophysics Data System (ADS)

    Palma Lizana, J. L.; Valentine, G. A.

    2011-12-01

    Vhub (online at vhub.org) is a cyberinfrastructure for collaboration in volcanology research, education, and outreach. One of the core objectives of this project is to accelerate the transfer of research tools to organizations and stakeholders charged with volcano hazard and risk mitigation (such as observatories). Vhub offers a clearinghouse for computational models of volcanic processes and data analysis, documentation of those models, and capabilities for online collaborative groups focused on issues such as code development, configuration management, benchmarking, and validation. A subset of simulations is already available for online execution, eliminating the need to download and compile locally. In addition, Vhub is a platform for sharing presentations and other educational material in a variety of media formats, which are useful in teaching university-level volcanology. VHub also has wikis, blogs and group functions around specific topics to encourage collaboration and discussion. In this presentation we provide examples of the vhub capabilities, including: (1) tephra dispersion and block-and-ash flow models; (2) shared educational materials; (3) online collaborative environment for different types of research, including field-based studies and plume dispersal modeling; (4) workshops. Future goals include implementation of middleware to allow access to data and databases that are stored and maintained at various institutions around the world. All of these capabilities can be exercised with a user-defined level of privacy, ranging from completely private (only shared and visible to specified people) to completely public. The volcanological community is encouraged to use the resources of vhub and also to contribute models, datasets, and other items that authors would like to disseminate. The project is funded by the US National Science Foundation and includes a core development team at University at Buffalo, Michigan Technological University, and University

  6. Suppressing epileptic activity in a neural mass model using a closed-loop proportional-integral controller.

    PubMed

    Wang, Junsong; Niebur, Ernst; Hu, Jinyu; Li, Xiaoli

    2016-01-01

    Closed-loop control is a promising deep brain stimulation (DBS) strategy that could be used to suppress high-amplitude epileptic activity. However, there are currently no analytical approaches to determine the stimulation parameters for effective and safe treatment protocols. Proportional-integral (PI) control is the most extensively used closed-loop control scheme in the field of control engineering because of its simple implementation and perfect performance. In this study, we took Jansen's neural mass model (NMM) as a test bed to develop a PI-type closed-loop controller for suppressing epileptic activity. A graphical stability analysis method was employed to determine the stabilizing region of the PI controller in the control parameter space, which provided a theoretical guideline for the choice of the PI control parameters. Furthermore, we established the relationship between the parameters of the PI controller and the parameters of the NMM in the form of a stabilizing region, which provided insights into the mechanisms that may suppress epileptic activity in the NMM. The simulation results demonstrated the validity and effectiveness of the proposed closed-loop PI control scheme. PMID:27273563
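
    A minimal discrete-time sketch of the PI law such a closed loop applies (the first-order plant and the gains below are illustrative placeholders, not Jansen's neural mass model or the gains derived in the paper):

      import numpy as np

      def simulate_pi(kp, ki, setpoint=0.0, dt=1e-3, steps=5000):
          """Closed-loop PI control of a toy unstable first-order plant:
          u = kp*e + ki*integral(e), with e = setpoint - y."""
          y, integral = 1.0, 0.0           # initial output and integrated error
          a, b = 0.5, 1.0                  # toy plant: dy/dt = a*y + b*u
          trace = np.empty(steps)
          for k in range(steps):
              e = setpoint - y
              integral += e * dt
              u = kp * e + ki * integral   # PI control signal (stimulation)
              y += (a * y + b * u) * dt    # Euler step of the plant
              trace[k] = y
          return trace

      out = simulate_pi(kp=5.0, ki=10.0)   # gains picked inside the stable region
      print(abs(out[-1]) < 1e-2)           # output driven back to the setpoint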

  7. Suppressing epileptic activity in a neural mass model using a closed-loop proportional-integral controller

    NASA Astrophysics Data System (ADS)

    Wang, Junsong; Niebur, Ernst; Hu, Jinyu; Li, Xiaoli

    2016-06-01

    Closed-loop control is a promising deep brain stimulation (DBS) strategy that could be used to suppress high-amplitude epileptic activity. However, there are currently no analytical approaches to determine the stimulation parameters for effective and safe treatment protocols. Proportional-integral (PI) control is the most extensively used closed-loop control scheme in the field of control engineering because of its simple implementation and perfect performance. In this study, we took Jansen’s neural mass model (NMM) as a test bed to develop a PI-type closed-loop controller for suppressing epileptic activity. A graphical stability analysis method was employed to determine the stabilizing region of the PI controller in the control parameter space, which provided a theoretical guideline for the choice of the PI control parameters. Furthermore, we established the relationship between the parameters of the PI controller and the parameters of the NMM in the form of a stabilizing region, which provided insights into the mechanisms that may suppress epileptic activity in the NMM. The simulation results demonstrated the validity and effectiveness of the proposed closed-loop PI control scheme.

  8. Suppressing epileptic activity in a neural mass model using a closed-loop proportional-integral controller

    PubMed Central

    Wang, Junsong; Niebur, Ernst; Hu, Jinyu; Li, Xiaoli

    2016-01-01

    Closed-loop control is a promising deep brain stimulation (DBS) strategy that could be used to suppress high-amplitude epileptic activity. However, there are currently no analytical approaches to determine the stimulation parameters for effective and safe treatment protocols. Proportional-integral (PI) control is the most extensively used closed-loop control scheme in the field of control engineering because of its simple implementation and perfect performance. In this study, we took Jansen’s neural mass model (NMM) as a test bed to develop a PI-type closed-loop controller for suppressing epileptic activity. A graphical stability analysis method was employed to determine the stabilizing region of the PI controller in the control parameter space, which provided a theoretical guideline for the choice of the PI control parameters. Furthermore, we established the relationship between the parameters of the PI controller and the parameters of the NMM in the form of a stabilizing region, which provided insights into the mechanisms that may suppress epileptic activity in the NMM. The simulation results demonstrated the validity and effectiveness of the proposed closed-loop PI control scheme. PMID:27273563

  9. Methodology Using MELCOR Code to Model Proposed Hazard Scenario

    SciTech Connect

    Gavin Hawkley

    2010-07-01

    This study demonstrates a methodology for using the MELCOR code to model a proposed hazard scenario within a building containing radioactive powder, and the subsequent evaluation of the leak path factor (LPF), the fraction of respirable material that escapes the facility into the outside environment, implicit in the scenario. The LPF evaluation analyzes the basis and applicability of an assumed standard multiplication of 0.5 × 0.5 (in which 0.5 represents the amount of material assumed to leave one area and enter another) for calculating an LPF value. The outside release depends upon the ventilation/filtration system, both filtered and unfiltered, and upon other pathways from the building, such as doorways, both open and closed. This study shows how the multiple LPFs within the building can be evaluated in a combinatory process in which a total LPF is calculated, thus addressing the assumed multiplication and allowing for the designation and assessment of a respirable source term (ST) for later consequence analysis, in which the propagation of material released into the atmosphere can be modeled, the dose received by a receptor placed downwind can be estimated, and the distance adjusted to maintain such exposures as low as reasonably achievable (ALARA). This study also briefly addresses particle characteristics that affect atmospheric dispersion and compares this dispersion with the LPF methodology.
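
    As an illustration of the combinatory idea (not the MELCOR calculation itself), a building-level LPF can be formed by multiplying the factors along each leak path in series and summing the flow-weighted contributions of parallel paths; the path definitions and numbers below are hypothetical:

      import math

      # Each hypothetical path is a series of segment LPFs plus the fraction
      # of the in-building release that takes that path.
      paths = {
          "filtered_ventilation": {"weight": 0.7, "segments": [0.5, 0.001]},
          "unfiltered_doorway":   {"weight": 0.3, "segments": [0.5, 0.5]},
      }

      def total_lpf(paths):
          """Series segments combine by multiplication, parallel paths by a
          weighted sum; this replaces a blanket 0.5 x 0.5 assumption."""
          return sum(p["weight"] * math.prod(p["segments"]) for p in paths.values())

      print(total_lpf(paths))   # ~0.075 for the hypothetical numbers above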

  10. Frequencies as Proportions: Using a Teaching Model Based on Pirie and Kieren's Model of Mathematical Understanding

    ERIC Educational Resources Information Center

    Wright, Vince

    2014-01-01

    Pirie and Kieren (1989, "For the Learning of Mathematics," 9(3), 7-11; 1992, "Journal of Mathematical Behavior," 11, 243-257; 1994a, "Educational Studies in Mathematics," 26, 61-86; 1994b, "For the Learning of Mathematics," 14(1), 39-43) created a model (P-K) that describes a dynamic and recursive process by which…

  11. Closed-loop control of epileptiform activities in a neural population model using a proportional-derivative controller

    NASA Astrophysics Data System (ADS)

    Wang, Jun-Song; Wang, Mei-Li; Li, Xiao-Li; Ernst, Niebur

    2015-03-01

    Epilepsy is believed to be caused by a lack of balance between excitation and inhibition in the brain. A promising strategy for the control of the disease is closed-loop brain stimulation. How to determine the stimulation control parameters for effective and safe treatment protocols remains, however, an unsolved question. To constrain the complex dynamics of the biological brain, we use a neural population model (NPM). We propose that a proportional-derivative (PD) type closed-loop control can successfully suppress epileptiform activities. First, we analyze stability using root loci, which reveals that the dynamical mechanism underlying epilepsy in the NPM is the loss of homeostatic control caused by the lack of balance between excitation and inhibition. Then, we design a PD-type closed-loop controller to stabilize the unstable NPM such that the homeostatic equilibriums are maintained; we show that epileptiform activities are successfully suppressed. A graphical approach is employed to determine the stabilizing region of the PD controller in the parameter space, providing a theoretical guideline for the selection of the PD control parameters. Furthermore, we establish the relationship between the control parameters and the model parameters in the form of stabilizing regions to help understand the mechanism of suppressing epileptiform activities in the NPM. Simulations show that the PD-type closed-loop control strategy can effectively suppress epileptiform activities in the NPM. Project supported by the National Natural Science Foundation of China (Grant Nos. 61473208, 61025019, and 91132722), ONR MURI N000141010278, and NIH grant R01EY016281.

  12. The influence of mapped hazards on risk beliefs: A proximity-based modeling approach

    PubMed Central

    Severtson, Dolores J.; Burt, James E.

    2013-01-01

    Interview findings suggest perceived proximity to mapped hazards influences risk beliefs when people view environmental hazard maps. For dot maps, four attributes of mapped hazards influenced beliefs: hazard value, proximity, prevalence, and dot patterns. In order to quantify the collective influence of these attributes for viewers' perceived or actual map locations, we present a model to estimate proximity-based hazard or risk (PBH) and share study results that indicate how modeled PBH and map attributes influenced risk beliefs. The randomized survey study among 447 university students assessed risk beliefs for 24 dot maps that systematically varied by the four attributes. Maps depicted water test results for a fictitious hazardous substance in private residential wells and included a designated “you live here” location. Of the nine variables that assessed risk beliefs, the numerical susceptibility variable was most consistently and strongly related to map attributes and PBH. Hazard value, location in or out of a clustered dot pattern, and distance had the largest effects on susceptibility. Sometimes, hazard value interacted with other attributes, e.g. distance had stronger effects on susceptibility for larger than smaller hazard values. For all combined maps, PBH explained about the same amount of variance in susceptibility as did attributes. Modeled PBH may have utility for studying the influence of proximity to mapped hazards on risk beliefs, protective behavior, and other dependent variables. Further work is needed to examine these influences for more realistic maps and representative study samples. PMID:22053748

  13. Evaluating the hazard from Siding Spring dust: Models and predictions

    NASA Astrophysics Data System (ADS)

    Christou, A.

    2014-12-01

    Long-period comet C/2013 A1 (Siding Spring) will pass at a distance of ~140 thousand km (9e-4 AU) - about a third of a lunar distance - from the centre of Mars, closer to this planet than any known comet has come to the Earth since records began. Closest approach is expected to occur at 18:30 UT on the 19th October. This provides an opportunity for a "free" flyby of a different type of comet than those investigated by spacecraft so far, including comet 67P/Churyumov-Gerasimenko currently under scrutiny by the Rosetta spacecraft. At the same time, the passage of the comet through Martian space will create the opportunity to study the reaction of the planet's upper atmosphere to a known natural perturbation. The flip-side of the coin is the risk to Mars-orbiting assets, both existing (NASA's Mars Odyssey & Mars Reconnaissance Orbiter and ESA's Mars Express) and in transit (NASA's MAVEN and ISRO's Mangalyaan), from high-speed cometary dust potentially impacting spacecraft surfaces. Much work has already gone into assessing this hazard and devising mitigation measures in the short warning time available to characterise this object before the Mars encounter. In this presentation, we will provide an overview of how the meteoroid stream and comet coma dust impact models have evolved since the comet's discovery and discuss lessons learned should similar circumstances arise in the future.

  14. Modelling Inland Flood Events for Hazard Maps in Taiwan

    NASA Astrophysics Data System (ADS)

    Ghosh, S.; Nzerem, K.; Sassi, M.; Hilberts, A.; Assteerawatt, A.; Tillmanns, S.; Mathur, P.; Mitas, C.; Rafique, F.

    2015-12-01

    Taiwan experiences significant inland flooding, driven by torrential rainfall from plum rain storms and typhoons during summer and fall. Data from the last 13 to 16 years show that about 3,000 buildings were damaged by such floods annually, with losses of US$0.41 billion (Water Resources Agency). This long, narrow island nation with mostly hilly/mountainous topography is located in the tropical-subtropical zone, with an annual average typhoon-hit frequency of 3-4 (Central Weather Bureau) and annual average precipitation of 2502 mm (WRA) - 2.5 times the world average. Spatial and temporal distributions of countrywide precipitation are uneven, with very high local extreme rainfall intensities. Annual average precipitation is 3000-5000 mm in the mountainous regions, 78% of it falls in May-October, and the 1-hour to 3-day maximum rainfalls are about 85 to 93% of the world records (WRA). Rivers in Taiwan are short, with small upstream areas and high watershed runoff coefficients. These rivers have the steepest slopes, the shortest response times with rapid flows, and the largest peak flows as well as specific flood peak discharges in the world (WRA). RMS has recently developed a countrywide inland flood model for Taiwan, producing hazard return period maps at 1-arcsec grid resolution. These can be the basis for evaluating and managing flood risk, its economic impacts, and insured flood losses. The model is initiated with sub-daily historical meteorological forcings and calibrated to daily discharge observations at about 50 river gauges over the period 2003-2013. Simulations of hydrologic processes, via rainfall-runoff and routing models, are subsequently performed based on a 10,000-year set of stochastic forcing. The rainfall-runoff model is a physically based, continuous, semi-distributed model for catchment hydrology. The 1-D wave propagation hydraulic model considers catchment runoff in routing and describes large-scale transport processes along the river. It also accounts for reservoir storage

  15. Understanding Recession and Self-Rated Health with the Partial Proportional Odds Model: An Analysis of 26 Countries

    PubMed Central

    Mayer, Adam; Foster, Michelle

    2015-01-01

    Introduction Self-rated health is demonstrated to vary substantially by both personal socio-economic status and national economic conditions. However, studies investigating the combined influence of individual and country level economic indicators across several countries in the context of recent global recession are limited. This paper furthers our knowledge of the effect of recession on health at both the individual and national level. Methods Using the Life in Transition II study, which provides data from 19,759 individuals across 26 European nations, we examine the relationship between self-rated health, personal economic experiences, and macro-economic change. Data analyses include, but are not limited to, the partial proportional odds model, which permits the effect of predictors to vary across different levels of our dependent variable. Results Household experiences with recession, especially a loss of staple good consumption, are associated with lower self-rated health. Most individual-level experiences with recession, such as a job loss, have relatively small negative effects on perceived health; the effect of individual or household economic hardship is strongest in high income nations. Our findings also suggest that macroeconomic growth improves self-rated health in low-income nations but has no effect in high-income nations. Individuals with the greatest probability of “good” self-rated health reside in wealthy countries ($23,910 to $50,870 GNI per capita). Conclusion Both individual and national economic variables are predictive of self-rated health. Personal and household experiences are most consequential for self-rated health in high income nations, while macroeconomic growth is most consequential in low-income nations. PMID:26513660
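
    One quick way to see the non-proportionality that the partial proportional odds model accommodates is to fit a separate binary logit at each cumulative split of the outcome and compare coefficients across splits; a minimal sketch (Python/statsmodels, with simulated data and placeholder variable names rather than the Life in Transition II covariates):

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      df = pd.DataFrame({
          "health":   rng.integers(1, 5, 500),     # ordinal outcome, 1..4
          "job_loss": rng.binomial(1, 0.3, 500),   # placeholder predictors
          "gni_pc":   rng.uniform(5, 50, 500),
      })
      X = sm.add_constant(df[["job_loss", "gni_pc"]])

      # One binary logit per cumulative split; coefficients that drift across
      # splits are the ones a partial proportional odds model leaves unconstrained.
      for cut in (1, 2, 3):
          y = (df["health"] > cut).astype(int)
          fit = sm.Logit(y, X).fit(disp=0)
          print(f"P(health > {cut}):", fit.params.round(3).to_dict())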

  16. Hidden Markov models for estimating animal mortality from anthropogenic hazards

    EPA Science Inventory

    Carcasses searches are a common method for studying the risk of anthropogenic hazards to wildlife, including non-target poisoning and collisions with anthropogenic structures. Typically, numbers of carcasses found must be corrected for scavenging rates and imperfect detection. ...

  17. Conceptual geoinformation model of natural hazards risk assessment

    NASA Astrophysics Data System (ADS)

    Kulygin, Valerii

    2016-04-01

    Natural hazards are the major threat to safe interactions between nature and society. The assessment of natural hazard impacts and their consequences is important in spatial planning and resource management. Today there is a challenge to advance our understanding of how socio-economic and climate changes will affect the frequency and magnitude of hydro-meteorological hazards and associated risks. However, the impacts of different types of natural hazards on various marine and coastal economic activities are not of the same kind. In this study, a conceptual geomodel for risk assessment is presented to highlight differentiation by type of economic activity in extreme-event risk assessment. The marine and coastal ecosystems are considered as the objects of management, on the one hand, and as the place of natural hazards' origin, on the other hand. One of the key elements in describing such systems is the spatial characterization of their components. Assessment of ecosystem state is based on ecosystem indicators (indexes), which are used to identify changes over time. The scenario approach is utilized to account for the spatio-temporal dynamics and uncertainty factors. Two types of scenarios are considered: scenarios of the use of ecosystem services by economic activities and scenarios of extreme events and related hazards. The reported study was funded by RFBR, according to the research project No. 16-35-60043 mol_a_dk.

  18. Standards and Guidelines for Numerical Models for Tsunami Hazard Mitigation

    NASA Astrophysics Data System (ADS)

    Titov, V.; Gonzalez, F.; Kanoglu, U.; Yalciner, A.; Synolakis, C. E.

    2006-12-01

    An increased number of nations around the world need to develop tsunami mitigation plans, which invariably involve inundation maps for warning guidance and evacuation planning. There is the risk that inundation maps may be produced with older or untested methodology, as there are currently no standards for modeling tools. In the aftermath of the 2004 megatsunami, some models were used to model inundation for Cascadia events with results much larger than sediment records and existing state-of-the-art studies suggest, leading to confusion among emergency managers. Incorrectly assessing tsunami impact is hazardous, as recent events in 2006 in Tonga, Kythira (Greece) and Central Java have suggested (Synolakis and Bernard, 2006). To calculate tsunami currents, forces and runup on coastal structures, and inundation of coastlines, one must numerically calculate the evolution of the tsunami wave from the deep ocean to its target site. No matter what the numerical model, validation (the process of ensuring that the model solves the parent equations of motion accurately) and verification (the process of ensuring that the model used represents geophysical reality appropriately) are both essential. Validation ensures that the model performs well in a wide range of circumstances and is accomplished through comparison with analytical solutions. Verification ensures that the computational code performs well over a range of geophysical problems. A few analytic solutions have themselves been validated with laboratory data. Even fewer existing numerical models have been both validated with the analytical solutions and verified with both laboratory measurements and field measurements, thus establishing a gold standard for numerical codes for inundation mapping. While there is in principle no absolute certainty that a numerical code that has performed well in all the benchmark tests will also produce correct inundation predictions with any given source motions, validated codes

  19. The identification and validation process of proportional reasoning attributes: an application of a cognitive diagnosis modeling framework

    NASA Astrophysics Data System (ADS)

    Tjoe, Hartono; de la Torre, Jimmy

    2014-06-01

    In this paper, we discuss the process of identifying and validating students' abilities to think proportionally. More specifically, we describe the methodology we used to identify these proportional reasoning attributes, beginning with the selection and review of relevant literature on proportional reasoning. We then continue with the deliberation and resolution of differing views by mathematics researchers, mathematics educators, and middle school mathematics teachers of what should be learned theoretically and what can be taught practically in everyday classroom settings. We also present the initial development of proportional reasoning items as part of the two-phase validation process of the previously identified attributes. In particular, we detail in the first phase of the validation process our collaboration with middle school mathematics teachers in the creation of prototype items and the verification of each item-attribute specification in consideration of the most common ways (among many different ways) in which middle school students would have solved these prototype items themselves. In the second phase of the validation process, we elaborate our think-aloud interview procedure in the search for evidence of whether students generally solved the prototype items in the way they were expected to.

  20. The Economy-Wide Benefits of Increasing the Proportion of Students Achieving Year 12 Equivalent Education: Modelling Results.

    ERIC Educational Resources Information Center

    2003

    This study analyzed the economic benefits of an increase in the proportion of Australian students achieving a 12th-grade equivalent education. Earlier research examined the direct costs and benefits of a program that increased 12th grade equivalent education for the five-year cohort 2003-2007. This study built on that by incorporating the indirect…

  1. Modelling the costs of natural hazards in games

    NASA Astrophysics Data System (ADS)

    Bostenaru-Dan, M.

    2012-04-01

    City games are looked for today, including a development at the University of Torino called SimTorino, which simulates the development of the city over the next 20 years. The connection to another games genre than video games, board games, will be investigated, since there are games on the construction and reconstruction of a cathedral and its tower and a bridge in an urban environment of the Middle Ages, based on the two novels of Ken Follett, "Pillars of the Earth" and "World Without End", and also more recent games, such as "Urban Sprawl" or the Romanian game "Habitat", dealing with the man-made hazard of demolition. A review of these games will be provided based on first-hand playing experience. In games like "World Without End" or "Pillars of the Earth", just like in the recently popular games of Zynga on social networks, construction management is done by "building" an item out of stylised materials, such as "stone", "sand" or more specific ones such as "nail". Such an approach could also be used for retrofitting buildings for earthquakes, in the sense of an "upgrade" rather than only the extension currently found in games, and this is what our research is about. "World Without End" includes a natural disaster not much analysed today but judged by the author as the worst to befall mankind: the Black Death. The Black Death has effects and costs as well, modelled not only through action cards but also on the built environment, through buildings remaining empty. On the other hand, games such as "Habitat" rely on role playing, which has recently been recognised as a way to bring game theory to decision making through the so-called contribution of drama, a way to solve conflicts through balancing instead of weighting, and thus related to the Analytic Hierarchy Process. The presentation also aims to give hints on how to design a game for the problem of earthquake retrofit, translating the aims of the actors in such a process into role playing. Games are also employed in teaching of urban

  2. Expert elicitation for a national-level volcano hazard model

    NASA Astrophysics Data System (ADS)

    Bebbington, Mark; Stirling, Mark; Cronin, Shane; Wang, Ting; Jolly, Gill

    2016-04-01

    The quantification of volcanic hazard at national level is a vital pre-requisite to placing volcanic risk on a platform that permits meaningful comparison with other hazards such as earthquakes. New Zealand has up to a dozen dangerous volcanoes, with the usual mixed degrees of knowledge concerning their temporal and spatial eruptive history. Information on the 'size' of the eruptions, be it in terms of VEI, volume or duration, is sketchy at best. These limitations and the need for a uniform approach lend themselves to a subjective hazard analysis via expert elicitation. Approximately 20 New Zealand volcanologists provided estimates for the size of the next eruption from each volcano and, conditional on this, its location, timing and duration. Opinions were likewise elicited from a control group of statisticians, seismologists and (geo)chemists, all of whom had at least heard the term 'volcano'. The opinions were combined via the Cooke classical method. We will report on the preliminary results from the exercise.
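
    Once per-expert weights are in hand, the combination step of Cooke's classical model amounts to a weighted linear opinion pool; a minimal sketch with hypothetical elicited probabilities (the weights themselves, which Cooke's method derives from calibration and information scores on seed questions, are simply assumed here):

      import numpy as np

      # Hypothetical elicited probabilities that the next eruption at one
      # volcano falls in each of four size classes, one row per expert.
      expert_pmfs = np.array([
          [0.60, 0.30, 0.08, 0.02],
          [0.50, 0.35, 0.10, 0.05],
          [0.70, 0.20, 0.08, 0.02],
      ])
      weights = np.array([0.5, 0.2, 0.3])   # performance-based weights (assumed)

      pooled = weights @ expert_pmfs        # weighted linear opinion pool
      pooled /= pooled.sum()                # guard against rounding drift
      print(pooled.round(3))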

  3. Strip Diagrams: Illuminating Proportions

    ERIC Educational Resources Information Center

    Cohen, Jessica S.

    2013-01-01

    Proportional reasoning is both complex and layered, making it challenging to define. Lamon (1999) identified characteristics of proportional thinkers, such as being able to understand covariance of quantities; distinguish between proportional and nonproportional relationships; use a variety of strategies flexibly, most of which are nonalgorithmic,…

  4. Quantitative physical models of volcanic phenomena for hazards assessment of critical infrastructures

    NASA Astrophysics Data System (ADS)

    Costa, Antonio

    2016-04-01

    Volcanic hazards may have destructive effects on economy, transport, and natural environments at both local and regional scale. Hazardous phenomena include pyroclastic density currents, tephra fall, gas emissions, lava flows, debris flows and avalanches, and lahars. Volcanic hazards assessment is based on available information to characterize potential volcanic sources in the region of interest and to determine whether specific volcanic phenomena might reach a given site. Volcanic hazards assessment is focussed on estimating the distances that volcanic phenomena could travel from potential sources and their intensity at the considered site. Epistemic and aleatory uncertainties strongly affect the resulting hazards assessment. Within the context of critical infrastructures, volcanic eruptions are rare natural events that can create severe hazards. In addition to being rare events, evidence of many past volcanic eruptions is poorly preserved in the geologic record. The models used for describing the impact of volcanic phenomena span a range of complexities, from simplified physics-based conceptual models to highly coupled thermo-fluid dynamical approaches. Modelling approaches represent a hierarchy of complexity, which reflects increasing requirements for well-characterized data in order to produce a broader range of output information. In selecting models for the hazard analysis related to a specific phenomenon, questions that need to be answered by the models must be carefully considered. Independently of the model, the final hazards assessment strongly depends on input derived from detailed volcanological investigations, such as mapping and stratigraphic correlations. For each phenomenon, an overview of currently available approaches for the evaluation of future hazards will be presented with the aim to provide a foundation for future work in developing an international consensus on volcanic hazards assessment methods.

  5. Computer models used to support cleanup decision-making at hazardous and radioactive waste sites

    SciTech Connect

    Moskowitz, P.D.; Pardi, R.; DePhillips, M.P.; Meinhold, A.F.

    1992-07-01

    Massive efforts are underway to clean up hazardous and radioactive waste sites located throughout the US. To help determine cleanup priorities, computer models are being used to characterize the source, transport, fate and effects of hazardous chemicals and radioactive materials found at these sites. Although the US Environmental Protection Agency (EPA), the US Department of Energy (DOE), and the US Nuclear Regulatory Commission (NRC) have provided preliminary guidance to promote the use of computer models for remediation purposes, no agency has produced directed guidance on models that must be used in these efforts. To identify which models are actually being used to support decision-making at hazardous and radioactive waste sites, a project jointly funded by EPA, DOE and NRC was initiated. The purposes of this project were to: (1) identify models being used for hazardous and radioactive waste site assessment purposes; and (2) describe and classify these models. This report presents the results of this study.

  6. Debris flow hazard modelling on medium scale: Valtellina di Tirano, Italy

    NASA Astrophysics Data System (ADS)

    Blahut, J.; Horton, P.; Sterlacchini, S.; Jaboyedoff, M.

    2010-11-01

    Debris flow hazard modelling at medium (regional) scale has been the subject of various studies in recent years. In this study, hazard zonation was carried out incorporating information about debris flow initiation probability (spatial and temporal) and the delimitation of the potential runout areas. Debris flow hazard zonation was carried out in the area of the Consortium of Mountain Municipalities of Valtellina di Tirano (Central Alps, Italy). The complexity of the phenomenon, the scale of the study, the variability of local conditioning factors, and the lack of data limited the use of process-based models for the runout zone delimitation. Firstly, a map of hazard initiation probabilities was prepared for the study area, based on the available susceptibility zoning information and the analysis of two sets of aerial photographs for the temporal probability estimation. Afterwards, the hazard initiation map was used as one of the inputs for an empirical GIS-based model (Flow-R), developed at the University of Lausanne (Switzerland). An estimation of the debris flow magnitude was neglected, as the main aim of the analysis was to prepare a debris flow hazard map at medium scale. A digital elevation model with a 10 m resolution was used, together with land use, geology and debris flow hazard initiation maps, as inputs of the Flow-R model to restrict potential areas within each hazard initiation probability class to locations where debris flows are most likely to initiate. Afterwards, runout areas were calculated using multiple flow direction and energy-based algorithms. Maximum probable runout zones were calibrated using documented past events and aerial photographs. Finally, two debris flow hazard maps were prepared. The first simply delimits five hazard zones, while the second incorporates the information about debris flow spreading direction probabilities, showing areas more likely to be affected by future debris flows. Limitations of the modelling arise mainly from

  7. A time-dependent probabilistic seismic-hazard model for California

    USGS Publications Warehouse

    Cramer, C.H.; Petersen, M.D.; Cao, T.; Toppozada, Tousson R.; Reichle, M.

    2000-01-01

    For the purpose of sensitivity testing and illuminating nonconsensus components of time-dependent models, the California Department of Conservation, Division of Mines and Geology (CDMG) has assembled a time-dependent version of its statewide probabilistic seismic hazard (PSH) model for California. The model incorporates available consensus information from within the earth-science community, except for a few faults or fault segments where consensus information is not available. For these latter faults, published information has been incorporated into the model. As in the 1996 CDMG/U.S. Geological Survey (USGS) model, the time-dependent models incorporate three multisegment ruptures: a 1906, an 1857, and a southern San Andreas earthquake. Sensitivity tests are presented to show the effect on hazard and expected damage estimates of (1) intrinsic (aleatory) sigma, (2) multisegment (cascade) vs. independent segment (no cascade) ruptures, and (3) time-dependence vs. time-independence. Results indicate that (1) differences in hazard and expected damage estimates between time-dependent and independent models increase with decreasing intrinsic sigma, (2) differences in hazard and expected damage estimates between full cascading and not cascading are insensitive to intrinsic sigma, (3) differences in hazard increase with increasing return period (decreasing probability of occurrence), and (4) differences in moment-rate budgets increase with decreasing intrinsic sigma and with the degree of cascading, but are within the expected uncertainty in PSH time-dependent modeling and do not always significantly affect hazard and expected damage estimates.
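
    As an illustration of the time-dependence being tested, a renewal model conditions the probability of rupture in the next $\Delta T$ years on the time $t$ elapsed since the last event, with $F$ the recurrence-time distribution (e.g. lognormal or Brownian passage time) whose spread is governed by the intrinsic (aleatory) sigma:

      P(t < T \le t + \Delta T \mid T > t) = \frac{F(t + \Delta T) - F(t)}{1 - F(t)}

    whereas the time-independent alternative is the Poisson value $P = 1 - e^{-\lambda \Delta T}$, which ignores the elapsed time.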

  8. Modelling the costs of natural hazards in games

    NASA Astrophysics Data System (ADS)

    Bostenaru-Dan, M.

    2012-04-01

    City games are looked for today, including a development at the University of Torino called SimTorino, which simulates the development of the city over the next 20 years. The connection to another games genre than video games, board games, will be investigated, since there are games on the construction and reconstruction of a cathedral and its tower and a bridge in an urban environment of the Middle Ages, based on the two novels of Ken Follett, "Pillars of the Earth" and "World Without End", and also more recent games, such as "Urban Sprawl" or the Romanian game "Habitat", dealing with the man-made hazard of demolition. A review of these games will be provided based on first-hand playing experience. In games like "World Without End" or "Pillars of the Earth", just like in the recently popular games of Zynga on social networks, construction management is done by "building" an item out of stylised materials, such as "stone", "sand" or more specific ones such as "nail". Such an approach could also be used for retrofitting buildings for earthquakes, in the sense of an "upgrade" rather than only the extension currently found in games, and this is what our research is about. "World Without End" includes a natural disaster not much analysed today but judged by the author as the worst to befall mankind: the Black Death. The Black Death has effects and costs as well, modelled not only through action cards but also on the built environment, through buildings remaining empty. On the other hand, games such as "Habitat" rely on role playing, which has recently been recognised as a way to bring game theory to decision making through the so-called contribution of drama, a way to solve conflicts through balancing instead of weighting, and thus related to the Analytic Hierarchy Process. The presentation also aims to give hints on how to design a game for the problem of earthquake retrofit, translating the aims of the actors in such

  9. Large animal model for health hazard assessment of environmental pollutants

    SciTech Connect

    Chanana, A.D.; Joel, D.D.; Costa, D.L.; Janoff, A.; Susskind, H.; Weiss, R.A.

    1984-01-01

    The requirements of large animals for the experimental assessment of human health hazards associated with inhaled pollutants are discussed. Results from studies designed to elucidate mechanisms controlling pulmonary function at the organismal, cellular and molecular level are presented. It is shown that studies in large animals permit technically sophisticated approaches not feasible in small animals and not permissible in man. Use of large animals also permits serial, non-invasive determinations of structural and functional changes which may be of temporal importance. 6 references.

  10. A Remote Sensing Based Approach For Modeling and Assessing Glacier Hazards

    NASA Astrophysics Data System (ADS)

    Huggel, C.; Kääb, A.; Salzmann, N.; Haeberli, W.; Paul, F.

    Glacier-related hazards such as ice avalanches and glacier lake outbursts can pose a significant threat to population and installations in high mountain regions. They are well documented in the Swiss Alps, and the high data density is used to build up systematic knowledge of glacier hazard locations and potentials. Experiences from long research activities thereby form an important basis for ongoing hazard monitoring and assessment. However, in the context of environmental changes in general, and the highly dynamic physical environment of glaciers in particular, historical experience may increasingly lose its significance with respect to the impact zones of hazardous processes. On the other hand, in large and remote high mountains such as the Himalayas, exact information on the location and potential of glacier hazards is often missing. Therefore, it is crucial to develop hazard monitoring and assessment concepts including area-wide applications. Remote sensing techniques offer a powerful tool to narrow current information gaps. The present contribution proposes an approach structured in (1) detection, (2) evaluation and (3) modeling of glacier hazards. Remote sensing data is used as the main input to (1). Algorithms taking advantage of multispectral, high-resolution data are applied for detecting glaciers and glacier lakes. Digital terrain modeling, and classification and fusion of panchromatic and multispectral satellite imagery, is performed in (2) to evaluate the hazard potential of possible hazard sources detected in (1). The locations found in (1) and (2) are used as input to (3). The models developed in (3) simulate the processes of lake outbursts and ice avalanches based on hydrological flow modeling and empirical values for average trajectory slopes. A probability-related function allows the model to indicate areas with lower and higher risk to be affected by catastrophic events. Application of the models for recent ice avalanches and lake outbursts show
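
    For the runout step, the empirical average-trajectory-slope (angle of reach) approach reduces to a one-line estimate: assuming a detected source lying a height $\Delta H$ above the valley floor and an empirical average slope $\alpha$ calibrated from inventories of past events,

      L_{\max} = \frac{\Delta H}{\tan \alpha}

    gives the maximum horizontal travel distance, within which the probability-related function mentioned above grades areas of lower and higher risk.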

  11. Comparison of the historical record of earthquake hazard with seismic-hazard models for New Zealand and the continental United States

    USGS Publications Warehouse

    Stirling, M.; Petersen, M.

    2006-01-01

    We compare the historical record of earthquake hazard experienced at 78 towns and cities (sites) distributed across New Zealand and the continental United States with the hazard estimated from the national probabilistic seismic-hazard (PSH) models for the two countries. The two PSH models are constructed with similar methodologies and data. Our comparisons show a tendency for the PSH models to slightly exceed the historical hazard in New Zealand and westernmost continental United States interplate regions, but show lower hazard than that of the historical record in the continental United States intraplate region. Factors such as non-Poissonian behavior, parameterization of active fault data in the PSH calculations, and uncertainties in estimation of ground-motion levels from historical felt intensity data for the interplate regions may have led to the higher-than-historical levels of hazard at the interplate sites. In contrast, the less-than-historical hazard for the remaining continental United States (intraplate) sites may be largely due to site conditions not having been considered at the intraplate sites, and uncertainties in correlating ground-motion levels to historical felt intensities. The study also highlights the importance of evaluating PSH models at more than one region, because the conclusions reached on the basis of a solely interplate or intraplate study would be very different.

  12. Potential of weight of evidence modelling for gully erosion hazard assessment in Mbire District - Zimbabwe

    NASA Astrophysics Data System (ADS)

    Dube, F.; Nhapi, I.; Murwira, A.; Gumindoga, W.; Goldin, J.; Mashauri, D. A.

    Gully erosion is an environmental concern, particularly in areas where landcover has been modified by human activities. This study assessed the extent to which the potential of gully erosion could be successfully modelled as a function of seven environmental factors (landcover, soil type, distance from river, distance from road, Sediment Transport Index (STI), Stream Power Index (SPI) and Wetness Index (WI)) using GIS-based Weight of Evidence Modelling (WEM) in the Mbire District of Zimbabwe. Results show that out of the seven studied factors affecting gully erosion, five were significantly correlated (p < 0.05) to gully occurrence, namely landcover, soil type, distance from river, STI and SPI. Two factors, WI and distance from road, were not significantly correlated to gully occurrence (p > 0.05). A gully erosion hazard map showed that 78% of the very high hazard class area is within a distance of 250 m from rivers. Model validation indicated that 70% of the validation set of gullies were in the high hazard and very high hazard classes. The resulting map of areas susceptible to gully erosion has a prediction accuracy of 67.8%. The predictive capability of the weight of evidence model in this study suggests that landcover, soil type, distance from river, STI and SPI are useful in creating a gully erosion hazard map but may not be sufficient to produce a valid map of gully erosion hazard.
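
    For reference, the core of a weight of evidence model is the pair of weights computed for each binary evidence layer from its overlap with known gully cells; a minimal sketch (Python, with the rasters assumed to be flattened boolean arrays over the study-area cells):

      import numpy as np

      def wofe_weights(evidence, gullies, eps=0.5):
          """Positive and negative weights for one binary evidence layer;
          eps is a continuity correction that avoids log(0)."""
          d, b = gullies, evidence
          p_b_d  = (np.sum(b & d)  + eps) / (np.sum(d)  + 2 * eps)   # P(B | D)
          p_b_nd = (np.sum(b & ~d) + eps) / (np.sum(~d) + 2 * eps)   # P(B | not D)
          w_plus  = np.log(p_b_d / p_b_nd)               # where evidence present
          w_minus = np.log((1 - p_b_d) / (1 - p_b_nd))   # where evidence absent
          return w_plus, w_minus, w_plus - w_minus       # contrast = W+ - W-

      # Hypothetical layers: gully presence and "within 250 m of a river"
      rng = np.random.default_rng(0)
      gullies  = rng.random(10000) < 0.05
      evidence = rng.random(10000) < 0.30
      print(wofe_weights(evidence, gullies))

      # Posterior susceptibility (log-odds) = prior log-odds + the appropriate
      # weight from every evidence layer, assuming conditional independence.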

  13. Coincidence Proportional Counter

    DOEpatents

    Manley, J H

    1950-11-21

    A coincidence proportional counter having a plurality of collecting electrodes so disposed as to measure the range or energy spectrum of an ionizing particle-emitting source such as an alpha source, is disclosed.

  14. Snakes as hazards: modelling risk by chasing chimpanzees.

    PubMed

    McGrew, William C

    2015-04-01

    Snakes are presumed to be hazards to primates, including humans, by the snake detection hypothesis (Isbell in J Hum Evol 51:1-35, 2006; Isbell, The fruit, the tree, and the serpent. Why we see so well, 2009). Quantitative, systematic data to test this idea are lacking for the behavioural ecology of living great apes and human foragers. An alternative proxy is snakes encountered by primatologists seeking, tracking, and observing wild chimpanzees. We present 4 years of such data from Mt. Assirik, Senegal. We encountered 14 species of snakes a total of 142 times. Almost two-thirds of encounters were with venomous snakes. Encounters occurred most often in forest and least often in grassland, and more often in the dry season. The hypothesis seems to be supported, if frequency of encounter reflects selective risk of morbidity or mortality. PMID:25600837

  15. Neural network modeling for regional hazard assessment of debris flow in Lake Qionghai Watershed, China

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Guo, H. C.; Zou, R.; Wang, L. J.

    2006-04-01

    This paper presents a neural network (NN) based model to assess the regional hazard degree of debris flows in Lake Qionghai Watershed, China. The NN model was used as an alternative to the more conventional linear model MFCAM (multi-factor composite assessment model) in order to effectively handle the nonlinearity and uncertainty inherent in debris flow hazard analysis. The NN model was configured using a three-layer structure with eight input nodes and one output node, and the number of nodes in the hidden layer was determined through an iterative process of varying the number of nodes in the hidden layer until an optimal performance was achieved. The eight variables used to represent the eight input nodes include density of debris flow gullies, degree of weathering of rocks, active fault density, area percentage of slope land greater than 25° of the total land (APL25), frequency of flooding hazards, average covariance of monthly precipitation over 10 years (ACMP10), average days with rainfall >25 mm over 10 years (25D10Y), and percentage of cultivated land with slope greater than 25° of the total cultivated land (PCL25). The output node represents the hazard-degree ranks (HDR). The model was trained with the 35 sets of data obtained from previous research reported in the literature, and an explicit uncertainty analysis was undertaken to address the uncertainty in model training and prediction. Before the NN model was extrapolated to Lake Qionghai Watershed, a validation case, different from the above data, was conducted. In addition, the performances of the NN model and the MFCAM were compared. The NN model predicted that the HDRs of the five sub-watersheds in the Lake Qionghai Watershed were IV, IV, III, III, and IV-V, indicating that the study area covers normal hazard and severe hazard areas. Based on the NN model results, debris flow management and economic development strategies in the study area are proposed for each sub-watershed.
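
    A present-day equivalent of such an eight-input, single-hidden-layer network is straightforward to sketch with scikit-learn; the data below are random placeholders for the 35 published training samples, and the hidden-layer size is illustrative rather than the one found by the authors' iterative search:

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      X = rng.random((35, 8))                    # 8 conditioning factors per sample
      y = rng.integers(1, 6, 35).astype(float)   # hazard-degree ranks I..V as 1..5

      scaler = StandardScaler().fit(X)
      model = MLPRegressor(hidden_layer_sizes=(6,), max_iter=5000, random_state=0)
      model.fit(scaler.transform(X), y)

      # Round the network output back to hazard-degree ranks I..V
      ranks = np.clip(np.rint(model.predict(scaler.transform(X))), 1, 5)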

  16. Teamwork tools and activities within the hazard component of the Global Earthquake Model

    NASA Astrophysics Data System (ADS)

    Pagani, M.; Weatherill, G.; Monelli, D.; Danciu, L.

    2013-05-01

    The Global Earthquake Model (GEM) is a public-private partnership aimed at supporting and fostering a global community of scientists and engineers working in the fields of seismic hazard and risk assessment. In the hazard sector, in particular, GEM recognizes the importance of local ownership and leadership in the creation of seismic hazard models. For this reason, over the last few years, GEM has been promoting different activities in the context of seismic hazard analysis ranging, for example, from regional projects targeted at the creation of updated seismic hazard studies to the development of a new open-source seismic hazard and risk calculation software called OpenQuake-engine (http://globalquakemodel.org). In this communication we'll provide a tour of the various activities completed, such as the new ISC-GEM Global Instrumental Catalogue, and of currently on-going initiatives like the creation of a suite of tools for the creation of PSHA input models. Discussion, comments and criticism by the colleagues in the audience will be highly appreciated.

  17. Global Volcano Model: progress towards an international co-ordinated network for volcanic hazard and risk

    NASA Astrophysics Data System (ADS)

    Loughlin, Susan

    2013-04-01

    GVM is a growing international collaboration that aims to create a sustainable, accessible information platform on volcanic hazard and risk. GVM is a network that aims to co-ordinate and integrate the efforts of the international volcanology community. Major international initiatives and partners such as the Smithsonian Institution - Global Volcanism Program, State University of New York at Buffalo - VHub, Earth Observatory of Singapore - WOVOdat and many others underpin GVM. Activities currently include: design and development of databases of volcano data, volcanic hazards, vulnerability and exposure with internationally agreed metadata standards; establishment of methodologies for analysis of the data (e.g. hazard and exposure indices) to inform risk assessment; development of complementary hazards models and create relevant hazards and risk assessment tools. GVM acts through establishing task forces to deliver explicit deliverables in finite periods of time. GVM has a task force to deliver a global assessment of volcanic risk for UN ISDR, a task force for indices, and a task force for volcano deformation from satellite observations. GVM is organising a Volcano Best Practices workshop in 2013. A recent product of GVM is a global database on large magnitude explosive eruptions. There is ongoing work to develop databases on debris avalanches, lava dome hazards and ash hazard. GVM aims to develop the capability to anticipate future volcanism and its consequences.

  18. Tuberculosis reinfection rate as a proportion of total infection rate correlates with the logarithm of the incidence rate: a mathematical model

    PubMed Central

    Uys, Pieter W; van Helden, Paul D; Hargrove, John W

    2008-01-01

    In a significant number of instances, an episode of tuberculosis can be attributed to a reinfection event. Because reinfection is more likely in high incidence regions than in regions of low incidence, more tuberculosis (TB) cases due to reinfection could be expected in high-incidence regions than in low-incidence regions. Empirical data from regions with various incidence rates appear to confirm the conjecture that, in fact, the incidence rate due to reinfection only, as a proportion of all cases, correlates with the logarithm of the incidence rate, rather than with the incidence rate itself. A theoretical model that supports this conjecture is presented. A Markov model was used to obtain a relationship between incidence and reinfection rates. It was assumed in this model that the rate of reinfection is a multiple, ρ (the reinfection factor), of the rate of first-time infection, λ. The results obtained show a relationship between the proportion of cases due to reinfection and the rate of incidence that is approximately logarithmic for a range of values of the incidence rate typical of those observed in communities across the globe. A value of ρ is determined such that the relationship between the proportion of cases due to reinfection and the logarithm of the incidence rate closely correlates with empirical data. From a purely theoretical investigation, it is shown that a simple relationship can be expected between the logarithm of the incidence rates and the proportions of cases due to reinfection after a prior episode of TB. This relationship is sustained by a rate of reinfection that is higher than the rate of first-time infection and this latter consideration underscores the great importance of monitoring recovered TB cases for repeat disease episodes, especially in regions where TB incidence is high. Awareness of this may assist in attempts to control the epidemic. PMID:18577502
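
    One back-of-the-envelope reading of that assumption (an illustration only, not the paper's Markov formulation): if $S$ and $R$ denote the numbers at risk of first-time infection and of reinfection, incident cases arise at rates $\lambda S$ and $\rho\lambda R$, so the proportion of cases due to reinfection is

      p_{re} = \frac{\rho\lambda R}{\lambda S + \rho\lambda R} = \frac{\rho R}{S + \rho R}

    which rises with incidence because high-incidence settings carry a larger pool $R$ of previously infected people.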

  19. Tuberculosis reinfection rate as a proportion of total infection rate correlates with the logarithm of the incidence rate: a mathematical model.

    PubMed

    Uys, Pieter W; van Helden, Paul D; Hargrove, John W

    2009-01-01

    In a significant number of instances, an episode of tuberculosis can be attributed to a reinfection event. Because reinfection is more likely in high incidence regions than in regions of low incidence, more tuberculosis (TB) cases due to reinfection could be expected in high-incidence regions than in low-incidence regions. Empirical data from regions with various incidence rates appear to confirm the conjecture that, in fact, the incidence rate due to reinfection only, as a proportion of all cases, correlates with the logarithm of the incidence rate, rather than with the incidence rate itself. A theoretical model that supports this conjecture is presented. A Markov model was used to obtain a relationship between incidence and reinfection rates. It was assumed in this model that the rate of reinfection is a multiple, rho (the reinfection factor), of the rate of first-time infection, lambda. The results obtained show a relationship between the proportion of cases due to reinfection and the rate of incidence that is approximately logarithmic for a range of values of the incidence rate typical of those observed in communities across the globe. A value of rho is determined such that the relationship between the proportion of cases due to reinfection and the logarithm of the incidence rate closely correlates with empirical data. From a purely theoretical investigation, it is shown that a simple relationship can be expected between the logarithm of the incidence rates and the proportions of cases due to reinfection after a prior episode of TB. This relationship is sustained by a rate of reinfection that is higher than the rate of first-time infection and this latter consideration underscores the great importance of monitoring recovered TB cases for repeat disease episodes, especially in regions where TB incidence is high. Awareness of this may assist in attempts to control the epidemic. PMID:18577502

  20. Adaptation through proportion.

    PubMed

    Xiong, Liyang; Shi, Wenjia; Tang, Chao

    2016-01-01

    Adaptation is a ubiquitous feature in biological sensory and signaling networks. It has been suggested that adaptive systems may follow certain simple design principles across diverse organisms, cells and pathways. One class of networks that can achieve adaptation utilizes an incoherent feedforward control, in which two parallel signaling branches exert opposite but proportional effects on the output at steady state. In this paper, we generalize this adaptation mechanism by establishing a steady-state proportionality relationship among a subset of nodes in a network. Adaptation can be achieved by using any two nodes in the sub-network to respectively regulate the output node positively and negatively. We focus on enzyme networks and first identify basic regulation motifs consisting of two and three nodes that can be used to build small networks with proportional relationships. Larger proportional networks can then be constructed modularly similar to LEGOs. Our method provides a general framework to construct and analyze a class of proportional and/or adaptation networks with arbitrary size, flexibility and versatile functional features. PMID:27526863
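
    A minimal sketch of the incoherent feedforward ("sniffer") motif that the paper generalizes: the inhibitory node settles at a level proportional to the input, so the output returns to the same baseline after any step in the input (rate constants below are illustrative):

      import numpy as np
      from scipy.integrate import solve_ivp

      def sniffer(t, y, S, k1=2.0, k2=2.0, k3=1.0, k4=1.0):
          """dR/dt = k1*S - k2*X*R, dX/dt = k3*S - k4*X.
          At steady state X = (k3/k4)*S, so R -> k1*k4/(k2*k3) for any S."""
          R, X = y
          return [k1 * S - k2 * X * R, k3 * S - k4 * X]

      # Step the input from S=1 to S=2: R pulses, then re-adapts to 1.0
      sol1 = solve_ivp(sniffer, (0, 20), [1.0, 1.0], args=(1.0,))
      sol2 = solve_ivp(sniffer, (0, 20), sol1.y[:, -1], args=(2.0,))
      print(round(sol2.y[0, -1], 3))   # ~1.0 = k1*k4/(k2*k3)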

  1. Estimating piecewise exponential frailty model with changing prior for baseline hazard function

    NASA Astrophysics Data System (ADS)

    Thamrin, Sri Astuti; Lawi, Armin

    2016-02-01

    Piecewise exponential models provide a very flexible framework for modelling univariate survival data. They can be used to estimate the effects of different covariates that influence the survival times. Although in a strict sense it is a parametric model, a piecewise exponential hazard can approximate any shape of parametric baseline hazard. In the parametric baseline hazard, the hazard function for each individual may depend on a set of risk factors or explanatory variables. In practice, however, the model usually cannot include all relevant risk factors, because some are unknown or not measurable, and the variation they induce still needs to be accounted for. This unknown and unobservable risk factor in the hazard function is often termed the individual's heterogeneity, or frailty. This paper analyses the effects of unobserved population heterogeneity in patients' survival times. The issue of model choice through variable selection is also considered. A sensitivity analysis is conducted to assess the influence of the prior for each parameter. We used the Markov chain Monte Carlo method to compute the Bayesian estimator on kidney infection data. The results obtained show that sex and frailty are substantially associated with survival in this study and that the models are quite sensitive to the choice of the two different priors.
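
    In the notation commonly used for this class of models (a sketch consistent with, though not copied from, the paper), the hazard for individual $i$ with frailty $z_i$, covariates $\mathbf{x}_i$ and a baseline that is constant on the grid $0 = s_0 < s_1 < \dots < s_J$ is

      h_i(t) = z_i \,\lambda_j \exp(\mathbf{x}_i^{\top}\boldsymbol{\beta}), \qquad t \in (s_{j-1}, s_j],
      S_i(t) = \exp\!\Big\{ -z_i \exp(\mathbf{x}_i^{\top}\boldsymbol{\beta}) \sum_{j} \lambda_j \,\Delta_j(t) \Big\}

    where $\Delta_j(t)$ is the time spent in interval $j$ up to $t$ and $z_i$ is typically given a gamma or log-normal prior; the "changing prior" of the title refers to the prior placed on the interval-specific baseline rates $\lambda_j$.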

  2. Applying the Land Use Portfolio Model with Hazus to analyse risk from natural hazard events

    USGS Publications Warehouse

    Dinitz, Laura B.; Taketa, Richard A.

    2013-01-01

    This paper describes and demonstrates the integration of two geospatial decision-support systems for natural-hazard risk assessment and management. Hazus is a risk-assessment tool developed by the Federal Emergency Management Agency to identify risks and estimate the severity of risk from natural hazards. The Land Use Portfolio Model (LUPM) is a risk-management tool developed by the U.S. Geological Survey to evaluate plans or actions intended to reduce risk from natural hazards. We analysed three mitigation policies for one earthquake scenario in the San Francisco Bay area to demonstrate the added value of using Hazus and the LUPM together. The demonstration showed that Hazus loss estimates can be input to the LUPM to obtain estimates of losses avoided through mitigation, rates of return on mitigation investment, and measures of uncertainty. Together, they offer a more comprehensive approach to help with decisions for reducing risk from natural hazards.
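
    The kind of bookkeeping described above can be sketched with made-up numbers: hazard-model loss estimates with and without mitigation are turned into losses avoided and a rate of return on the mitigation investment. This is purely illustrative arithmetic, not the Hazus or LUPM interface.

```python
# Toy risk-management bookkeeping; all figures are hypothetical.
loss_without_mitigation = 120.0e6   # expected scenario loss with no action ($)
loss_with_mitigation = 85.0e6       # expected scenario loss after mitigation ($)
mitigation_cost = 20.0e6            # cost of the mitigation policy ($)
event_probability = 0.10            # chance the scenario occurs over the horizon

losses_avoided = loss_without_mitigation - loss_with_mitigation
expected_benefit = event_probability * losses_avoided
return_on_investment = (expected_benefit - mitigation_cost) / mitigation_cost

print(f"losses avoided if the event occurs: ${losses_avoided:,.0f}")
print(f"expected benefit over the horizon : ${expected_benefit:,.0f}")
print(f"return on mitigation investment   : {return_on_investment:.1%}")
```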

  3. The influence of hazard models on GIS-based regional risk assessments and mitigation policies

    USGS Publications Warehouse

    Bernknopf, R.L.; Rabinovici, S.J.M.; Wood, N.J.; Dinitz, L.B.

    2006-01-01

    Geographic information systems (GIS) are important tools for understanding and communicating the spatial distribution of risks associated with natural hazards in regional economies. We present a GIS-based decision support system (DSS) for assessing community vulnerability to natural hazards and evaluating potential mitigation policy outcomes. The Land Use Portfolio Modeler (LUPM) integrates earth science and socioeconomic information to predict the economic impacts of loss-reduction strategies. However, the potential use of such systems in decision making may be limited when multiple but conflicting interpretations of the hazard are available. To explore this problem, we conduct a policy comparison using the LUPM to test the sensitivity of three available assessments of earthquake-induced lateral-spread ground failure susceptibility in a coastal California community. We find that the uncertainty regarding the interpretation of the science inputs can influence the development and implementation of natural hazard management policies. Copyright © 2006 Inderscience Enterprises Ltd.

  4. Structural estimation of a principal-agent model: moral hazard in medical insurance.

    PubMed

    Vera-Hernández, Marcos

    2003-01-01

    Despite the importance of principal-agent models in the development of modern economic theory, there are few estimations of these models. I recover the estimates of a principal-agent model and obtain an approximation to the optimal contract. The results show that out-of-pocket payments follow a concave profile with respect to costs of treatment. I estimate the welfare loss due to moral hazard, taking into account income effects. I also propose a new measure of moral hazard based on the conditional correlation between contractible and noncontractible variables. PMID:15025029

  5. Development and Analysis of a Hurricane Hazard Model for Disaster Risk Assessment in Central America

    NASA Astrophysics Data System (ADS)

    Pita, G. L.; Gunasekera, R.; Ishizawa, O. A.

    2014-12-01

    Hurricane and tropical storm activity in Central America has, over the past decades, consistently caused thousands of casualties, significant population displacement, and substantial property and infrastructure losses. As a component for estimating future potential losses, we present a new regional probabilistic hurricane hazard model for Central America. Currently, there are very few openly available hurricane hazard models for Central America. The resulting hazard model will be used in conjunction with exposure and vulnerability components as part of a World Bank project to create country disaster risk profiles that will help improve risk estimation and provide decision makers with better tools to quantify disaster risk. This paper describes the hazard model methodology, which involves the development of a wind field model that simulates the gust speeds at terrain height at a fine resolution. The HURDAT dataset has been used in this study to create synthetic events and to assess average hurricane landfall angles and their variability at each location. The hazard model then estimates the average track angle at multiple geographical locations in order to provide a realistic range of possible hurricane paths that will be used for risk analyses in all the Central American countries. This probabilistic hurricane hazard model is also useful for relating synthetic wind estimates to loss and damage data to develop and calibrate existing empirical building vulnerability curves. To assess the accuracy and applicability, modeled results are evaluated against historical events, their tracks and wind fields. Deeper analyses of the results are also presented, with special reference to Guatemala. The findings, interpretations, and conclusions expressed in this paper are entirely those of the authors. They do not necessarily represent the views of the International Bank for Reconstruction and Development/World Bank and its affiliated organizations, or those of the

  6. The Balance-Scale Task Revisited: A Comparison of Statistical Models for Rule-Based and Information-Integration Theories of Proportional Reasoning

    PubMed Central

    Hofman, Abe D.; Visser, Ingmar; Jansen, Brenda R. J.; van der Maas, Han L. J.

    2015-01-01

    We propose and test three statistical models for the analysis of children’s responses to the balance scale task, a seminal task to study proportional reasoning. We use a latent class modelling approach to formulate a rule-based latent class model (RB LCM) following from a rule-based perspective on proportional reasoning and a new statistical model, the Weighted Sum Model, following from an information-integration approach. Moreover, a hybrid LCM using item covariates is proposed, combining aspects of both a rule-based and information-integration perspective. These models are applied to two different datasets, a standard paper-and-pencil test dataset (N = 779), and a dataset collected within an online learning environment that included direct feedback, time-pressure, and a reward system (N = 808). For the paper-and-pencil dataset the RB LCM resulted in the best fit, whereas for the online dataset the hybrid LCM provided the best fit. The standard paper-and-pencil dataset yielded more evidence for distinct solution rules than the online data set in which quantitative item characteristics are more prominent in determining responses. These results shed new light on the discussion on sequential rule-based and information-integration perspectives of cognitive development. PMID:26505905

  7. New Activities of the U.S. National Tsunami Hazard Mitigation Program, Mapping and Modeling Subcommittee

    NASA Astrophysics Data System (ADS)

    Wilson, R. I.; Eble, M. C.

    2013-12-01

    The U.S. National Tsunami Hazard Mitigation Program (NTHMP) is comprised of representatives from coastal states and federal agencies who, under the guidance of NOAA, work together to develop protocols and products to help communities prepare for and mitigate tsunami hazards. Within the NTHMP are several subcommittees responsible for complementary aspects of tsunami assessment, mitigation, education, warning, and response. The Mapping and Modeling Subcommittee (MMS) is comprised of state and federal scientists who specialize in tsunami source characterization, numerical tsunami modeling, inundation map production, and warning forecasting. Until September 2012, much of the work of the MMS was authorized through the Tsunami Warning and Education Act, an Act that has since expired but the spirit of which is being adhered to in parallel with reauthorization efforts. Over the past several years, the MMS has developed guidance and best practices for states and territories to produce accurate and consistent tsunami inundation maps for community level evacuation planning, and has conducted benchmarking of numerical inundation models. Recent tsunami events have highlighted the need for other types of tsunami hazard analyses and products for improving evacuation planning, vertical evacuation, maritime planning, land-use planning, building construction, and warning forecasts. As the program responsible for producing accurate and consistent tsunami products nationally, the NTHMP-MMS is initiating a multi-year plan to accomplish the following: 1) Create and build on existing demonstration projects that explore new tsunami hazard analysis techniques and products, such as maps identifying areas of strong currents and potential damage within harbors as well as probabilistic tsunami hazard analysis for land-use planning. 2) Develop benchmarks for validating new numerical modeling techniques related to current velocities and landslide sources. 3) Generate guidance and protocols for

  8. Three multimedia models used at hazardous and radioactive waste sites

    SciTech Connect

    Moskowitz, P.D.; Pardi, R.; Fthenakis, V.M.; Holtzman, S.; Sun, L.C.; Rambaugh, J.O.; Potter, S.

    1996-02-01

    Multimedia models are used commonly in the initial phases of the remediation process where technical interest is focused on determining the relative importance of various exposure pathways. This report provides an approach for evaluating and critically reviewing the capabilities of multimedia models. This study focused on three specific models: MEPAS Version 3.0, MMSOILS Version 2.2, and PRESTO-EPA-CPG Version 2.0. These models evaluate the transport and fate of contaminants from source to receptor through more than a single pathway. The presence of radioactive and mixed wastes at a site poses special problems. Hence, in this report, restrictions associated with the selection and application of multimedia models for sites contaminated with radioactive and mixed wastes are highlighted. This report begins with a brief introduction to the concept of multimedia modeling, followed by an overview of the three models. The remaining chapters present more technical discussions of the issues associated with each compartment and their direct application to the specific models. In these analyses, the following components are discussed: source term; air transport; ground water transport; overland flow, runoff, and surface water transport; food chain modeling; exposure assessment; dosimetry/risk assessment; uncertainty; default parameters. The report concludes with a description of evolving updates to the model; these descriptions were provided by the model developers.

  9. Modeling framework to link climate, hydrology and flood hazards: An application to Sacramento, California

    NASA Astrophysics Data System (ADS)

    Kim, B.; David, C. H.; Druffel-Rodriguez, R.; Sanders, B. F.; Famiglietti, J. S.

    2013-12-01

    The City of Sacramento and the broader delta region may be the most flood vulnerable urbanized area in the United States. Management of flood risk here and elsewhere requires an understanding of flooding hazards, which is in turn linked to California hydrology, climate, development and flood control infrastructure. A modeling framework is presented here to make predictions of flooding hazards (e.g., depth and velocity) at the household scale (personalized flood risk information), and to study how these predictions could change under different climate change, land-use change, and infrastructure adaptation scenarios. The framework couples a statewide hydrologic model (RAPID) that predicts runoff and streamflow to a city-scale hydrodynamic model (BreZo) capable of predicting levee-breach flows and overland flows into urbanized lowlands. Application of the framework to the Sacramento area is presented here, with a focus on data needs, computational demands, results and hazard communication strategies, for selected flooding scenarios.

  10. The Integrated Nursing Pathway: An Innovative Collaborative Model to Increase the Proportion of Baccalaureate-Prepared Nurses.

    PubMed

    Goode, Colleen J; Preheim, Gayle J; Bonini, Susan; Case, Nancy K; VanderMeer, Jennifer; Iannelli, Gina

    2016-01-01

    This manuscript describes a collaborative, seamless program between a community college and a university college of nursing designed to increase the number of nurses prepared with a baccalaureate degree. The three-year Integrated Nursing Pathway provides community college students with a non-nursing associate degree, early introduction to nursing, and seamless progression through BSN education. The model includes dual admission and advising and is driven by the need for collaboration with community colleges, the need to increase the percentage of racial-ethnic minority students, the shortage of faculty, and employer preferences for BSN graduates. PMID:27209872

  11. Keep It in Proportion.

    ERIC Educational Resources Information Center

    Snider, Richard G.

    1985-01-01

    The ratio factors approach involves recognizing a given fraction, then multiplying so that units cancel. This approach, which is grounded in concrete operational thinking patterns, provides a standard for science ratio and proportion problems. Examples are included for unit conversions, mole problems, molarity, speed/density problems, and…
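
    A short worked illustration of the ratio-factors idea, with arbitrary numbers: each conversion factor is a fraction equal to one, written so that the unwanted units cancel.

```python
# Ratio-factor (factor-label) conversions: multiply by fractions equal to one,
# written so that unwanted units cancel.  All values are illustrative.

speed_km_per_h = 72.0
speed_m_per_s = speed_km_per_h * (1000.0 / 1.0) * (1.0 / 3600.0)
#               km/h           *  m per km      *  h per s      ->  m/s
print(f"{speed_km_per_h} km/h = {speed_m_per_s} m/s")            # 20.0 m/s

grams_NaCl = 11.7
molar_mass = 58.44     # g/mol for NaCl
volume_L = 0.500
molarity = grams_NaCl * (1.0 / molar_mass) / volume_L
#          g           *  mol per g        /  L                 ->  mol/L
print(f"{grams_NaCl} g NaCl in {volume_L} L = {molarity:.2f} M") # ~0.40 M
```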

  12. Proportioning Cats and Rats

    ERIC Educational Resources Information Center

    Markworth, Kimberly A.

    2012-01-01

    Students may be able to set up a relevant proportion and solve through cross multiplication. However, this ability may not reflect the desired mathematical understanding of the covarying relationship that exists between two variables or the equivalent relationship that exists between two ratios. Students who lack this understanding are likely to…

  13. Selecting Proportional Reasoning Tasks

    ERIC Educational Resources Information Center

    de la Cruz, Jessica A.

    2013-01-01

    With careful consideration given to task selection, students can construct their own solution strategies to solve complex proportional reasoning tasks while the teacher's instructional goals are still met. Several aspects of the tasks should be considered including their numerical structure, context, difficulty level, and the strategies they are…

  14. Modeling Exposure to Persistent Chemicals in Hazard and Risk Assessment

    SciTech Connect

    Cowan-Ellsberry, Christina E.; McLachlan, Michael S.; Arnot, Jon A.; MacLeod, Matthew; McKone, Thomas E.; Wania, Frank

    2008-11-01

    Fate and exposure modeling has not thus far been explicitly used in the risk profile documents prepared to evaluate significant adverse effect of candidate chemicals for either the Stockholm Convention or the Convention on Long-Range Transboundary Air Pollution. However, we believe models have considerable potential to improve the risk profiles. Fate and exposure models are already used routinely in other similar regulatory applications to inform decisions, and they have been instrumental in building our current understanding of the fate of POP and PBT chemicals in the environment. The goal of this paper is to motivate the use of fate and exposure models in preparing risk profiles in the POP assessment procedure by providing strategies for incorporating and using models. The ways that fate and exposure models can be used to improve and inform the development of risk profiles include: (1) Benchmarking the ratio of exposure and emissions of candidate chemicals to the same ratio for known POPs, thereby opening the possibility of combining this ratio with the relative emissions and relative toxicity to arrive at a measure of relative risk. (2) Directly estimating the exposure of the environment, biota and humans to provide information to complement measurements, or where measurements are not available or are limited. (3) Identifying the key processes and chemical and/or environmental parameters that determine the exposure, thereby allowing the effective prioritization of research or measurements to improve the risk profile. (4) Predicting future time trends including how quickly exposure levels in remote areas would respond to reductions in emissions. Currently there is no standardized consensus model for use in the risk profile context. Therefore, to choose the appropriate model the risk profile developer must evaluate how appropriate an existing model is for a specific setting and whether the assumptions and input data are relevant in the context of the application

  15. Modeling exposure to persistent chemicals in hazard and risk assessment.

    PubMed

    Cowan-Ellsberry, Christina E; McLachlan, Michael S; Arnot, Jon A; Macleod, Matthew; McKone, Thomas E; Wania, Frank

    2009-10-01

    Fate and exposure modeling has not, thus far, been explicitly used in the risk profile documents prepared for evaluating the significant adverse effect of candidate chemicals for either the Stockholm Convention or the Convention on Long-Range Transboundary Air Pollution. However, we believe models have considerable potential to improve the risk profiles. Fate and exposure models are already used routinely in other similar regulatory applications to inform decisions, and they have been instrumental in building our current understanding of the fate of persistent organic pollutants (POP) and persistent, bioaccumulative, and toxic (PBT) chemicals in the environment. The goal of this publication is to motivate the use of fate and exposure models in preparing risk profiles in the POP assessment procedure by providing strategies for incorporating and using models. The ways that fate and exposure models can be used to improve and inform the development of risk profiles include 1) benchmarking the ratio of exposure and emissions of candidate chemicals to the same ratio for known POPs, thereby opening the possibility of combining this ratio with the relative emissions and relative toxicity to arrive at a measure of relative risk; 2) directly estimating the exposure of the environment, biota, and humans to provide information to complement measurements or where measurements are not available or are limited; 3) identifying the key processes and chemical or environmental parameters that determine the exposure, thereby allowing the effective prioritization of research or measurements to improve the risk profile; and 4) forecasting future time trends, including how quickly exposure levels in remote areas would respond to reductions in emissions. Currently there is no standardized consensus model for use in the risk profile context. Therefore, to choose the appropriate model the risk profile developer must evaluate how appropriate an existing model is for a specific setting and

  16. Early hominin limb proportions.

    PubMed

    Richmond, Brian G; Aiello, Leslie C; Wood, Bernard A

    2002-10-01

    Recent analyses and new fossil discoveries suggest that the evolution of hominin limb length proportions is complex, with evolutionary reversals and a decoupling of proportions within and between limbs. This study takes into account intraspecific variation to test whether or not the limb proportions of four early hominin associated skeletons (AL 288-1, OH 62, BOU-VP-12/1, and KNM-WT 15000) can be considered to be significantly different from one another. Exact randomization methods were used to compare the differences between pairs of fossil skeletons to the differences observed between all possible pairs of individuals within large samples of Gorilla gorilla, Pan troglodytes, Pongo pygmaeus, and Homo sapiens. Although the difference in humerofemoral proportions between OH 62 and AL 288-1 does not exceed variation in the extant samples, it is rare. When humerofemoral midshaft circumferences are compared, the difference between OH 62 and AL 288-1 is fairly common in extant species. This, in combination with error associated with the limb length estimates, suggests that it may be premature to consider H. (or Australopithecus) habilis as having more apelike limb proportions than those in A. afarensis. The humerofemoral index of BOU-VP-12/1 differs significantly from both OH 62 and AL 288-1, but not from KNM-WT 15000. Published length estimates, if correct, suggest that the relative forearm length of BOU-VP-12/1 is unique among hominins, exceeding those of the African apes and resembling the proportions in Pongo. Evidence that A. afarensis exhibited a less apelike upper:lower limb design than A. africanus (and possibly H. habilis) suggests that, if A. afarensis is broadly ancestral to A. africanus, the latter did not simply inherit primitive morphology associated with arboreality, but is derived in this regard. The fact that the limb proportions of OH 62 (and possibly KNM-ER 3735) are no more humanlike than those of AL 288-1 underscores the primitive body design of H
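
    The exact-randomization logic can be sketched as follows: compute the index difference for every possible pair of individuals in an extant reference sample and ask how often it is at least as large as the difference between two fossils. The reference sample and the fossil values below are simulated placeholders, not the published measurements.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# Hypothetical humerofemoral indices (humerus length / femur length x 100)
# for an extant reference sample; a real analysis would use measured specimens.
extant_sample = rng.normal(loc=72.0, scale=3.0, size=40)

fossil_a, fossil_b = 85.0, 74.0               # hypothetical fossil index values
observed_diff = abs(fossil_a - fossil_b)

# Differences between all possible pairs of individuals in the extant sample.
pair_diffs = np.array([abs(x - y) for x, y in combinations(extant_sample, 2)])
share_as_extreme = np.mean(pair_diffs >= observed_diff)

print(f"observed fossil pair difference      : {observed_diff:.1f}")
print(f"share of extant pairs at least as big: {share_as_extreme:.3f}")
```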

  17. Proposal for a probabilistic local level landslide hazard assessment model: The case of Suluktu, Kyrgyzstan

    NASA Astrophysics Data System (ADS)

    Vidar Vangelsten, Bjørn; Fornes, Petter; Cepeda, Jose Mauricio; Ekseth, Kristine Helene; Eidsvig, Unni; Ormukov, Cholponbek

    2015-04-01

    Landslides are a significant threat to human life and the built environment in many parts of Central Asia. To improve understanding of the magnitude of the threat and propose appropriate risk mitigation measures, landslide hazard mapping is needed both at regional and local level. Many different approaches for landslide hazard mapping exist depending on the scale and purpose of the analysis and what input data are available. This paper presents a probabilistic local scale landslide hazard mapping methodology for rainfall triggered landslides, adapted to the relatively dry climate found in South-Western Kyrgyzstan. The GIS based approach makes use of data on topography, geology, land use and soil characteristics to assess landslide susceptibility. Together with a selected rainfall scenario, these data are inserted into a triggering model based on an infinite slope formulation considering pore pressure and suction effects for unsaturated soils. A statistical model based on local landslide data has been developed to estimate landslide run-out. The model links the spatial extension of the landslide to land use and geological features. The model is tested and validated for the town of Suluktu in the Ferghana Valley in South-West Kyrgyzstan. Landslide hazard is estimated for the urban area and the surrounding hillsides. The case makes use of a range of data from different sources, both remote sensing data and in-situ data. Public global data sources are mixed with case specific data obtained from field work. The different data and models have various degrees of uncertainty. To account for this, the hazard model has been inserted into a Monte Carlo simulation framework to produce a probabilistic landslide hazard map identifying areas with high landslide exposure. The research leading to these results has received funding from the European Commission's Seventh Framework Programme [FP7/2007-2013], under grant agreement n° 312972 "Framework to integrate Space-based and in
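
    A compact sketch of two of the ingredients named above: an infinite-slope factor of safety that accounts for pore pressure on the slip surface, wrapped in a Monte Carlo loop over uncertain soil parameters. The parameter distributions are invented for illustration, and the suction effects and run-out statistics of the actual model are omitted.

```python
import numpy as np

def factor_of_safety(c, phi_deg, gamma, depth, slope_deg, pore_pressure):
    """Infinite-slope factor of safety with pore pressure u on the slip plane."""
    beta = np.radians(slope_deg)
    phi = np.radians(phi_deg)
    normal_stress = gamma * depth * np.cos(beta) ** 2 - pore_pressure
    shear_stress = gamma * depth * np.sin(beta) * np.cos(beta)
    return (c + normal_stress * np.tan(phi)) / shear_stress

rng = np.random.default_rng(42)
n = 10_000
# Hypothetical parameter distributions for a single map cell.
c = rng.normal(5.0, 1.5, n).clip(min=0.0)    # effective cohesion, kPa
phi = rng.normal(30.0, 3.0, n)               # effective friction angle, degrees
u = rng.uniform(0.0, 15.0, n)                # pore pressure for a rain scenario, kPa
gamma = 19.0                                 # soil unit weight, kN/m3
depth = 2.0                                  # depth of the slip surface, m

fs = factor_of_safety(c, phi, gamma, depth, slope_deg=35.0, pore_pressure=u)
print(f"probability of failure, P(FS < 1): {np.mean(fs < 1.0):.2f}")
```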

  18. The Framework of a Coastal Hazards Model - A Tool for Predicting the Impact of Severe Storms

    USGS Publications Warehouse

    Barnard, Patrick L.; O'Reilly, Bill; van Ormondt, Maarten; Elias, Edwin; Ruggiero, Peter; Erikson, Li H.; Hapke, Cheryl; Collins, Brian D.; Guza, Robert T.; Adams, Peter N.; Thomas, Julie

    2009-01-01

    The U.S. Geological Survey (USGS) Multi-Hazards Demonstration Project in Southern California (Jones and others, 2007) is a five-year project (FY2007-FY2011) integrating multiple USGS research activities with the needs of external partners, such as emergency managers and land-use planners, to produce products and information that can be used to create more disaster-resilient communities. The hazards being evaluated include earthquakes, landslides, floods, tsunamis, wildfires, and coastal hazards. For the Coastal Hazards Task of the Multi-Hazards Demonstration Project in Southern California, the USGS is leading the development of a modeling system for forecasting the impact of winter storms threatening the entire Southern California shoreline from Pt. Conception to the Mexican border. The modeling system, run in real-time or with prescribed scenarios, will incorporate atmospheric information (that is, wind and pressure fields) with a suite of state-of-the-art physical process models (that is, tide, surge, and wave) to enable detailed prediction of currents, wave height, wave runup, and total water levels. Additional research-grade predictions of coastal flooding, inundation, erosion, and cliff failure will also be performed. Initial model testing, performance evaluation, and product development will be focused on a severe winter-storm scenario developed in collaboration with the Winter Storm Working Group of the USGS Multi-Hazards Demonstration Project in Southern California. Additional offline model runs and products will include coastal-hazard hindcasts of selected historical winter storms, as well as additional severe winter-storm simulations based on statistical analyses of historical wave and water-level data. The coastal-hazards model design will also be appropriate for simulating the impact of storms under various sea level rise and climate-change scenarios. The operational capabilities of this modeling system are designed to provide emergency planners with

  19. Efficient pan-European flood hazard modelling through a combination of statistical and physical models

    NASA Astrophysics Data System (ADS)

    Paprotny, Dominik; Morales Nápoles, Oswaldo

    2016-04-01

    Low-resolution hydrological models are often applied to calculate extreme river discharges and delimitate flood zones on continental and global scale. Still, the computational expense is very large and often limits the extent and depth of such studies. Here, we present a quick yet similarly accurate procedure for flood hazard assessment in Europe. Firstly, a statistical model based on Bayesian Networks is used. It describes the joint distribution of annual maxima of daily discharges of European rivers with variables describing the geographical characteristics of their catchments. It was quantified with 75,000 station-years of river discharge, as well as climate, terrain and land use data. The model's predictions of average annual maxima or discharges with certain return periods are of similar performance to physical rainfall-runoff models applied at continental scale. A database of discharge scenarios - return periods under present and future climate - was prepared for the majority of European rivers. Secondly, those scenarios were used as boundary conditions for the one-dimensional (1D) hydrodynamic model SOBEK. Utilizing 1D instead of 2D modelling conserved computational time, yet gave satisfactory results. The resulting pan-European flood map was contrasted with some local high-resolution studies. Indeed, the comparison shows that, overall, the methods presented here gave similar or better alignment with local studies than the previously released pan-European flood map.

  20. Modelling in infectious diseases: between haphazard and hazard.

    PubMed

    Neuberger, A; Paul, M; Nizar, A; Raoult, D

    2013-11-01

    Modelling of infectious diseases is difficult, if not impossible. No epidemic has ever been truly predicted, rather than being merely noticed when it was already ongoing. Modelling the future course of an epidemic is similarly tenuous, as exemplified by ominous predictions during the last influenza pandemic leading to exaggerated national responses. The continuous evolution of microorganisms, the introduction of new pathogens into the human population and the interactions of a specific pathogen with the environment, vectors, intermediate hosts, reservoir animals and other microorganisms are far too complex to be predictable. Our environment is changing at an unprecedented rate, and human-related factors, which are essential components of any epidemic prediction model, are difficult to foresee in our increasingly dynamic societies. Any epidemiological model is, by definition, an abstraction of the real world, and fundamental assumptions and simplifications are therefore required. Indicator-based surveillance methods and, more recently, Internet biosurveillance systems can detect and monitor outbreaks of infections more rapidly and accurately than ever before. As the interactions between microorganisms, humans and the environment are too numerous and unexpected to be accurately represented in a mathematical model, we argue that prediction and model-based management of epidemics in their early phase are quite unlikely to become the norm. PMID:23879334

  1. Comparing the European (SHARE) and the reference Italian seismic hazard models

    NASA Astrophysics Data System (ADS)

    Visini, Francesco; Meletti, Carlo; D'Amico, Vera; Rovida, Andrea; Stucchi, Massimiliano

    2016-04-01

    A probabilistic seismic hazard evaluation for Europe has been recently released by the SHARE project (www.share-eu.org, Giardini et al., 2013; Woessner et al., 2015). A comparison between the SHARE results for Italy and the official Italian seismic hazard model (MPS04, Stucchi et al., 2011), currently adopted by the building code, has been carried out to identify the main input elements that produce the differences between the two models. The SHARE model shows increased expected values (up to 70%) with respect to the MPS04 model for PGA with 10% probability of exceedance in 50 years. However, looking in detail at all output parameters of both models, we observe that for spectral periods greater than 0.3 s, the reference PSHA for Italy proposes higher values than the SHARE model for many and large areas. This behaviour is mainly guided by the adoption of recent ground-motion prediction equations (GMPEs) that estimate higher values for PGA and for accelerations with periods lower than 0.3 s and lower values for higher periods with respect to older GMPEs used in MPS04. Another important set of tests consisted in analyzing separately the PSHA results obtained by the three source models adopted in SHARE (i.e., area sources, fault sources with background, and a refined smoothed seismicity model), whereas MPS04 only used area sources. Results show that, besides the strong impact of the GMPEs, the differences in the seismic hazard estimates among the three source models are relevant and, in particular, for some selected test sites, the fault-based model returns the lowest estimates of seismic hazard. This result raises questions about the completeness of the fault database, their parameterization and assessment of activity rates, as well as about the impact of the threshold magnitude between faults and background. Giardini D. et al., 2013. Seismic Hazard Harmonization in Europe (SHARE): Online Data Resource, doi:10.12686/SED-00000001-SHARE. Stucchi M. et al., 2011. Seismic Hazard

  2. Modeling and Prediction of Wildfire Hazard in Southern California, Integration of Models with Imaging Spectrometry

    NASA Technical Reports Server (NTRS)

    Roberts, Dar A.; Church, Richard; Ustin, Susan L.; Brass, James A. (Technical Monitor)

    2001-01-01

    Large urban wildfires throughout southern California have caused billions of dollars of damage and significant loss of life over the last few decades. Rapid urban growth along the wildland interface, high fuel loads and a potential increase in the frequency of large fires due to climatic change suggest that the problem will worsen in the future. Improved fire spread prediction and reduced uncertainty in assessing fire hazard would be significant, both economically and socially. Current problems in the modeling of fire spread include the role of plant community differences, spatial heterogeneity in fuels and spatio-temporal changes in fuels. In this research, we evaluated the potential of Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Airborne Synthetic Aperture Radar (AIRSAR) data for providing improved maps of wildfire fuel properties. Analysis concentrated in two areas of Southern California, the Santa Monica Mountains and Santa Barbara Front Range. Wildfire fuel information can be divided into four basic categories: fuel type, fuel load (live green and woody biomass), fuel moisture and fuel condition (live vs senesced fuels). To map fuel type, AVIRIS data were used to map vegetation species using Multiple Endmember Spectral Mixture Analysis (MESMA) and Binary Decision Trees. Green live biomass and canopy moisture were mapped using AVIRIS through analysis of the 980 nm liquid water absorption feature and compared to alternate measures of moisture and field measurements. Woody biomass was mapped using L and P band cross polarimetric data acquired in 1998 and 1999. Fuel condition was mapped using spectral mixture analysis to map green vegetation (green leaves), nonphotosynthetic vegetation (NPV; stems, wood and litter), shade and soil. Summaries describing the potential of hyperspectral and SAR data for fuel mapping are provided by Roberts et al. and Dennison et al. To utilize remotely sensed data to assess fire hazard, fuel-type maps were translated

  3. Multiwire proportional chamber development

    NASA Technical Reports Server (NTRS)

    Doolittle, R. F.; Pollvogt, U.; Eskovitz, A. J.

    1973-01-01

    The development of large area multiwire proportional chambers, to be used as high resolution spatial detectors in cosmic ray experiments, is described. A readout system was developed which uses a directly coupled, lumped element delay-line whose characteristics are independent of the MWPC design. A complete analysis of the delay-line and the readout electronic system shows that a spatial resolution of about 0.1 mm can be reached with the MWPC operating in the strictly proportional region. This was confirmed by measurements with a small MWPC and Fe-55 X-rays. A simplified analysis was carried out to estimate the theoretical limit of spatial resolution due to delta-rays, spread of the discharge along the anode wire, and inclined trajectories. To calculate the gas gain of MWPCs of different geometrical configurations, a method was developed which is based on knowledge of the first Townsend coefficient of the chamber gas.

  4. Incorporating induced seismicity in the 2014 United States National Seismic Hazard Model: results of the 2014 workshop and sensitivity studies

    USGS Publications Warehouse

    Petersen, Mark D.; Mueller, Charles S.; Moschetti, Morgan P.; Hoover, Susan M.; Rubinstein, Justin L.; Llenos, Andrea L.; Michael, Andrew J.; Ellsworth, William L.; McGarr, Arthur F.; Holland, Austin A.; Anderson, John G.

    2015-01-01

    The U.S. Geological Survey National Seismic Hazard Model for the conterminous United States was updated in 2014 to account for new methods, input models, and data necessary for assessing the seismic ground shaking hazard from natural (tectonic) earthquakes. The U.S. Geological Survey National Seismic Hazard Model project uses probabilistic seismic hazard analysis to quantify the rate of exceedance for earthquake ground shaking (ground motion). For the 2014 National Seismic Hazard Model assessment, the seismic hazard from potentially induced earthquakes was intentionally not considered because we had not determined how to properly treat these earthquakes for the seismic hazard analysis. The phrases “potentially induced” and “induced” are used interchangeably in this report; however, it is acknowledged that this classification is based on circumstantial evidence and scientific judgment. For the 2014 National Seismic Hazard Model update, the potentially induced earthquakes were removed from the NSHM’s earthquake catalog, and the documentation states that we would consider alternative models for including induced seismicity in a future version of the National Seismic Hazard Model. As part of the process of incorporating induced seismicity into the seismic hazard model, we evaluate the sensitivity of the seismic hazard from induced seismicity to five parts of the hazard model: (1) the earthquake catalog, (2) earthquake rates, (3) earthquake locations, (4) earthquake Mmax (maximum magnitude), and (5) earthquake ground motions. We describe alternative input models for each of the five parts that represent differences in scientific opinions on induced seismicity characteristics. In this report, however, we do not weight these input models to come up with a preferred final model. Instead, we present a sensitivity study showing uniform seismic hazard maps obtained by applying the alternative input models for induced seismicity. The final model will be released after

  5. Monitor proportional counter

    NASA Technical Reports Server (NTRS)

    Weisskopf, M. C.

    1979-01-01

    An Uhuru class Ar-CO2 gas filled proportional counter sealed with a 1.5 mil beryllium window and sensitive to X-rays in the energy bandwidth from 1.5 to 22 keV is presented. This device is coaligned with the X-ray telescope aboard the Einstein Observatory and takes data as a normal part of the Observatory operations.

  6. Contribution of physical modelling to climate-driven landslide hazard mapping: an alpine test site

    NASA Astrophysics Data System (ADS)

    Vandromme, R.; Desramaut, N.; Baills, A.; Hohmann, A.; Grandjean, G.; Sedan, O.; Mallet, J. P.

    2012-04-01

    The aim of this work is to develop a methodology for integrating climate change scenarios into quantitative hazard assessment and especially their precipitation component. The effects of climate change will be different depending on both the location of the site and the type of landslide considered. Indeed, mass movements can be triggered by different factors. This paper describes a methodology to address this issue and shows an application on an alpine test site. Mechanical approaches represent a solution for quantitative landslide susceptibility and hazard modeling. However, as the quantity and the quality of data are generally very heterogeneous at a regional scale, it is necessary to take into account the uncertainty in the analysis. In this perspective, a new hazard modeling method is developed and integrated in a program named ALICE. This program integrates mechanical stability analysis through GIS software taking into account data uncertainty. This method proposes a quantitative classification of landslide hazard and offers a useful tool to gain time and efficiency in hazard mapping. However, an expertise approach is still necessary to finalize the maps. Indeed, it is the only way to take into account some influential factors in slope stability such as heterogeneity of the geological formations or effects of anthropic interventions. To go further, the alpine test site (Barcelonnette area, France) is being used to integrate climate change scenarios into the ALICE program, and especially their precipitation component, with the help of a hydrological model (GARDENIA) and the regional climate model REMO (Jacob, 2001). From a DEM, land-cover map, geology, geotechnical data and so forth, the program classifies hazard zones depending on geotechnics and different hydrological contexts varying in time. This communication, realized within the framework of the Safeland project, is supported by the European Commission under the 7th Framework Programme for Research and Technological

  7. Fitting additive hazards models for case-cohort studies: a multiple imputation approach.

    PubMed

    Jung, Jinhyouk; Harel, Ofer; Kang, Sangwook

    2016-07-30

    In this paper, we consider fitting semiparametric additive hazards models for case-cohort studies using a multiple imputation approach. In a case-cohort study, main exposure variables are measured only on some selected subjects, but other covariates are often available for the whole cohort. We consider this as a special case of a missing covariate by design. We propose to employ a popular incomplete data method, multiple imputation, for estimation of the regression parameters in additive hazards models. For imputation models, an imputation modeling procedure based on a rejection sampling is developed. A simple imputation modeling that can naturally be applied to a general missing-at-random situation is also considered and compared with the rejection sampling method via extensive simulation studies. In addition, a misspecification aspect in imputation modeling is investigated. The proposed procedures are illustrated using a cancer data example. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26194861
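
    Whatever imputation model is used, such a procedure ends with Rubin's rules for combining the per-imputation fits. The sketch below shows only that combining step, with placeholder coefficient estimates; it does not reproduce the rejection-sampling imputation model or the additive hazards fit from the paper.

```python
import numpy as np

def combine_rubin(estimates, variances):
    """Pool per-imputation point estimates and variances with Rubin's rules."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m = len(estimates)
    q_bar = estimates.mean()            # pooled point estimate
    u_bar = variances.mean()            # within-imputation variance
    b = estimates.var(ddof=1)           # between-imputation variance
    total_var = u_bar + (1 + 1 / m) * b
    return q_bar, np.sqrt(total_var)

# Hypothetical regression coefficients from m = 5 imputed case-cohort datasets.
betas = [0.21, 0.18, 0.25, 0.20, 0.23]
ses = [0.06, 0.05, 0.07, 0.06, 0.06]
beta_hat, se_hat = combine_rubin(betas, np.square(ses))
print(f"pooled estimate: {beta_hat:.3f}  (SE {se_hat:.3f})")
```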

  8. Modeling downwind hazards after an accidental release of chlorine trifluoride

    SciTech Connect

    Lombardi, D.A.; Cheng, Meng-Dawn

    1996-05-01

    A module simulating ClF3 chemical reactions with water vapor and thermodynamic processes in the atmosphere after an accidental release has been developed. This module was linked to HGSYSTEM. Initial model runs simulate the rapid formation of HF and ClO2 after an atmospheric release of ClF3. At distances beyond the first several meters from the release point, HF and ClO2 concentrations pose a greater threat to human health than do ClF3 concentrations. For most of the simulations, ClF3 concentrations rapidly fall below the IDLH. For releases occurring in ambient conditions with low relative humidity and/or ambient temperature, ClF3 concentrations exceed the IDLH up to almost 500 m. The performance of this model needs to be determined for potential release scenarios that will be considered. These release scenarios are currently being developed.

  9. Large area application of a corn hazard model. [Soviet Union

    NASA Technical Reports Server (NTRS)

    Ashburn, P.; Taylor, T. W. (Principal Investigator)

    1981-01-01

    An application test of the crop calendar portion of a corn (maize) stress indicator model developed by the early warning, crop condition assessment component of AgRISTARS was performed over the corn for grain producing regions of the U.S.S.R. during the 1980 crop year using real data. Performance of the crop calendar submodel was favorable; efficiency gains in meteorological data analysis time were on a magnitude of 85 to 90 percent.

  10. Recent Progress in Understanding Natural-Hazards-Generated TEC Perturbations: Measurements and Modeling Results

    NASA Astrophysics Data System (ADS)

    Komjathy, A.; Yang, Y. M.; Meng, X.; Verkhoglyadova, O. P.; Mannucci, A. J.; Langley, R. B.

    2015-12-01

    Natural hazards, including earthquakes, volcanic eruptions, and tsunamis, have been significant threats to humans throughout recorded history. The Global Positioning System satellites have become primary sensors to measure signatures associated with such natural hazards. These signatures typically include GPS-derived seismic deformation measurements, co-seismic vertical displacements, and real-time GPS-derived ocean buoy positioning estimates. Another way to use GPS observables is to compute the ionospheric total electron content (TEC) to measure and monitor post-seismic ionospheric disturbances caused by earthquakes, volcanic eruptions, and tsunamis. Research at the University of New Brunswick (UNB) laid the foundations to model the three-dimensional ionosphere at NASA's Jet Propulsion Laboratory by ingesting ground- and space-based GPS measurements into the state-of-the-art Global Assimilative Ionosphere Modeling (GAIM) software. As an outcome of the UNB and NASA research, new and innovative GPS applications have been invented, including the use of ionospheric measurements to detect tiny fluctuations in the GPS signals between the spacecraft and GPS receivers caused by natural hazards occurring on or near the Earth's surface. We will show examples of early detection of natural-hazard-generated ionospheric signatures using ground-based and space-borne GPS receivers. We will also discuss recent results from the U.S. Real-time Earthquake Analysis for Disaster Mitigation Network (READI) exercises utilizing our algorithms. By studying the propagation properties of ionospheric perturbations generated by natural hazards along with applying sophisticated first-principles physics-based modeling, we are on track to develop new technologies that can potentially save human lives and minimize property damage. It is also expected that ionospheric monitoring of TEC perturbations might become an integral part of existing natural hazards warning systems.

  11. Modelling clustering of natural hazard phenomena and the effect on re/insurance loss perspectives

    NASA Astrophysics Data System (ADS)

    Khare, S.; Bonazzi, A.; Mitas, C.; Jewson, S.

    2015-06-01

    In this paper, we present a conceptual framework for modelling clustered natural hazards that makes use of historical event data as a starting point. We review a methodology for modelling clustered natural hazard processes called Poisson mixtures. This methodology is suited to the application we have in mind as it naturally models processes that yield cross-event correlation (unlike homogeneous Poisson models), has a high degree of tunability to the problem at hand and is analytically tractable. Using European windstorm data as an example, we provide evidence that the historical data show strong evidence of clustering. We then develop Poisson and Clustered simulation models for the data, demonstrating clearly the superiority of the Clustered model which we have implemented using the Poisson mixture approach. We then discuss the implications of including clustering in models of prices of catXL contracts, one of the most commonly used mechanisms for transferring risk between primary insurers and reinsurers. This paper provides a number of unique insights into the impact clustering has on modelled catXL contract prices. The simple modelling example in this paper provides a clear and insightful starting point for practitioners tackling more complex natural hazard risk problems.
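
    The core property exploited here can be shown in a few lines: drawing annual event counts from a mixed Poisson process (a gamma mixture, which yields negative binomial counts) produces year-to-year clustering visible as variance exceeding the mean, something a homogeneous Poisson model cannot reproduce. The rate and dispersion values below are illustrative and not fitted to windstorm data.

```python
import numpy as np

rng = np.random.default_rng(7)
n_years = 100_000
mean_rate = 4.0        # illustrative average number of storms per year

# Homogeneous Poisson: the same rate every year.
poisson_counts = rng.poisson(mean_rate, n_years)

# Poisson mixture: the yearly rate itself is random (gamma-distributed), so
# counts are negative binomial and exhibit clustering (overdispersion).
shape = 2.0                                        # smaller shape -> more clustering
yearly_rates = rng.gamma(shape, mean_rate / shape, n_years)
mixed_counts = rng.poisson(yearly_rates)

for name, x in (("Poisson", poisson_counts), ("Poisson mixture", mixed_counts)):
    print(f"{name:16s} mean={x.mean():.2f}  variance={x.var():.2f}")
```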

  12. Landslides! Engaging students in natural hazards and STEM principles through the exploration of landslide analog models

    NASA Astrophysics Data System (ADS)

    Gochis, E. E.; Lechner, H. N.; Brill, K. A.; Lerner, G.; Ramos, E.

    2014-12-01

    Graduate students at Michigan Technological University developed the "Landslides!" activity to engage middle & high school students participating in summer engineering programs in a hands-on exploration of geologic engineering and STEM (Science, Technology, Engineering and Math) principles. The inquiry-based lesson plan is aligned to Next Generation Science Standards and is appropriate for 6th-12th grade classrooms. During the activity, students focus on the factors contributing to landslide development and the engineering practices used to mitigate slope stability hazards. Students begin by comparing different soil types and by developing predictions of how sediment type may contribute to differences in slope stability. Working in groups, students then build tabletop hill-slope models from the various materials in order to engage in evidence-based reasoning and test their predictions by adding groundwater until each group's modeled slope fails. Lastly, students elaborate on their understanding of landslides by designing 'engineering solutions' to mitigate the hazards observed in each model. Post-evaluations from students demonstrate that they enjoyed the hands-on nature of the activity and the application of engineering principles to mitigate a modeled natural hazard.

  13. Prediction of earthquake hazard by hidden Markov model (around Bilecik, NW Turkey)

    NASA Astrophysics Data System (ADS)

    Can, Ceren; Ergun, Gul; Gokceoglu, Candan

    2014-09-01

    Earthquakes are one of the most important natural hazards to be evaluated carefully in engineering projects, due to the severely damaging effects on human life and human-made structures. The hazard of an earthquake is defined by several approaches and consequently earthquake parameters such as peak ground acceleration occurring on the focused area can be determined. In an earthquake-prone area, the identification of the seismicity patterns is an important task to assess the seismic activities and evaluate the risk of damage and loss along with an earthquake occurrence. As a powerful and flexible framework to characterize the temporal seismicity changes and reveal unexpected patterns, the Poisson hidden Markov model provides a better understanding of the nature of earthquakes. In this paper, a Poisson hidden Markov model is used to predict the earthquake hazard in Bilecik (NW Turkey) as a result of its important geographic location. Bilecik is in close proximity to the North Anatolian Fault Zone and situated between Ankara and Istanbul, the two biggest cities of Turkey. Consequently, major highways, railroads and many engineering structures are being constructed in this area. The annual frequencies of earthquakes that occurred within a radius of 100 km centered on Bilecik, from January 1900 to December 2012, with magnitudes (M) of at least 4.0, are modeled by using a Poisson-HMM. The hazards for the next 35 years, from 2013 to 2047, around the area are obtained from the model by forecasting the annual frequencies of M ≥ 4 earthquakes.
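
    For orientation, the sketch below implements a minimal two-state Poisson hidden Markov model with a forward-algorithm likelihood and a one-step-ahead forecast of the expected annual count; the transition matrix, state rates and count series are invented for illustration and are not the Bilecik catalogue or the fitted model.

```python
import numpy as np
from scipy.stats import poisson

def forward(counts, trans, rates, init):
    """Forward algorithm for a Poisson HMM: log-likelihood of the count series
    and the filtered state probabilities after the last observation."""
    alpha = init * poisson.pmf(counts[0], rates)
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for y in counts[1:]:
        alpha = (alpha @ trans) * poisson.pmf(y, rates)
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik, alpha

# Hypothetical 2-state model: a quiet state and an active state.
trans = np.array([[0.9, 0.1],
                  [0.3, 0.7]])           # state transition probabilities
rates = np.array([2.0, 6.0])             # mean number of M >= 4 events per year
init = np.array([0.5, 0.5])              # initial state distribution

counts = np.array([1, 3, 2, 5, 7, 6, 2, 1, 2, 4])   # illustrative annual counts
loglik, filtered = forward(counts, trans, rates, init)
expected_next = (filtered @ trans) @ rates
print(f"log-likelihood of the series     : {loglik:.2f}")
print(f"forecast expected count next year: {expected_next:.2f}")
```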

  14. Prediction of earthquake hazard by hidden Markov model (around Bilecik, NW Turkey)

    NASA Astrophysics Data System (ADS)

    Can, Ceren Eda; Ergun, Gul; Gokceoglu, Candan

    2014-09-01

    Earthquakes are one of the most important natural hazards to be evaluated carefully in engineering projects, due to the severely damaging effects on human life and human-made structures. The hazard of an earthquake is defined by several approaches and consequently earthquake parameters such as peak ground acceleration occurring on the focused area can be determined. In an earthquake-prone area, the identification of the seismicity patterns is an important task to assess the seismic activities and evaluate the risk of damage and loss along with an earthquake occurrence. As a powerful and flexible framework to characterize the temporal seismicity changes and reveal unexpected patterns, the Poisson hidden Markov model provides a better understanding of the nature of earthquakes. In this paper, a Poisson hidden Markov model is used to predict the earthquake hazard in Bilecik (NW Turkey) as a result of its important geographic location. Bilecik is in close proximity to the North Anatolian Fault Zone and situated between Ankara and Istanbul, the two biggest cities of Turkey. Consequently, major highways, railroads and many engineering structures are being constructed in this area. The annual frequencies of earthquakes that occurred within a radius of 100 km centered on Bilecik, from January 1900 to December 2012, with magnitudes (M) of at least 4.0, are modeled by using a Poisson-HMM. The hazards for the next 35 years, from 2013 to 2047, around the area are obtained from the model by forecasting the annual frequencies of M ≥ 4 earthquakes.

  15. DEVELOPMENT AND ANALYSIS OF AIR QUALITY MODELING SIMULATIONS FOR HAZARDOUS AIR POLLUTANTS

    EPA Science Inventory

    The concentrations of five hazardous air pollutants were simulated using the Community Multi Scale Air Quality (CMAQ) modeling system. Annual simulations were performed over the continental United States for the entire year of 2001 to support human exposure estimates. Results a...

  16. Delta method and bootstrap in linear mixed models to estimate a proportion when no event is observed: application to intralesional resection in bone tumor surgery.

    PubMed

    Francq, Bernard G; Cartiaux, Olivier

    2016-09-10

    Resecting bone tumors requires good cutting accuracy to reduce the occurrence of local recurrence. This issue is considerably reduced with a navigated technology. The estimation of extreme proportions is challenging, especially with small or moderate sample sizes. When no success is observed, the commonly used binomial proportion confidence interval is not suitable, while the rule of three provides a simple solution. Unfortunately, these approaches are unable to differentiate between different unobserved events. Different delta methods and bootstrap procedures are compared in univariate and linear mixed models with simulations and real data under the assumption of normality. The delta method on the z-score and the parametric bootstrap provide similar results, but the delta method requires the estimation of the covariance matrix of the estimates. In mixed models, the observed Fisher information matrix with unbounded variance components should be preferred. The parametric bootstrap, easier to apply, outperforms the delta method for larger sample sizes but may be computationally costly. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26990871
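
    A brief sketch contrasting the rule of three with a parametric bootstrap when no failure is observed in normally distributed cutting errors; the sample size, tolerance and distribution parameters are invented, and the linear mixed model machinery of the paper is not reproduced.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

n = 30
tolerance = 5.0                        # allowed cutting error, mm (illustrative)
errors = rng.normal(1.0, 1.2, n)       # simulated errors; exceedances of 5 mm are rare

# Rule of three: with 0 failures in n trials, an approximate 95% upper bound is 3/n.
print(f"rule-of-three upper bound     : {3 / n:.3f}")

# Parametric bootstrap assuming normality: re-estimate the exceedance
# probability P(error > tolerance) from resimulated samples.
mu_hat, sd_hat = errors.mean(), errors.std(ddof=1)
boot = [norm.sf(tolerance, s.mean(), s.std(ddof=1))
        for s in (rng.normal(mu_hat, sd_hat, n) for _ in range(10_000))]
print(f"plug-in exceedance probability: {norm.sf(tolerance, mu_hat, sd_hat):.2e}")
print(f"bootstrap 95% upper bound     : {np.quantile(boot, 0.95):.2e}")
```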

  17. A Time\\-Dependent Probabilistic Seismic Hazard Model For The Central Apennines (Italy)

    NASA Astrophysics Data System (ADS)

    Akinci, A.; Galadini, F.; Pantosti, D.; Petersen, M.; Malagnini, L.

    2004-12-01

    Earthquake hazard in the Central Apennines, Italy, has been investigated using time-independent probabilistic (simple Poissonian) and time-dependent probabilistic (renewal) models. We developed a hazard model that defines the sources for potential earthquakes and earthquake recurrence relations. Both the characteristic and floating earthquake hypotheses are used for the Central Apennines faults (M>5.9). The models for each fault segment are developed based on recent geological and geophysical studies, as well as historical earthquakes. Historical seismicity, the active faulting framework and the inferred seismogenic behavior (expressed in terms of slip rates, recurrence intervals, and elapsed times) constitute the main quantitative information used in the model assignment. We calculate the background hazard from Mw 4.6-5.9 earthquakes using the historical catalogs of CPTI04 (Working Group, 2004) and obtain the a-value distribution over the study area. This is because these earthquakes occur in areas where they cannot be assigned to a particular fault. Therefore, their recurrence is considered through the historic occurrence of earthquakes, by calculating the magnitude-frequency distributions. We found good agreement between expected earthquake rates from the historical earthquake catalog and the earthquake source model. The probabilities are obtained from time-dependent models characterized by a Brownian Passage Time function on the recurrence interval with an aperiodicity of 0.5. Earthquake hazard is quantified in terms of peak ground acceleration and spectral accelerations for natural periods of 0.2 and 1.0 seconds. The ground motions are determined for rock conditions. We have used the attenuation relationships obtained for the Apennines by Malagnini et al. (2000) together with the relationships predicted by Sabetta and Pugliese (1996) and Ambraseys et al. (1996) for the Italian and European regions, respectively. Generally, time-dependent hazard is increased and the peaks appear to shift to the ESE
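
    The time-dependent step can be illustrated as follows: the Brownian Passage Time (inverse Gaussian) renewal distribution with mean recurrence time T and aperiodicity alpha gives the probability of rupture in the next window given the elapsed time since the last event. The aperiodicity of 0.5 follows the abstract, while the recurrence time, elapsed time and forecast window below are hypothetical, not values for the Central Apennines sources.

```python
import math
from scipy.stats import invgauss

def bpt_conditional_prob(mean_recurrence, aperiodicity, elapsed, window):
    """P(rupture within `window` years | `elapsed` years since the last event),
    using the Brownian Passage Time (inverse Gaussian) renewal distribution."""
    # scipy's invgauss(mu, scale=s) has mean mu*s and coefficient of variation
    # sqrt(mu); choosing mu = alpha**2 and s = T/alpha**2 gives mean T and CV alpha.
    dist = invgauss(aperiodicity ** 2, scale=mean_recurrence / aperiodicity ** 2)
    return (dist.cdf(elapsed + window) - dist.cdf(elapsed)) / dist.sf(elapsed)

# Illustrative fault: 350-year mean recurrence, aperiodicity 0.5,
# 300 years elapsed since the last characteristic event, 35-year window.
p_renewal = bpt_conditional_prob(350.0, 0.5, 300.0, 35.0)
p_poisson = 1.0 - math.exp(-35.0 / 350.0)     # time-independent comparison
print(f"time-dependent (BPT) probability: {p_renewal:.2f}")
print(f"time-independent (Poisson)      : {p_poisson:.2f}")
```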

  18. Exploring the Differences Between the European (SHARE) and the Reference Italian Seismic Hazard Models

    NASA Astrophysics Data System (ADS)

    Visini, F.; Meletti, C.; D'Amico, V.; Rovida, A.; Stucchi, M.

    2014-12-01

    The recent release of the probabilistic seismic hazard assessment (PSHA) model for Europe by the SHARE project (Giardini et al., 2013, www.share-eu.org) raises questions about the comparison between its results for Italy and the official Italian seismic hazard model (MPS04; Stucchi et al., 2011) adopted by the building code. The goal of such a comparison is identifying the main input elements that produce the differences between the two models. It is worthwhile to remark that each PSHA is realized with the data and knowledge available at the time of its release. Therefore, even if a new model provides estimates significantly different from the previous ones, that does not mean that the old models are wrong, but probably that the current knowledge has strongly changed and improved. Looking at the hazard maps with 10% probability of exceedance in 50 years (adopted as the standard input in the Italian building code), the SHARE model shows increased expected values with respect to the MPS04 model, up to 70% for PGA. However, looking in detail at all output parameters of both models, we observe a different behaviour for other spectral accelerations. In fact, for spectral periods greater than 0.3 s, the current reference PSHA for Italy proposes higher values than the SHARE model for many and large areas. This observation suggests that this behaviour could not be due to a different definition of seismic sources and relevant seismicity rates; it seems mainly to be the result of the adoption of recent ground-motion prediction equations (GMPEs) that estimate higher values for PGA and for accelerations with periods lower than 0.3 s, and lower values for higher periods, with respect to old GMPEs. Another important set of tests consisted in analysing separately the PSHA results obtained by the three source models adopted in SHARE (i.e., area sources, fault sources with background, and a refined smoothed seismicity model), whereas MPS04 only uses area sources. Results seem to confirm the

  19. Global river flood hazard maps: hydraulic modelling methods and appropriate uses

    NASA Astrophysics Data System (ADS)

    Townend, Samuel; Smith, Helen; Molloy, James

    2014-05-01

    Flood hazard is not well understood or documented in many parts of the world. Consequently, the (re-)insurance sector now needs to better understand where the potential for considerable river flooding aligns with significant exposure. For example, international manufacturing companies are often attracted to countries with emerging economies, meaning that events such as the 2011 Thailand floods have resulted in many multinational businesses with assets in these regions incurring large, unexpected losses. This contribution addresses and critically evaluates the hydraulic methods employed to develop a consistent global-scale set of river flood hazard maps, used to fill the knowledge gap outlined above. The basis of the modelling approach is an innovative, bespoke 1D/2D hydraulic model (RFlow) which has been used to model a global river network of over 5.3 million kilometres. Estimated flood peaks at each of these model nodes are determined using an empirically based rainfall-runoff approach linking design rainfall to design river flood magnitudes. The hydraulic model is used to determine extents and depths of floodplain inundation following river bank overflow. From this, deterministic flood hazard maps are calculated for several design return periods between 20 and 1,500 years. Firstly, we will discuss the rationale behind the hydraulic modelling methods and inputs chosen to produce a consistent global-scale river flood hazard map. This will highlight how a model designed to work with global datasets can be more favourable for hydraulic modelling at the global scale, and why using innovative techniques customised for broad-scale use is preferable to modifying existing hydraulic models. Similarly, the advantages and disadvantages of both 1D and 2D modelling will be explored and balanced against the time, computer and human resources available, particularly when using a Digital Surface Model at 30 m resolution. Finally, we will suggest some

  20. Building a risk-targeted regional seismic hazard model for South-East Asia

    NASA Astrophysics Data System (ADS)

    Woessner, J.; Nyst, M.; Seyhan, E.

    2015-12-01

    The last decade has tragically shown the social and economic vulnerability of countries in South-East Asia to earthquake hazard and risk. While many disaster mitigation programs and initiatives to improve societal earthquake resilience are under way with the focus on saving lives and livelihoods, the risk management sector is challenged to develop appropriate models to cope with the economic consequences and the impact on the insurance business. We present the source model and ground-motion model components suitable for a South-East Asia earthquake risk model covering Indonesia, Malaysia, the Philippines and the Indochina countries. The source model builds upon refined modelling approaches to characterize 1) seismic activity on crustal faults from geologic and geodetic data, 2) seismicity along the interface of subduction zones and within the slabs, and 3) earthquakes not occurring on mapped fault structures. We elaborate on building a self-consistent rate model for the hazardous crustal fault systems (e.g. the Sumatra fault zone, the Philippine fault zone) as well as the subduction zones, and showcase some characteristics and sensitivities due to existing uncertainties in the rate and hazard space using a well-selected suite of ground motion prediction equations. Finally, we analyze the source model by quantifying the contribution by source type (e.g., subduction zone, crustal fault) to typical risk metrics (e.g., return period losses, average annual loss) and reviewing their relative impact on various lines of business.

  1. Neotectonic deformation models for probabilistic seismic hazard: a study in the External Dinarides

    NASA Astrophysics Data System (ADS)

    Kastelic, Vanja; Carafa, Michele M. C.; Visini, Francesco

    2016-06-01

    In Europe, common input data types for seismic hazard evaluation include earthquake catalogues, seismic zonation models and ground motion models, all with well-constrained epistemic uncertainties. In contrast, neotectonic deformation models and their related uncertainties are rarely considered in earthquake forecasting and seismic hazard studies. In this study, for the first time in Europe, we developed a seismic hazard model based exclusively on active fault and geodynamic deformation models. We applied it to the External Dinarides, a slow-deforming fold-and-thrust belt in the Central Mediterranean. The two deformation models furnish consistent long-term earthquake rates above the Mw 4.7 threshold on a latitude/longitude grid with 0.2° spacing. Results suggest that the use of deformation models is a valid alternative to empirical-statistical approaches in earthquake forecasting in slow-deforming regions of Europe. Furthermore, we show that the variability of different deformation models has a comparable effect on the peak ground motion acceleration uncertainty as do the ground motion prediction equations.

  2. Rainfall Hazards Prevention based on a Local Model Forecasting System

    NASA Astrophysics Data System (ADS)

    Buendia, F.; Ojeda, B.; Buendia Moya, G.; Tarquis, A. M.; Andina, D.

    2009-04-01

    Rainfall is one of the most important events of human life and society. Some rainfall phenomena, such as floods or hailstorms, are a threat to agriculture, business and even life. Although meteorological observatories have methods to detect and warn about this kind of event, the prediction techniques based on synoptic measurements still need to be improved to achieve feasible medium-term forecasts. Any deviation in the measurements or in the model description makes the forecast diverge in time from the real evolution of the atmosphere. In this paper, advances in a local rainfall forecasting system based on time series estimation with General Regression Neural Networks are presented. The system is introduced, explaining the measurements, the methodology and the current state of the development. The aim of the work is to provide a complementary criterion to the current forecast systems, based on daily observation and tracking of the atmosphere over a given place.
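
    A General Regression Neural Network is essentially kernel-weighted regression over stored training patterns, which makes a one-step rainfall forecast straightforward to sketch. The snippet below is an illustrative implementation of that idea on a made-up series; the lag length, smoothing width and data are hypothetical and not taken from the system described above.

```python
import numpy as np

def grnn_predict(X_train, y_train, x_query, sigma=0.5):
    """General Regression Neural Network (kernel regression): the prediction
    is a distance-weighted average of the stored training targets."""
    d2 = np.sum((X_train - x_query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return np.dot(w, y_train) / (np.sum(w) + 1e-12)

def embed(series, lags):
    """Build (lagged inputs, next value) pairs from a univariate series."""
    X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
    y = np.array(series[lags:])
    return X, y

# Illustrative daily-rainfall series (mm); values are made up for the example
rain = [0.0, 2.1, 0.0, 5.4, 12.0, 3.3, 0.0, 0.0, 7.8, 1.2, 0.0, 4.5]
X, y = embed(rain, lags=3)
forecast = grnn_predict(X[:-1], y[:-1], X[-1], sigma=2.0)
print(f"one-step rainfall forecast: {forecast:.2f} mm")
```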

  3. How new fault data and models affect seismic hazard results? Examples from southeast Spain

    NASA Astrophysics Data System (ADS)

    Gaspar-Escribano, Jorge M.; Belén Benito, M.; Staller, Alejandra; Ruiz Barajas, Sandra; Quirós, Ligia E.

    2016-04-01

    In this work, we study the impact of different approaches to incorporating faults in a seismic hazard assessment analysis. Firstly, we consider two different methods to distribute the seismicity of the study area between faults and area sources, based on magnitude partitioning and on moment rate distribution. We use two recurrence models to characterize fault activity: the characteristic earthquake model and the modified Gutenberg-Richter exponential frequency-magnitude distribution. An application of the work is developed in the region of Murcia (southeastern Spain), due to the availability of fault data and because it is one of the areas of Spain with the highest seismic hazard. The parameters used to model fault sources are derived from paleoseismological and field studies obtained from the literature and online repositories. Additionally, for some significant faults only, geodetically derived slip rates are used to compute recurrence periods. The results of all the seismic hazard computations carried out using different models and data are represented in maps of expected peak ground accelerations for a return period of 475 years. Maps of coefficients of variation are presented to constrain the variability of the end results with respect to the different input models and values. Additionally, the different hazard maps obtained in this study are compared with the seismic hazard maps obtained in previous work for the entire Spanish territory and, more specifically, for the region of Murcia. This work is developed in the context of the MERISUR project (ref. CGL2013-40492-R), with funding from the Spanish Ministry of Economy and Competitiveness.

  4. Seismic hazard assessment for Myanmar: Earthquake model database, ground-motion scenarios, and probabilistic assessments

    NASA Astrophysics Data System (ADS)

    Chan, C. H.; Wang, Y.; Thant, M.; Maung Maung, P.; Sieh, K.

    2015-12-01

    We have constructed an earthquake and fault database, conducted a series of ground-shaking scenarios, and proposed seismic hazard maps for all of Myanmar and hazard curves for selected cities. Our earthquake database integrates the ISC, ISC-GEM and global ANSS Comprehensive Catalogues, and includes harmonized magnitude scales without duplicate events. Our active fault database includes active fault data from previous studies. Using the parameters from these updated databases (i.e., the Gutenberg-Richter relationship, slip rate, maximum magnitude and the elapsed time since the last events), we have determined the earthquake recurrence models of seismogenic sources. To evaluate ground-shaking behaviour in different tectonic regimes, we conducted a series of tests by matching the modelled ground motions to the felt intensities of earthquakes. Through the case of the 1975 Bagan earthquake, we determined that the scenario using the ground motion prediction equations (GMPEs) of Atkinson and Moore (2003) best fits the behaviour of the subduction events. Also, the 2011 Tarlay and 2012 Thabeikkyin events suggested that the GMPEs of Akkar and Cagnan (2010) fit crustal earthquakes best. We thus incorporated the best-fitting GMPEs and site conditions based on Vs30 (the average shear velocity down to 30 m depth), derived from analysis of topographic slope and microtremor array measurements, to assess seismic hazard. The hazard is highest in regions close to the Sagaing Fault and along the western coast of Myanmar, as seismic sources there have earthquakes occurring at short intervals and/or their last events occurred a long time ago. The hazard curves for the cities of Bago, Mandalay, Sagaing, Taungoo and Yangon show higher hazards for sites close to an active fault or with a low Vs30, e.g., downtown Sagaing and the Shwemawdaw Pagoda in Bago.

  5. Proportional counter radiation camera

    DOEpatents

    Borkowski, C.J.; Kopp, M.K.

    1974-01-15

    A gas-filled proportional counter camera that images photon emitting sources is described. A two-dimensional, position-sensitive proportional multiwire counter is provided as the detector. The counter consists of a high-voltage anode screen sandwiched between orthogonally disposed planar arrays of multiple parallel-strung, resistively coupled cathode wires. Two terminals from each of the cathode arrays are connected to separate timing circuitry to obtain separate X and Y coordinate signal values from pulse shape measurements to define the position of an event within the counter arrays which may be recorded by various means for data display. The counter is further provided with a linear drift field which effectively enlarges the active gas volume of the counter and constrains the recoil electrons produced from ionizing radiation entering the counter to drift perpendicularly toward the planar detection arrays. A collimator is interposed between a subject to be imaged and the counter to transmit only the radiation from the subject which has a perpendicular trajectory with respect to the planar cathode arrays of the detector. (Official Gazette)

  6. Mass movement hazard assessment model in the slope profile

    NASA Astrophysics Data System (ADS)

    Colangelo, A. C.

    2003-04-01

    The central aim of this work is to assess the spatial behaviour of critical depths for slope stability and the behaviour of their correlated variables at the soil-regolith transition along slope profiles over granite, migmatite and mica-schist parent materials in a humid tropical environment. To this end, we measured the shear strength of residual soils and regolith materials with a "Cohron Sheargraph" apparatus and evaluated the shear stress behaviour at the soil-regolith boundary along slope profiles in each of the referred lithologies. In the limit-equilibrium approach applied here, we adapt the infinite slope model to the analysis of the whole slope profile by means of a slice-wise solution, as in the Fellenius or Bishop methods. In our case, we assume that the potential rupture surface occurs at the soil-regolith or soil-rock boundary in the slope material. For each slice, the factor of safety was calculated considering the shear strength (cohesion and friction) of the material, the soil-regolith boundary depth, the soil moisture content, the slope gradient, the gradient of the top of the subsurface flow, and the apparent soil bulk density. The correlations showed the relative weight of the cohesion, internal friction angle, apparent bulk density and slope gradient variables in the evaluation of the critical depth behaviour for different simulated soil moisture levels at the slope profile scale. An important result concerns the central role of the soil bulk-density variable along the slope profile, both during soil evolution and at present, because of the intense clay production (mainly kaolinite and gibbsite in the B and C horizons) in the humid tropical environment. An increase in soil clay content produces a fall in the friction angle and bulk density of the material, especially when some montmorillonite or illite clay is present. We have also observed, at threshold conditions, that a slight change in the soil bulk-density value may drastically disturb the equilibrium of
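
    Because the analysis rests on the infinite slope model evaluated slice by slice, a compact illustration of the factor-of-safety calculation is given below. All parameter values are hypothetical placeholders, not the measured Sheargraph data from the study, and pore pressure is handled with a simple saturation-fraction term.

```python
import math

def infinite_slope_fs(c, phi_deg, gamma, depth, slope_deg, m=0.0, gamma_w=9.81):
    """Factor of safety of the infinite slope model for a potential rupture
    surface at the soil-regolith boundary.
    c         : effective cohesion (kPa)
    phi_deg   : effective friction angle (degrees)
    gamma     : soil unit weight (kN/m^3)
    depth     : depth of the soil-regolith boundary (m)
    slope_deg : slope gradient (degrees)
    m         : fraction of the soil column that is saturated (0..1)
    """
    beta = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    normal = gamma * depth * math.cos(beta) ** 2          # total normal stress
    pore = m * gamma_w * depth * math.cos(beta) ** 2      # pore-water pressure
    resisting = c + (normal - pore) * math.tan(phi)
    driving = gamma * depth * math.sin(beta) * math.cos(beta)
    return resisting / driving

# Illustrative slices along a slope profile (values are hypothetical)
for z in (0.5, 1.0, 2.0, 4.0):
    fs = infinite_slope_fs(c=8.0, phi_deg=28.0, gamma=17.0,
                           depth=z, slope_deg=32.0, m=0.5)
    print(f"depth {z:.1f} m -> FS = {fs:.2f}")
```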

  7. Advances in National Capabilities for Consequence Assessment Modeling of Airborne Hazards

    SciTech Connect

    Nasstrom, J; Sugiyama, G; Foster, K; Larsen, S; Kosovic, B; Eme, B; Walker, H; Goldstein, P; Lundquist, J; Pobanz, B; Fulton, J

    2007-11-26

    This paper describes ongoing advancement of airborne hazard modeling capabilities in support of multiple agencies through the National Atmospheric Release Advisory Center (NARAC) and the Interagency Atmospheric Modeling and Atmospheric Assessment Center (IMAAC). A suite of software tools developed by Lawrence Livermore National Laboratory (LLNL) and collaborating organizations includes simple stand-alone, local-scale plume modeling tools for end users' computers, Web- and Internet-based software to access advanced 3-D flow and atmospheric dispersion modeling tools and expert analysis from the national center at LLNL, and state-of-the-science high-resolution urban models and event reconstruction capabilities.

  8. An animal model to study toxicity of central nervous system therapy for childhood acute lymphoblastic leukemia: Effects on growth and craniofacial proportion

    SciTech Connect

    Schunior, A.; Zengel, A.E.; Mullenix, P.J.; Tarbell, N.J.; Howes, A.; Tassinari, M.S. )

    1990-10-15

    Many long term survivors of childhood acute lymphoblastic leukemia have short stature, as well as craniofacial and dental abnormalities, as side effects of central nervous system prophylactic therapy. An animal model is presented to assess these adverse effects on growth. Cranial irradiation (1000 cGy) with and without prednisolone (18 mg/kg i.p.) and methotrexate (2 mg/kg i.p.) was administered to 17- and 18-day-old Sprague-Dawley male and female rats. Animals were weighed 3 times/week. Final body weight and body length were measured at 150 days of age. Femur length and craniofacial dimensions were measured directly from the bones, using calipers. For all exposed groups there was a permanent suppression of weight gain with no catch-up growth or normal adolescent growth spurt. Body length was reduced for all treated groups, as were the ratios of body weight to body length and cranial length to body length. Animals subjected to cranial irradiation exhibited microcephaly, whereas those who received a combination of radiation and chemotherapy demonstrated altered craniofacial proportions in addition to microcephaly. Changes in growth patterns and skeletal proportions exhibited sexually dimorphic characteristics. The results indicate that cranial irradiation is a major factor in the growth failure in exposed rats, but chemotherapeutic agents contribute significantly to the outcome of growth and craniofacial dimensions.

  9. Masked Proportional Routing

    NASA Technical Reports Server (NTRS)

    Wolpert, David

    2004-01-01

    Masked proportional routing is an improved procedure for choosing links between adjacent nodes of a network for the purpose of transporting an entity from a source node ("A") to a destination node ("B"). The entity could be, for example, a physical object to be shipped, in which case the nodes would represent waypoints and the links would represent roads or other paths between waypoints. For another example, the entity could be a message or packet of data to be transmitted from A to B, in which case the nodes could be computer-controlled switching stations and the links could be communication channels between the stations. In yet another example, an entity could represent a workpiece while links and nodes could represent, respectively, manufacturing processes and stages in the progress of the workpiece towards a finished product. More generally, the nodes could represent states of an entity and the links could represent allowed transitions of the entity. The purpose of masked proportional routing and of related prior routing procedures is to schedule transitions of entities from their initial states ("A") to their final states ("B") in such a manner as to minimize a cost or to attain some other measure of optimality or efficiency. Masked proportional routing follows a distributed (in the sense of decentralized) approach to probabilistically or deterministically choosing the links. It was developed to satisfy a need for a routing procedure that 1. Does not always choose the same link(s), even for two instances characterized by identical estimated values of associated cost functions; 2. Enables a graceful transition from one set of links to another set of links as the circumstances of operation of the network change over time; 3. Is preferably amenable to separate optimization of different portions of the network; 4. Is preferably usable in a network in which some of the routing decisions are made by one or more other procedure(s); 5. Preferably does not cause an

  10. Application of decision tree model for the ground subsidence hazard mapping near abandoned underground coal mines.

    PubMed

    Lee, Saro; Park, Inhye

    2013-09-30

    Subsidence of ground caused by underground mines poses hazards to human life and property. This study analyzed the ground subsidence hazard using factors that can affect ground subsidence and a decision tree approach in a geographic information system (GIS). The study area was Taebaek, Gangwon-do, Korea, where many abandoned underground coal mines exist. Spatial data, topography, geology, and various ground-engineering data for the subsidence area were collected and compiled in a database for mapping ground-subsidence hazard (GSH). The subsidence area was randomly split 50/50 for training and validation of the models. A data-mining classification technique was applied to the GSH mapping, and decision trees were constructed using the chi-squared automatic interaction detector (CHAID) and the quick, unbiased, and efficient statistical tree (QUEST) algorithms. The frequency ratio model was also applied to the GSH mapping for comparison with a probabilistic model. The resulting GSH maps were validated using area-under-the-curve (AUC) analysis with the subsidence area data that had not been used for training the model. The highest accuracy was achieved by the decision tree model using the CHAID algorithm (94.01%), compared with the QUEST algorithm (90.37%) and the frequency ratio model (86.70%). These accuracies are higher than previously reported results for decision trees. Decision tree methods can therefore be used efficiently for GSH analysis and might be widely used for prediction of various spatial events. PMID:23702378
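
    The workflow of training a tree classifier on half of the mapped cells and checking the resulting hazard scores with AUC can be sketched as follows. The feature layers and labels here are synthetic stand-ins, and an ordinary CART tree from scikit-learn is used in place of the CHAID/QUEST implementations, so this is illustrative only.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the GIS factor layers (slope, depth to mine workings,
# groundwater level, ...) flattened to one row per map cell; labels mark
# known subsidence cells.  Real inputs would come from the compiled database.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=2000) > 1.0).astype(int)

# 50/50 split for training and validation, mirroring the study design
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.5, random_state=1)

# CART is used here as a stand-in for the CHAID and QUEST trees of the paper
tree = DecisionTreeClassifier(max_depth=4, random_state=1).fit(X_tr, y_tr)
hazard_index = tree.predict_proba(X_va)[:, 1]      # ground-subsidence hazard score
print("validation AUC:", roc_auc_score(y_va, hazard_index))
```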

  11. New Elements To Consider When Modeling the Hazards Associated with Botulinum Neurotoxin in Food

    PubMed Central

    Mura, Ivan; Malakar, Pradeep K.; Walshaw, John; Peck, Michael W.; Barker, G. C.

    2015-01-01

    Botulinum neurotoxins (BoNTs) produced by the anaerobic bacterium Clostridium botulinum are the most potent biological substances known to mankind. BoNTs are the agents responsible for botulism, a rare condition affecting the neuromuscular junction and causing a spectrum of diseases ranging from mild cranial nerve palsies to acute respiratory failure and death. BoNTs are a potential biowarfare threat and a public health hazard, since outbreaks of foodborne botulism are caused by the ingestion of preformed BoNTs in food. Currently, mathematical models relating to the hazards associated with C. botulinum, which are largely empirical, make major contributions to botulinum risk assessment. Evaluated using statistical techniques, these models simulate the response of the bacterium to environmental conditions. Though empirical models have been successfully incorporated into risk assessments to support food safety decision making, this process includes significant uncertainties so that relevant decision making is frequently conservative and inflexible. Progression involves encoding into the models cellular processes at a molecular level, especially the details of the genetic and molecular machinery. This addition drives the connection between biological mechanisms and botulism risk assessment and hazard management strategies. This review brings together elements currently described in the literature that will be useful in building quantitative models of C. botulinum neurotoxin production. Subsequently, it outlines how the established form of modeling could be extended to include these new elements. Ultimately, this can offer further contributions to risk assessments to support food safety decision making. PMID:26350137

  12. New Elements To Consider When Modeling the Hazards Associated with Botulinum Neurotoxin in Food.

    PubMed

    Ihekwaba, Adaoha E C; Mura, Ivan; Malakar, Pradeep K; Walshaw, John; Peck, Michael W; Barker, G C

    2016-01-01

    Botulinum neurotoxins (BoNTs) produced by the anaerobic bacterium Clostridium botulinum are the most potent biological substances known to mankind. BoNTs are the agents responsible for botulism, a rare condition affecting the neuromuscular junction and causing a spectrum of diseases ranging from mild cranial nerve palsies to acute respiratory failure and death. BoNTs are a potential biowarfare threat and a public health hazard, since outbreaks of foodborne botulism are caused by the ingestion of preformed BoNTs in food. Currently, mathematical models relating to the hazards associated with C. botulinum, which are largely empirical, make major contributions to botulinum risk assessment. Evaluated using statistical techniques, these models simulate the response of the bacterium to environmental conditions. Though empirical models have been successfully incorporated into risk assessments to support food safety decision making, this process includes significant uncertainties so that relevant decision making is frequently conservative and inflexible. Progression involves encoding into the models cellular processes at a molecular level, especially the details of the genetic and molecular machinery. This addition drives the connection between biological mechanisms and botulism risk assessment and hazard management strategies. This review brings together elements currently described in the literature that will be useful in building quantitative models of C. botulinum neurotoxin production. Subsequently, it outlines how the established form of modeling could be extended to include these new elements. Ultimately, this can offer further contributions to risk assessments to support food safety decision making. PMID:26350137

  13. A probabilistic tornado wind hazard model for the continental United States

    SciTech Connect

    Hossain, Q; Kimball, J; Mensing, R; Savy, J

    1999-04-19

    A probabilistic tornado wind hazard model for the continental United States (CONUS) is described. The model incorporates both aleatory (random) and epistemic uncertainties associated with quantifying the tornado wind hazard parameters. The temporal occurrence of tornadoes within the CONUS is assumed to be a Poisson process. A spatial distribution of tornado touchdown locations is developed empirically based on the observed historical events within the CONUS. The hazard model is an areal probability model that takes into consideration the size and orientation of the facility, the length and width of the tornado damage area (idealized as a rectangle and dependent on the tornado intensity scale), wind speed variation within the damage area, tornado intensity classification errors (i.e., errors in assigning a Fujita intensity scale based on surveyed damage), and the tornado path direction. Epistemic uncertainties in describing the distributions of the aleatory variables are accounted for by using more than one distribution model to describe aleatory variations. The epistemic uncertainties are based on inputs from a panel of experts. A computer program, TORNADO, has been developed incorporating this model; features of this program are also presented.
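
    The Poisson-process assumption reduces the strike hazard for a small facility to an annual rate times an exposure window. A toy version of that calculation is shown below; the regional tornado rate, region area and damage-path dimensions are invented for illustration, and the real model's intensity-dependent damage areas, path orientation and classification-error treatment are omitted.

```python
import math

def strike_rate(regional_rate, region_area_km2, mean_path_length_km,
                mean_path_width_km, facility_area_km2=0.0):
    """Mean annual rate of tornado strikes on a facility, treating touchdowns
    as a Poisson process distributed uniformly over the region and the damage
    area as a rectangle (path length x width)."""
    damage_area = mean_path_length_km * mean_path_width_km
    return regional_rate * (damage_area + facility_area_km2) / region_area_km2

def prob_at_least_one(rate_per_year, years):
    """Poisson probability of one or more strikes in the exposure window."""
    return 1.0 - math.exp(-rate_per_year * years)

# Hypothetical inputs: 20 tornadoes/yr over a 2e5 km^2 region,
# 10 km x 0.2 km mean damage footprint, 50-yr exposure
lam = strike_rate(20.0, 2.0e5, 10.0, 0.2)
print(f"annual strike rate: {lam:.2e}, P(>=1 in 50 yr): {prob_at_least_one(lam, 50):.3%}")
```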

  14. Load proportional safety brake

    NASA Technical Reports Server (NTRS)

    Cacciola, M. J.

    1979-01-01

    This brake is a self-energizing mechanical friction brake and is intended for use in a rotary drive system. It incorporates a torque sensor which cuts power to the power unit on any overload condition. The brake is capable of driving against an opposing load or driving, paying-out, an aiding load in either direction of rotation. The brake also acts as a no-back device when torque is applied to the output shaft. The advantages of using this type of device are: (1) low frictional drag when driving; (2) smooth paying-out of an aiding load with no runaway danger; (3) energy absorption proportional to load; (4) no-back activates within a few degrees of output shaft rotation and resets automatically; and (5) built-in overload protection.

  15. Gated strip proportional detector

    DOEpatents

    Morris, Christopher L.; Idzorek, George C.; Atencio, Leroy G.

    1987-01-01

    A gated strip proportional detector includes a gas-tight chamber which encloses a solid ground plane, a wire anode plane, a wire gating plane, and a multiconductor cathode plane. The anode plane amplifies the amount of charge deposited in the chamber by a factor of up to 10^6. The gating plane allows only charge within a narrow strip to reach the cathode. The cathode plane collects the charge allowed to pass through the gating plane on a set of conductors perpendicular to the open-gated region. By scanning the open-gated region across the chamber and reading out the charge collected on the cathode conductors after a suitable integration time for each location of the gate, a two-dimensional image of the intensity of the ionizing radiation incident on the detector can be made.

  16. Gated strip proportional detector

    DOEpatents

    Morris, C.L.; Idzorek, G.C.; Atencio, L.G.

    1985-02-19

    A gated strip proportional detector includes a gas-tight chamber which encloses a solid ground plane, a wire anode plane, a wire gating plane, and a multiconductor cathode plane. The anode plane amplifies the amount of charge deposited in the chamber by a factor of up to 10^6. The gating plane allows only charge within a narrow strip to reach the cathode. The cathode plane collects the charge allowed to pass through the gating plane on a set of conductors perpendicular to the open-gated region. By scanning the open-gated region across the chamber and reading out the charge collected on the cathode conductors after a suitable integration time for each location of the gate, a two-dimensional image of the intensity of the ionizing radiation incident on the detector can be made.

  17. Masked Proportional Routing

    NASA Technical Reports Server (NTRS)

    Wolpert, David H. (Inventor)

    2003-01-01

    Distributed approach for determining a path connecting adjacent network nodes, for probabilistically or deterministically transporting an entity with entity characteristic mu from a source node to a destination node. Each node i is directly connected to an arbitrary number J(mu) of nodes, labeled or numbered j = j1, j2, ..., jJ(mu). In a deterministic version, a J(mu)-component baseline proportion vector p(i;mu) is associated with node i. A J(mu)-component applied proportion vector p*(i;mu) is determined from p(i;mu) to preclude an entity visiting a node more than once. Third and fourth J(mu)-component vectors, with components iteratively determined by Target(i;n(mu);mu)j = alpha(mu)·Target(i;n(mu)-1;mu)j + beta(mu)·p*(i;mu)j and Actual(i;n(mu);mu)j = alpha(mu)·Actual(i;n(mu)-1;mu)j + beta(mu)·Sent(i;j'(mu);n(mu)-1;mu)j, are computed, where n(mu) is an entity sequence index and alpha(mu) and beta(mu) are selected numbers. In one embodiment, at each node i, the node j = j'(mu) with the largest vector component difference, Target(i;n(mu);mu)j' - Actual(i;n(mu);mu)j', is chosen for the next link for entity transport, except in special gap circumstances, where the same link is optionally used for transporting consecutively arriving entities. The network nodes may be computer-controlled routers that switch collections of packets, frames, cells or other information units. Alternatively, the nodes may be waypoints for movement of physical items in a network or for transformation of a physical item. The nodes may be states of an entity undergoing state transitions, where allowed transitions are specified by the network and/or the destination node.
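
    The selection rule described above (smooth each link's target share toward its baseline proportion, track what was actually sent, and route the next entity over the link with the largest target-minus-actual deficit) can be sketched as follows. This is a loose illustration with hypothetical parameter values; the patent's masking of already-visited nodes, gap handling and exact parameter semantics are not reproduced.

```python
def choose_link(target, actual, proportions, sent_last, alpha=0.9, beta=0.1):
    """One routing decision at a node: update the exponentially smoothed target
    and actual traffic per link, then pick the link lagging its target the most."""
    for j in range(len(target)):
        target[j] = alpha * target[j] + beta * proportions[j]
        actual[j] = alpha * actual[j] + beta * sent_last[j]
    deficits = [t - a for t, a in zip(target, actual)]
    return max(range(len(deficits)), key=deficits.__getitem__)

# Hypothetical node with three outgoing links and baseline proportions 50/30/20
p_star = [0.5, 0.3, 0.2]
target = [0.0, 0.0, 0.0]
actual = [0.0, 0.0, 0.0]
sent = [0.0, 0.0, 0.0]
for _ in range(10):
    j = choose_link(target, actual, p_star, sent)
    sent = [1.0 if k == j else 0.0 for k in range(3)]  # which link carried the entity
    print("routed via link", j)
```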

  18. Impact of fault models on probabilistic seismic hazard assessment: the example of the West Corinth rift.

    NASA Astrophysics Data System (ADS)

    Chartier, Thomas; Scotti, Oona; Boiselet, Aurelien; Lyon-Caen, Hélène

    2016-04-01

    Including faults in probabilistic seismic hazard assessment tends to increase the degree of uncertainty in the results due to the intrinsically uncertain nature of the fault data. This is especially the case in the low to moderate seismicity regions of Europe, where slow-slipping faults are difficult to characterize. In order to better understand the key parameters that control the uncertainty in the fault-related hazard computations, we propose to build an analytic tool that provides a clear link between the different components of the fault-related hazard computations and their impact on the results. This will allow identifying the important parameters that need to be better constrained in order to reduce the resulting uncertainty in hazard, and will also provide a more hazard-oriented strategy for collecting relevant fault parameters in the field. The tool is illustrated through the example of the West Corinth rift fault models. Recent work performed in the gulf has shown the complexity of the normal faulting system that is accommodating the extensional deformation of the rift. A logic-tree approach is proposed to account for this complexity and the multiplicity of scientifically defendable interpretations. At the nodes of the logic tree, different options are considered for each step of the fault-related seismic hazard computation. The first nodes represent the uncertainty in the geometries of the faults and their slip rates, which can derive from different data and methodologies. The subsequent node explores, for a given geometry/slip rate of the faults, different earthquake rupture scenarios that may occur in the complex network of faults. The idea is to allow the possibility of several fault segments breaking together in a single rupture scenario. To build these multiple-fault-segment scenarios, two approaches are considered: one based on simple rules (i.e. minimum distance between faults) and a second one that relies on physically

  19. Developments in EPA`s air dispersion modeling for hazardous/toxic releases

    SciTech Connect

    Touma, J.S.

    1995-12-31

    Title 3 of the 1990 Clean Air Act Amendments (CAAA) lists many chemicals as hazardous air pollutants and requires establishing regulations to prevent their accidental release and to minimize the consequences if any such releases occur. With the large number of potential release scenarios associated with these chemicals, there is a need for a systematic approach for applying air dispersion models to estimate impact. Because some chemicals may form dense gas clouds upon release, and dispersion models that can simulate these releases are complex, EPA has paid attention to the development of modeling tools and guidance on the use of models that can address these types of releases.

  20. The Pedestrian Evacuation Analyst: geographic information systems software for modeling hazard evacuation potential

    USGS Publications Warehouse

    Jones, Jeanne M.; Ng, Peter; Wood, Nathan J.

    2014-01-01

    Recent disasters such as the 2011 Tohoku, Japan, earthquake and tsunami; the 2013 Colorado floods; and the 2014 Oso, Washington, mudslide have raised awareness of catastrophic, sudden-onset hazards that arrive within minutes of the events that trigger them, such as local earthquakes or landslides. Due to the limited amount of time between generation and arrival of sudden-onset hazards, evacuations are typically self-initiated, on foot, and across the landscape (Wood and Schmidtlein, 2012). Although evacuation to naturally occurring high ground may be feasible in some vulnerable communities, evacuation modeling has demonstrated that other communities may require vertical-evacuation structures within a hazard zone, such as berms or buildings, if at-risk individuals are to survive some types of sudden-onset hazards (Wood and Schmidtlein, 2013). Researchers use both static least-cost-distance (LCD) and dynamic agent-based models to assess the pedestrian evacuation potential of vulnerable communities. Although both types of models help to understand the evacuation landscape, LCD models provide a more general overview that is independent of population distributions, which may be difficult to quantify given the dynamic spatial and temporal nature of populations (Wood and Schmidtlein, 2012). Recent LCD efforts related to local tsunami threats have focused on an anisotropic (directionally dependent) path distance modeling approach that incorporates travel directionality, multiple travel speed assumptions, and cost surfaces that reflect variations in slope and land cover (Wood and Schmidtlein, 2012, 2013). The Pedestrian Evacuation Analyst software implements this anisotropic path-distance approach for pedestrian evacuation from sudden-onset hazards, with a particular focus at this time on local tsunami threats. The model estimates evacuation potential based on elevation, direction of movement, land cover, and travel speed and creates a map showing travel times to safety (a

  1. Modeling of pyroclastic flows to predict pyroclastic hazard zone in Merapi volcano after 2010 eruption

    NASA Astrophysics Data System (ADS)

    Darmawan, Herlan; Wibowo, Totok; Suryanto, Wiwit; Setiawan, Muhammad

    2014-05-01

    The 2010 eruption of Merapi was a tremendous natural disaster in Indonesia. The pyroclastic materials moved ~15 km from the summit of Merapi and destroyed much of the monitoring equipment. This emergency situation forced the local government to evacuate more than 200,000 people living within 20 km of the summit of Merapi. The pyroclastic hazard map was not appropriate for this eruption scenario, because the map was based only on the delineation of pyroclastic deposits from previous eruptions. Here, we propose a method to predict the pyroclastic distribution of future eruptions based on a mathematical approach. We used the Titan2D software to reproduce the pyroclastic flows of the 2010 eruption and to predict the pyroclastic distribution after the 2010 eruption. The method consists of parameterization, validation, and prediction. At least 39 models were produced to obtain the best input parameters for the 2010 eruption. Validation was done by integrating a seismic refraction survey with remote sensing interpretation: the seismic refraction method provides the thickness of the pyroclastic deposit, while remote sensing interpretation gives its distribution. The best model shows good similarity with reality. Analysing the bed friction parameter and building an eruption scenario are the most essential parts of the prediction. The bed friction analysis was done by comparing the bed friction of the 2006 and 2010 eruptions, while the eruption scenario was built by studying the historical eruptions. The results show that three villages are located in the high pyroclastic hazard area, six villages in the moderate pyroclastic hazard area, and three villages in the low pyroclastic hazard area. Those three villages are KepuhHarjo, GlagahHarjo, and Balerante. Therefore, the local government should pay particular attention to those three villages in the next eruption. This information can help the local government to make an evacuation plan for the

  2. Data Model for Multi Hazard Risk Assessment Spatial Support Decision System

    NASA Astrophysics Data System (ADS)

    Andrejchenko, Vera; Bakker, Wim; van Westen, Cees

    2014-05-01

    The goal of the CHANGES Spatial Decision Support System is to support end-users in making decisions related to risk reduction measures for areas at risk from multiple hydro-meteorological hazards. The crucial parts in the design of the system are the user requirements, the data model, the data storage and management, and the relationships between the objects in the system. The implementation of the data model is carried out entirely with an open source database management system with a spatial extension. The web application is implemented using open source geospatial technologies, with PostGIS as the database, Python for scripting, and GeoServer and JavaScript libraries for visualization and the client-side user interface. The model can handle information from different study areas (currently, study areas from France, Romania, Italy and Poland are considered). Furthermore, the data model handles information about administrative units, projects accessible by different types of users, user-defined hazard types (floods, snow avalanches, debris flows, etc.), hazard intensity maps of different return periods, spatial probability maps, elements-at-risk maps (buildings, land parcels, linear features, etc.), and economic and population vulnerability information dependent on the hazard type and the type of element at risk, in the form of vulnerability curves. The system has an inbuilt database of vulnerability curves, but users can also add their own. Included in the model is the management of a combination of different scenarios (e.g. related to climate change, land use change or population change) and alternatives (possible risk-reduction measures), as well as data structures for saving the calculated economic or population loss or exposure per element at risk, aggregation of the loss and exposure using the administrative unit maps, and finally, producing the risk maps. The risk data can be used for cost-benefit analysis (CBA) and multi-criteria evaluation (SMCE). The

  3. Earthquake Rate Models for Evolving Induced Seismicity Hazard in the Central and Eastern US

    NASA Astrophysics Data System (ADS)

    Llenos, A. L.; Ellsworth, W. L.; Michael, A. J.

    2015-12-01

    Injection-induced earthquake rates can vary rapidly in space and time, which presents significant challenges to traditional probabilistic seismic hazard assessment methodologies that are based on a time-independent model of mainshock occurrence. To help society cope with rapidly evolving seismicity, the USGS is developing one-year hazard models for areas of induced seismicity in the central and eastern US to forecast the shaking due to all earthquakes, including aftershocks which are generally omitted from hazards assessments (Petersen et al., 2015). However, the spatial and temporal variability of the earthquake rates make them difficult to forecast even on time-scales as short as one year. An initial approach is to use the previous year's seismicity rate to forecast the next year's seismicity rate. However, in places such as northern Oklahoma the rates vary so rapidly over time that a simple linear extrapolation does not accurately forecast the future, even when the variability in the rates is modeled with simulations based on an Epidemic-Type Aftershock Sequence (ETAS) model (Ogata, JASA, 1988) to account for earthquake clustering. Instead of relying on a fixed time period for rate estimation, we explore another way to determine when the earthquake rate should be updated. This approach could also objectively identify new areas where the induced seismicity hazard model should be applied. We will estimate the background seismicity rate by optimizing a single set of ETAS aftershock triggering parameters across the most active induced seismicity zones -- Oklahoma, Guy-Greenbrier, the Raton Basin, and the Azle-Dallas-Fort Worth area -- with individual background rate parameters in each zone. The full seismicity rate, with uncertainties, can then be estimated using ETAS simulations and changes in rate can be detected by applying change point analysis in ETAS transformed time with methods already developed for Poisson processes.
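
    As a concrete illustration of the rate models involved, the conditional intensity of a purely temporal ETAS model (a constant background rate plus Omori-law triggering from every earlier event) can be evaluated as below. The catalogue and parameter values are invented for the example; they are not the optimized parameters described above.

```python
import math

def etas_intensity(t, events, mu, K, alpha, c, p, m_ref):
    """Conditional intensity of a temporal ETAS model: background rate mu plus
    Omori-law aftershock triggering from every earlier event (t_i, M_i)."""
    rate = mu
    for t_i, m_i in events:
        if t_i < t:
            rate += K * math.exp(alpha * (m_i - m_ref)) / (t - t_i + c) ** p
    return rate

# Hypothetical catalogue: (time in days, magnitude), with illustrative parameters
catalogue = [(1.0, 3.2), (3.5, 4.1), (3.6, 3.0), (10.2, 3.7)]
params = dict(mu=0.05, K=0.02, alpha=1.0, c=0.01, p=1.1, m_ref=2.5)
for t in (4.0, 11.0, 30.0):
    print(f"day {t:5.1f}: expected rate = {etas_intensity(t, catalogue, **params):.3f} events/day")
```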

  4. Hazard Ranking System and toxicological risk assessment models yield different results

    SciTech Connect

    Dehghani, T.; Sells, G. . CER-CLA Site Assessment Div.)

    1993-09-01

    A major goal of the Superfund Site Assessment program is identifying hazardous waste sites that pose unacceptable risks to human health and the environment. To accomplish this, EPA developed the Hazard Ranking System (HRS), a mathematical model used to assess the relative risks associated with actual or potential releases of hazardous wastes from a site. HRS is a scoring system based on factors grouped into three categories--likelihood of release, waste characteristics and targets. Values for the factor categories are multiplied, then normalized to 100 points to obtain a pathway score. Four pathways--groundwater, surface water, air migration and soil exposure--are evaluated and scored. The final HRS score is obtained by combining pathway scores using a root-mean-square method. HRS is intended to be a screening tool for measuring relative, rather than absolute, risk. The Superfund site assessment program usually requires at least two studies of a potential hazardous waste site before it is proposed for listing on the NPL. The initial study, or preliminary assessment (PA), is a limited-scope evaluation based on available historical information and data that can be gathered readily during a site reconnaissance.
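
    The final combination step described above lends itself to a one-line calculation: the four pathway scores are combined with a root-mean-square rule so that no single pathway dominates linearly. The pathway scores in this sketch are made-up numbers, not values from any real site evaluation.

```python
import math

def hrs_site_score(pathway_scores):
    """Combine the four HRS pathway scores (each already normalized to 0-100)
    with the root-mean-square rule described in the abstract."""
    return math.sqrt(sum(s ** 2 for s in pathway_scores) / len(pathway_scores))

# Hypothetical pathway scores: groundwater, surface water, air migration, soil exposure
print(round(hrs_site_score([63.2, 41.0, 10.5, 22.8]), 2))
```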

  5. Seismic hazard assessment of Sub-Saharan Africa using geodetic strain rate models

    NASA Astrophysics Data System (ADS)

    Poggi, Valerio; Pagani, Marco; Weatherill, Graeme; Garcia, Julio; Durrheim, Raymond J.; Mavonga Tuluka, Georges

    2016-04-01

    The East African Rift System (EARS) is the major active tectonic feature of the Sub-Saharan Africa (SSA) region. Although the seismicity level of such a divergent plate boundary can be described as moderate, several earthquakes have been reported in historical times causing a non-negligible level of damage, albeit mostly due to the high vulnerability of the local buildings and structures. Formulation and enforcement of national seismic codes is therefore an essential future risk mitigation strategy. Nonetheless, a reliable risk assessment cannot be done without the calibration of an updated seismic hazard model for the region. Unfortunately, the major issue in assessing seismic hazard in Sub-Saharan Africa is the lack of basic information needed to construct source and ground motion models. The historical earthquake record is largely incomplete, while the instrumental catalogue is complete down to sufficient magnitude only for a relatively short time span. In addition, the mapping of seismogenically active faults is still an ongoing program. Recent studies have identified major seismogenic lineaments, but there is a substantial lack of kinematic information for intermediate-to-small scale tectonic features, information that is essential for the proper calibration of earthquake recurrence models. To compensate for this lack of information, we experiment with the use of a strain rate model recently developed by Stamps et al. (2015) in the framework of an earthquake hazard and risk project along the EARS supported by USAID and jointly carried out by GEM and AfricaArray. We use the inferred geodetic strain rates to derive estimates of the total scalar moment release, subsequently used to constrain earthquake recurrence relationships for both area (as distributed seismicity) and fault source models. The rates obtained indirectly from strain rates and those more classically derived from the available seismic catalogues are then compared and combined into a unique mixed earthquake recurrence model
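
    The conversion from geodetic strain rate to scalar moment rate is typically a Kostrov-type summation over each grid cell, which is then redistributed over a magnitude-frequency relation. A minimal sketch of the first step is given below; the shear modulus, seismogenic thickness, cell size and strain rate are hypothetical round numbers, not values from the Stamps et al. (2015) model.

```python
def moment_rate_from_strain(strain_rate_per_yr, area_m2, thickness_m,
                            shear_modulus_pa=3.0e10, coupling=1.0):
    """Kostrov-style conversion of a geodetic strain rate (1/yr) into a scalar
    seismic moment rate (N*m/yr) for one grid cell."""
    return 2.0 * shear_modulus_pa * thickness_m * area_m2 * strain_rate_per_yr * coupling

def magnitude_to_moment(mw):
    """Hanks & Kanamori (1979): M0 [N*m] = 10 ** (1.5 * Mw + 9.05)."""
    return 10.0 ** (1.5 * mw + 9.05)

# Hypothetical rift cell: 3000 km^2 area, 30 km seismogenic thickness, 20 nanostrain/yr
mdot = moment_rate_from_strain(20e-9, area_m2=3.0e9, thickness_m=3.0e4)
print(f"moment rate: {mdot:.2e} N*m/yr "
      f"(~{mdot / magnitude_to_moment(6.0):.2f} Mw 6 equivalents per year)")
```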

  6. Accelerated Hazards Model based on Parametric Families Generalized with Bernstein Polynomials

    PubMed Central

    Chen, Yuhui; Hanson, Timothy; Zhang, Jiajia

    2015-01-01

    Summary A transformed Bernstein polynomial that is centered at standard parametric families, such as Weibull or log-logistic, is proposed for use in the accelerated hazards model. This class provides a convenient way towards creating a Bayesian non-parametric prior for smooth densities, blending the merits of parametric and non-parametric methods, that is amenable to standard estimation approaches. For example, optimization methods in SAS or R can yield the posterior mode and asymptotic covariance matrix. This novel nonparametric prior is employed in the accelerated hazards model, which is further generalized to time-dependent covariates. The proposed approach fares considerably better than previous approaches in simulations; data on the effectiveness of biodegradable carmustine polymers on recurrent malignant brain gliomas are investigated. PMID:24261450

  7. Using the Averaging-Based Factorization to Assess CyberShake Hazard Models

    NASA Astrophysics Data System (ADS)

    Wang, F.; Jordan, T. H.; Callaghan, S.; Graves, R. W.; Olsen, K. B.; Maechling, P. J.

    2013-12-01

    The CyberShake project of Southern California Earthquake Center (SCEC) combines stochastic models of finite-fault ruptures with 3D ground motion simulations to compute seismic hazards at low frequencies (< 0.5 Hz) in Southern California. The first CyberShake hazard model (Graves et al., 2011) was based on the Graves & Pitarka (2004) rupture model (GP-04) and the Kohler et al. (2004) community velocity model (CVM-S). We have recently extended the CyberShake calculations to include the Graves & Pitarka (2010) rupture model (GP-10), which substantially increases the rupture complexity relative to GP-04, and the Shaw et al. (2011) community velocity model (CVM-H), which features different sedimentary basin structures than CVM-S. Here we apply the averaging-based factorization (ABF) technique of Wang & Jordan (2013) to compare CyberShake models and assess their consistency with the hazards predicted by the Next Generation Attenuation (NGA) models (Power et al., 2008). ABF uses a hierarchical averaging scheme to separate the shaking intensities for large ensembles of earthquakes into relative (dimensionless) excitation fields representing site, path, directivity, and source-complexity effects, and it provides quantitative, map-based comparisons between models with completely different formulations. The CyberShake directivity effects are generally larger than predicted by the Spudich & Chiou (2008) NGA directivity factor, but those calculated from the GP-10 sources are smaller than those of GP-04, owing to the greater incoherence of the wavefields from the more complex rupture models. Substituting GP-10 for GP-04 reduces the CyberShake-NGA directivity-effect discrepancy by a factor of two, from +36% to +18%. The CyberShake basin effects are generally larger than those from the three NGA models that provide basin-effect factors. However, the basin excitations calculated from CVM-H are smaller than from CVM-S, and they show a stronger frequency dependence, primarily because

  8. Forward induced seismic hazard assessment: application to a synthetic seismicity catalogue from hydraulic stimulation modelling

    NASA Astrophysics Data System (ADS)

    Hakimhashemi, Amir Hossein; Yoon, Jeoung Seok; Heidbach, Oliver; Zang, Arno; Grünthal, Gottfried

    2014-07-01

    The Mw 3.2 seismic event induced in 2006 by fluid injection at the Basel geothermal site in Switzerland was the starting point for an ongoing discussion in Europe on the potential risk of hydraulic stimulation in general. In particular, the further development of mitigation strategies for induced seismic events of economic concern became a hot topic in geosciences and geoengineering. Here, we present a workflow to assess the hazard of induced seismicity in terms of the occurrence rate of induced seismic events. The workflow is called Forward Induced Seismic Hazard Assessment (FISHA) as it combines the results of forward hydromechanical-numerical models with methods of time-dependent probabilistic seismic hazard assessment. To exemplify FISHA, we use simulations of four different fluid injection types with various injection parameters, i.e. injection rate, duration and style of injection. The hydromechanical-numerical model applied in this study represents a geothermal reservoir with preexisting fractures, in which a routine for viscous fluid flow in porous media is implemented; from this, flow- and pressure-driven failures of the rock matrix and of preexisting fractures are simulated, and the corresponding seismic moment magnitudes are computed. The resulting synthetic catalogues of induced seismicity, including event location, occurrence time and magnitude, are used to calibrate the magnitude of completeness Mc and the parameters a and b of the frequency-magnitude relation. These are used to estimate the time-dependent occurrence rate of induced seismic events for each fluid injection scenario. In contrast to other mitigation strategies that rely on real-time data or already obtained catalogues, we can perform various synthetic experiments with the same initial conditions. Thus, the advantage of FISHA is that it can quantify hazard from numerical experiments and recommend a priori a stimulation type that lowers the occurrence rate of induced seismic events. The FISHA workflow is rather
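
    The calibration step (estimating b from a synthetic catalogue and turning it into an occurrence rate above a magnitude of interest) can be sketched with the standard Aki maximum-likelihood estimator. The toy catalogue, completeness magnitude and injection duration below are invented for illustration and do not come from the simulations described above.

```python
import math

def ml_b_value(mags, m_c, bin_width=0.1):
    """Aki (1965) maximum-likelihood b-value for events at or above completeness m_c."""
    m = [x for x in mags if x >= m_c]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - (m_c - bin_width / 2.0))

def annual_rate_above(mags, m_c, duration_yr, b, m_target):
    """Occurrence rate of events >= m_target from the Gutenberg-Richter relation
    anchored to the observed rate at the completeness magnitude."""
    n_c = sum(1 for x in mags if x >= m_c)
    a = math.log10(n_c / duration_yr) + b * m_c
    return 10.0 ** (a - b * m_target)

# Hypothetical synthetic catalogue from one injection scenario (moment magnitudes)
synthetic = [0.6, 0.8, 0.9, 1.1, 1.2, 1.3, 1.5, 1.7, 2.0, 2.4, 0.7, 1.0, 1.4, 1.8]
b = ml_b_value(synthetic, m_c=0.6)
print(f"b = {b:.2f}, rate of M>=2 events: "
      f"{annual_rate_above(synthetic, 0.6, duration_yr=0.1, b=b, m_target=2.0):.2f}/yr")
```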

  9. A new approach for deriving Flood hazard maps from SAR data and global hydrodynamic models

    NASA Astrophysics Data System (ADS)

    Matgen, P.; Hostache, R.; Chini, M.; Giustarini, L.; Pappenberger, F.; Bally, P.

    2014-12-01

    With flood consequences likely to amplify because of the growing population and the ongoing accumulation of assets in flood-prone areas, global flood hazard and risk maps are needed to improve flood preparedness at large scale. At the same time, with the rapidly growing archives of SAR images of floods, there is high potential for making use of these images for global and regional flood management. In this framework, an original method that integrates global flood inundation modeling and microwave remote sensing is presented. It takes advantage of the combination of the time and space continuity of a global inundation model with the high spatial resolution of satellite observations. The availability of model simulations over a long time period offers opportunities for estimating flood non-exceedance probabilities in a robust way. These probabilities can be attributed to historical satellite observations. Time series of SAR-derived flood extent maps and associated non-exceedance probabilities can then be combined to generate flood hazard maps with a spatial resolution equal to that of the satellite images, which is usually higher than that of a global inundation model. In principle, this can be done for any area of interest in the world, provided that a sufficient number of relevant remote sensing images are available. As a test case we applied the method to the Severn River (UK) and the Zambezi River (Mozambique), where large archives of Envisat flood images can be exploited. The global ECMWF flood inundation model is considered for computing the statistics of extreme events. A comparison with flood hazard maps estimated with in situ measured discharge is carried out. The first results confirm the potential of the method. However, further developments on two aspects are required to improve the quality of the hazard map and to ensure the acceptability of the product by potential end-user organizations. On the one hand, it is of paramount importance to
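
    Once each SAR-derived flood extent has a non-exceedance probability attached, the per-pixel hazard map reduces to taking, for every pixel, the smallest probability at which that pixel has been observed flooded. The sketch below illustrates that combination on a toy raster stack; the masks and probabilities are random placeholders rather than Envisat-derived data.

```python
import numpy as np

def flood_hazard_map(flood_masks, non_exceedance_probs):
    """Per-pixel flood hazard from a stack of SAR-derived flood extents.
    flood_masks          : (n_images, ny, nx) boolean array, True = flooded
    non_exceedance_probs : (n_images,) probability attributed to each image
    Returns the smallest non-exceedance probability at which each pixel was
    observed flooded (NaN where never flooded); lower = more frequently flooded."""
    masks = np.asarray(flood_masks, dtype=bool)
    probs = np.asarray(non_exceedance_probs, dtype=float)
    stacked = np.where(masks, probs[:, None, None], np.inf)
    hazard = stacked.min(axis=0)
    return np.where(np.isinf(hazard), np.nan, hazard)

# Toy example: three 4x4 flood extent maps with attributed probabilities
rng = np.random.default_rng(3)
masks = rng.random((3, 4, 4)) > np.array([0.3, 0.5, 0.7])[:, None, None]
print(flood_hazard_map(masks, [0.50, 0.90, 0.99]))
```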

  10. Interpretation of laser/multi-sensor data for short range terrain modeling and hazard detection

    NASA Technical Reports Server (NTRS)

    Messing, B. S.

    1980-01-01

    A terrain modeling algorithm that would reconstruct the sensed ground images formed by the triangulation scheme, and classify as unsafe any terrain feature that would pose a hazard to a roving vehicle is described. This modeler greatly reduces quantization errors inherent in a laser/sensing system through the use of a thinning algorithm. Dual filters are employed to separate terrain steps from the general landscape, simplifying the analysis of terrain features. A crosspath analysis is utilized to detect and avoid obstacles that would adversely affect the roll of the vehicle. Computer simulations of the rover on various terrains examine the performance of the modeler.

  11. Benchmarking Computational Fluid Dynamics Models for Application to Lava Flow Simulations and Hazard Assessment

    NASA Astrophysics Data System (ADS)

    Dietterich, H. R.; Lev, E.; Chen, J.; Cashman, K. V.; Honor, C.

    2015-12-01

    Recent eruptions in Hawai'i, Iceland, and Cape Verde highlight the need for improved lava flow models for forecasting and hazard assessment. Existing models used for lava flow simulation range in assumptions, complexity, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess the capabilities of existing models and test the development of new codes, we conduct a benchmarking study of computational fluid dynamics models for lava flows, including VolcFlow, OpenFOAM, Flow3D, and COMSOL. Using new benchmark scenarios defined in Cordonnier et al. (2015) as a guide, we model Newtonian, Herschel-Bulkley and cooling flows over inclined planes, obstacles, and digital elevation models with a wide range of source conditions. Results are compared to analytical theory, analogue and molten basalt experiments, and measurements from natural lava flows. Our study highlights the strengths and weaknesses of each code, including accuracy and computational costs, and provides insights regarding code selection. We apply the best-fit codes to simulate the lava flows in Harrat Rahat, a predominantly mafic volcanic field in Saudi Arabia. Input parameters are assembled from rheology and volume measurements of past flows using geochemistry, crystallinity, and present-day lidar and photogrammetric digital elevation models. With these data, we use our verified models to reconstruct historic and prehistoric events, in order to assess the hazards posed by lava flows for Harrat Rahat.

  12. Fuzzy multi-objective chance-constrained programming model for hazardous materials transportation

    NASA Astrophysics Data System (ADS)

    Du, Jiaoman; Yu, Lean; Li, Xiang

    2016-04-01

    Hazardous materials transportation is an important and hot issue of public safety. Based on the shortest path model, this paper presents a fuzzy multi-objective programming model that minimizes the transportation risk to life, travel time and fuel consumption. First, we present the risk model, travel time model and fuel consumption model. Furthermore, we formulate a chance-constrained programming model within the framework of credibility theory, in which the lengths of arcs in the transportation network are assumed to be fuzzy variables. A hybrid intelligent algorithm integrating fuzzy simulation and genetic algorithm is designed for finding a satisfactory solution. Finally, some numerical examples are given to demonstrate the efficiency of the proposed model and algorithm.
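
    Underlying the fuzzy chance-constrained formulation is a multi-attribute shortest-path problem. A deliberately simplified, deterministic surrogate is sketched below, collapsing risk, time and fuel into a single weighted cost and solving it with Dijkstra's algorithm; the credibility-based chance constraints, fuzzy arc lengths and the hybrid fuzzy-simulation/genetic algorithm of the paper are not reproduced, and the network and weights are invented.

```python
import heapq

def weighted_dijkstra(graph, source, target, weights):
    """Single-objective surrogate of the multi-objective model: each arc carries
    (risk, time, fuel) and is collapsed to one cost with fixed weights."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, attrs in graph.get(u, {}).items():
            cost = sum(w * a for w, a in zip(weights, attrs))
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[target]

# Toy network: arcs carry (population risk, travel time, fuel use)
graph = {
    "depot": {"A": (5, 10, 4), "B": (1, 25, 7)},
    "A": {"plant": (8, 12, 5)},
    "B": {"plant": (2, 15, 6)},
}
print(weighted_dijkstra(graph, "depot", "plant", weights=(0.6, 0.2, 0.2)))
```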

  13. A new approach to hazardous materials transportation risk analysis: decision modeling to identify critical variables.

    PubMed

    Clark, Renee M; Besterfield-Sacre, Mary E

    2009-03-01

    We take a novel approach to analyzing hazardous materials transportation risk in this research. Previous studies analyzed this risk from an operations research (OR) or quantitative risk assessment (QRA) perspective by minimizing or calculating risk along a transport route. Further, even though the majority of incidents occur when containers are unloaded, the research has not focused on transportation-related activities, including container loading and unloading. In this work, we developed a decision model of a hazardous materials release during unloading using actual data and an exploratory data modeling approach. Previous studies have had a theoretical perspective in terms of identifying and advancing the key variables related to this risk, and there has not been a focus on probability and statistics-based approaches for doing this. Our decision model empirically identifies the critical variables using an exploratory methodology for a large, highly categorical database involving latent class analysis (LCA), loglinear modeling, and Bayesian networking. Our model identified the most influential variables and countermeasures for two consequences of a hazmat incident, dollar loss and release quantity, and is one of the first models to do this. The most influential variables were found to be related to the failure of the container. In addition to analyzing hazmat risk, our methodology can be used to develop data-driven models for strategic decision making in other domains involving risk. PMID:19087232
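    As a hedged stand-in for the exploratory variable screening described above (the paper itself uses latent class analysis, loglinear modeling, and Bayesian networks), one can rank categorical predictors of release quantity by mutual information. The variables and data below are synthetic placeholders, not the actual hazmat incident database.

        import numpy as np
        from sklearn.metrics import mutual_info_score

        rng = np.random.default_rng(1)
        n = 2000
        # Hypothetical categorical incident records (not the real database).
        container_failure = rng.integers(0, 2, n)          # 0 = no, 1 = yes
        packaging_type = rng.integers(0, 4, n)
        filling_activity = rng.integers(0, 3, n)
        # Release quantity category depends mostly on container failure here.
        release_qty = np.where(container_failure == 1,
                               rng.integers(1, 3, n), rng.integers(0, 2, n))

        candidates = {"container_failure": container_failure,
                      "packaging_type": packaging_type,
                      "filling_activity": filling_activity}
        ranking = sorted(((mutual_info_score(v, release_qty), k)
                          for k, v in candidates.items()), reverse=True)
        for score, name in ranking:
            print(f"{name:18s} MI = {score:.3f}")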

  14. Perspectives on open access high resolution digital elevation models to produce global flood hazard layers

    NASA Astrophysics Data System (ADS)

    Sampson, Christopher; Smith, Andrew; Bates, Paul; Neal, Jeffrey; Trigg, Mark

    2015-12-01

    Global flood hazard models have recently become a reality thanks to the release of open access global digital elevation models, the development of simplified and highly efficient flow algorithms, and the steady increase in computational power. In this commentary we argue that although the availability of open access global terrain data has been critical in enabling the development of such models, the relatively poor resolution and precision of these data now significantly limit our ability to estimate flood inundation and risk for the majority of the planet's surface. The difficulty of deriving an accurate 'bare-earth' terrain model due to the interaction of vegetation and urban structures with the satellite-based remote sensors means that global terrain data are often poorest in the areas where people and property (and thus vulnerability) are most concentrated. Furthermore, the current generation of open access global terrain models is over a decade old, and many large floodplains, particularly those in developing countries, have undergone significant change in this time. There is therefore a pressing need for a new generation of high resolution and high vertical precision open access global digital elevation models to allow significantly improved global flood hazard models to be developed.

  15. CyberShake: A Physics-Based Seismic Hazard Model for Southern California

    USGS Publications Warehouse

    Graves, R.; Jordan, T.H.; Callaghan, S.; Deelman, E.; Field, E.; Juve, G.; Kesselman, C.; Maechling, P.; Mehta, G.; Milner, K.; Okaya, D.; Small, P.; Vahi, K.

    2011-01-01

    CyberShake, as part of the Southern California Earthquake Center's (SCEC) Community Modeling Environment, is developing a methodology that explicitly incorporates deterministic source and wave propagation effects within seismic hazard calculations through the use of physics-based 3D ground motion simulations. To calculate a waveform-based seismic hazard estimate for a site of interest, we begin with Uniform California Earthquake Rupture Forecast, Version 2.0 (UCERF2.0) and identify all ruptures within 200 km of the site of interest. We convert the UCERF2.0 rupture definition into multiple rupture variations with differing hypocenter locations and slip distributions, resulting in about 415,000 rupture variations per site. Strain Green Tensors are calculated for the site of interest using the SCEC Community Velocity Model, Version 4 (CVM4), and then, using reciprocity, we calculate synthetic seismograms for each rupture variation. Peak intensity measures are then extracted from these synthetics and combined with the original rupture probabilities to produce probabilistic seismic hazard curves for the site. Being explicitly site-based, CyberShake directly samples the ground motion variability at that site over many earthquake cycles (i.e., rupture scenarios) and alleviates the need for the ergodic assumption that is implicitly included in traditional empirically based calculations. Thus far, we have simulated ruptures at over 200 sites in the Los Angeles region for ground shaking periods of 2 s and longer, providing the basis for the first generation CyberShake hazard maps. Our results indicate that the combination of rupture directivity and basin response effects can lead to an increase in the hazard level for some sites, relative to that given by a conventional Ground Motion Prediction Equation (GMPE). Additionally, and perhaps more importantly, we find that the physics-based hazard results are much more sensitive to the assumed magnitude-area relations and
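    The core hazard-curve step can be sketched as follows: peak intensity measures of rupture variations are combined with rupture rates into annual exceedance rates and then into 50-year exceedance probabilities. All numbers below are synthetic placeholders, not CyberShake outputs.

        import numpy as np

        rng = np.random.default_rng(2)
        # Hypothetical inputs: annual rate of each rupture and the peak spectral
        # accelerations (g) of its simulated variations at one site.
        rates = rng.uniform(1e-5, 1e-3, size=200)                # per rupture
        peak_sa = [rng.lognormal(mean=np.log(0.1), sigma=0.6, size=50)
                   for _ in rates]                               # per variation

        levels = np.logspace(-2, 0, 30)                          # SA levels (g)
        annual_rate = np.zeros_like(levels)
        for rate, sa in zip(rates, peak_sa):
            # Variations of a rupture are treated as equally likely.
            annual_rate += rate * np.mean(sa[:, None] > levels[None, :], axis=0)

        # Poisson conversion to probability of exceedance in 50 years.
        poe_50yr = 1.0 - np.exp(-annual_rate * 50.0)
        for lv, p in zip(levels[::6], poe_50yr[::6]):
            print(f"SA {lv:6.3f} g : P(exceedance in 50 yr) = {p:.4f}")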

  16. Detailed Flood Modeling and Hazard Assessment from Storm Tides, Rainfall and Sea Level Rise

    NASA Astrophysics Data System (ADS)

    Orton, P. M.; Hall, T. M.; Georgas, N.; Conticello, F.; Cioffi, F.; Lall, U.; Vinogradov, S. V.; Blumberg, A. F.

    2014-12-01

    A flood hazard assessment has been conducted for the Hudson River from New York City to Troy at the head of tide, using a three-dimensional hydrodynamic model and merging hydrologic inputs and storm tides from tropical and extra-tropical cyclones, as well as spring freshet floods. Our recent work showed that neglecting freshwater flows leads to underestimation of peak water levels at up-river sites and neglecting stratification (typical with two-dimensional modeling) leads to underestimation all along the Hudson. The hazard assessment framework utilizes a representative climatology of over 1000 synthetic tropical cyclones (TCs) derived from a statistical-stochastic TC model, and historical extra-tropical cyclones and freshets from 1950 to present. Hydrodynamic modeling is applied with seasonal variations in mean sea level and ocean and estuary stratification. The model is the Stevens ECOM model and is separately used for operational ocean forecasts on the NYHOPS domain (http://stevens.edu/NYHOPS). For the synthetic TCs, an Artificial Neural Network/Bayesian multivariate approach is used for rainfall-driven freshwater inputs to the Hudson, translating the TC attributes (e.g. track, SST, wind speed) directly into tributary stream flows (see separate presentation by Cioffi for details). Rainfall intensity has been rising in recent decades in this region, and here we will also examine the sensitivity of Hudson flooding to future climate warming-driven increases in storm precipitation. The hazard assessment is being repeated for several values of sea level, as projected for future decades by the New York City Panel on Climate Change. Recent studies have given widely varying estimates of the present-day 100-year flood at New York City, from 2.0 m to 3.5 m, and special emphasis will be placed on quantifying our study's uncertainty.
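    One common way to turn such a synthetic event set into a design value is to fit annual maxima with a generalized extreme value distribution. The sketch below estimates a 100-year water level from hypothetical annual maxima; the numbers are synthetic, not the study's simulations.

        from scipy.stats import genextreme

        # Hypothetical annual-maximum peak water levels (m) from a synthetic storm
        # set (generated here purely for illustration).
        annual_maxima = genextreme.rvs(c=-0.1, loc=1.6, scale=0.35, size=500,
                                       random_state=3)

        shape, loc, scale = genextreme.fit(annual_maxima)
        # Return level with annual exceedance probability 1/100 ("100-year flood").
        level_100yr = genextreme.isf(1.0 / 100.0, shape, loc=loc, scale=scale)
        print(f"estimated 100-year peak water level: {level_100yr:.2f} m")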

  17. Seismic hazard assessment in central Ionian Islands area (Greece) based on stress release models

    NASA Astrophysics Data System (ADS)

    Votsi, Irene; Tsaklidis, George; Papadimitriou, Eleftheria

    2011-08-01

    The long-term probabilistic seismic hazard of central Ionian Islands (Greece) is studied through the application of stress release models. In order to identify statistically distinct regions, the study area is divided into two subareas, namely Kefalonia and Lefkada, on the basis of seismotectonic properties. Previous results evidenced the existence of stress transfer and interaction between the Kefalonia and Lefkada fault segments. For the consideration of stress transfer and interaction, the linked stress release model is applied. A new model is proposed, where the hazard rate function in terms of X(t) has the form of the Weibull distribution. The fitted models are evaluated through residual analysis and the best of them is selected through the Akaike information criterion. Based on AIC, the results demonstrate that the simple stress release model fits the Ionian data better than the non-homogeneous Poisson and the Weibull models. Finally, the thinning simulation method is applied in order to produce simulated data and proceed to forecasting.
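    A minimal sketch of a simple stress release model, simulated by the thinning method mentioned above, is given below. The exponential hazard form and parameter values are illustrative and do not reproduce the Weibull-form model or the fitted Ionian parameters.

        import numpy as np

        def simulate_stress_release(a=-2.0, b=0.8, rho=0.05, t_end=500.0,
                                    lookahead=5.0, seed=4):
            """Simulate a simple stress release process by thinning.

            Conditional intensity: lambda(t) = exp(a + b * X(t)), where
            X(t) = rho * t - (total stress released by past events).
            Parameters are illustrative, not fitted to the Ionian data.
            """
            rng = np.random.default_rng(seed)
            events, released, t = [], 0.0, 0.0

            def lam(s):
                return np.exp(a + b * (rho * s - released))

            while t < t_end:
                lam_max = lam(t + lookahead)       # intensity rises between events
                w = rng.exponential(1.0 / lam_max)
                if w > lookahead:                  # no candidate in this window
                    t += lookahead
                    continue
                t += w
                if t >= t_end:
                    break
                if rng.uniform() <= lam(t) / lam_max:
                    events.append(t)
                    released += rng.exponential(1.0)   # stress drop of this event
            return np.array(events)

        times = simulate_stress_release()
        print(f"{times.size} simulated events in [0, 500]")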

  18. Seismic source models for probabilistic hazard analysis of Georgia (Southern Caucasus)

    NASA Astrophysics Data System (ADS)

    Javakhishvili, Z.; Godoladze, T.; Gamkrelidze, E.; Sokhadze, G.

    2014-12-01

    A seismic source model is one of the main components of probabilistic seismic-hazard analysis. Active faults and tectonics of Georgia (Southern Caucasus) have been investigated in numerous scientific studies. The Caucasus consists of different geological structures with complex interactions. The major structures trend WNW-ESE, and focal mechanisms indicate primarily thrust faults striking parallel to the mountains. It is part of the Alpine-Himalayan collision belt and is well known for its high seismicity. Although the geodynamic activity of the region, caused by the convergence of the Arabian and Eurasian plates at a rate of several cm/year, is well known, different tectonic models have been proposed to explain the seismic process in the region. The most recent model of seismic sources for the Caucasus derives from seismotectonic studies performed in Georgia in the framework of different international projects. We have analyzed previous studies and recent investigations on the basis of new seismic data (spatial distribution, moment tensor solutions, etc.), GPS measurements, and other data. As a result, a database of seismic source models was compiled. Seismic sources are modeled as lines representing the surface projection of active faults or as wide areas (source zones), where earthquakes can occur randomly. Each structure or zone was quantified on the basis of different parameters. Recent experience in harmonizing cross-border structures was used. As a result, a new seismic source model of Georgia (Southern Caucasus) for hazard analysis was created.

  19. Nowcast model for hazardous material spill prevention and response, San Francisco Bay, California

    USGS Publications Warehouse

    Cheng, Ralph T.; Wilmot, Wayne L.; Galt, Jerry A.

    1997-01-01

    The National Oceanic and Atmospheric Administration (NOAA) installed the Physical Oceanographic Real-time System (PORTS) in San Francisco Bay, California, to provide real-time observations of tides, tidal currents, and meteorological conditions to, among other purposes, guide hazardous material spill prevention and response. Integrated with nowcast modeling techniques and with the dissemination of real-time data and nowcasting results through the Internet, the emerging technologies used in PORTS for real-time data collection form a nowcast modeling system. Users can download tides and tidal current distributions in San Francisco Bay for their specific applications and/or for further analysis.

  20. NASA/MSFC multilayer diffusion models and computer program for operational prediction of toxic fuel hazards

    NASA Technical Reports Server (NTRS)

    Dumbauld, R. K.; Bjorklund, J. R.; Bowers, J. F.

    1973-01-01

    The NASA/MSFC multilayer diffusion models are described, which are used in applying meteorological information to the estimation of toxic fuel hazards resulting from the launch of rocket vehicles and from accidental cold spills and leaks of toxic fuels. Background information, definitions of terms, and a description of the multilayer concept are presented, along with formulas for determining the buoyant rise of hot exhaust clouds or plumes from conflagrations and descriptions of the multilayer diffusion models. A brief description of the computer program is given, and sample problems and their solutions are included. Derivations of the cloud rise formulas, user instructions, and computer program output lists are also included.
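    For orientation, a generic single-layer Gaussian plume ground-level concentration is sketched below. This is not the NASA/MSFC multilayer formulation, and the source strength, cloud rise, and dispersion coefficients are placeholders.

        import numpy as np

        def gaussian_plume_ground(y, Q, u, H, sigma_y, sigma_z):
            """Ground-level (z = 0) concentration from a continuous point source of
            strength Q (g/s) at effective release height H (m) in wind speed u (m/s),
            with full ground reflection; sigma_y, sigma_z (m) are the plume spreads
            at the downwind distance of interest. Generic single-layer plume, not
            the NASA/MSFC multilayer model."""
            lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
            vertical = 2.0 * np.exp(-H**2 / (2.0 * sigma_z**2))   # reflection at z = 0
            return Q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

        # Illustrative numbers only: 1 kg/s release, 5 m/s wind, 300 m effective
        # cloud rise, plume spreads taken roughly 2 km downwind.
        c = gaussian_plume_ground(y=0.0, Q=1000.0, u=5.0, H=300.0,
                                  sigma_y=160.0, sigma_z=80.0)
        print(f"centerline ground-level concentration: {c * 1e3:.3f} mg/m^3")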

  1. Development of hydrogeological modelling approaches for assessment of consequences of hazardous accidents at nuclear power plants

    SciTech Connect

    Rumynin, V.G.; Mironenko, V.A.; Konosavsky, P.K.; Pereverzeva, S.A.

    1994-07-01

    This paper introduces some modeling approaches for predicting the influence of hazardous accidents at nuclear reactors on groundwater quality. Possible pathways for radioactive releases from nuclear power plants were considered to conceptualize boundary conditions for solving the subsurface radionuclide transport problems. Some approaches to incorporating physical and chemical interactions into transport simulators have been developed. The hydrogeological forecasts were based on numerical and semi-analytical scale-dependent models. They have been applied to assess the possible impact of nuclear power plants designed in Russia on groundwater reservoirs.

  2. Multiple Landslide-Hazard Scenarios Modeled for the Oakland-Berkeley Area, Northern California

    USGS Publications Warehouse

    Pike, Richard J.; Graymer, Russell W.

    2008-01-01

    With the exception of Los Angeles, perhaps no urban area in the United States is more at risk from landsliding, triggered by either precipitation or earthquake, than the San Francisco Bay region of northern California. By January each year, seasonal winter storms usually bring moisture levels of San Francisco Bay region hillsides to the point of saturation, after which additional heavy rainfall may induce landslides of various types and levels of severity. In addition, movement at any time along one of several active faults in the area may generate an earthquake large enough to trigger landslides. The danger to life and property rises each year as local populations continue to expand and more hillsides are graded for development of residential housing and its supporting infrastructure. The chapters in the text consist of: *Introduction by Russell W. Graymer *Chapter 1 Rainfall Thresholds for Landslide Activity, San Francisco Bay Region, Northern California by Raymond C. Wilson *Chapter 2 Susceptibility to Deep-Seated Landsliding Modeled for the Oakland-Berkeley Area, Northern California by Richard J. Pike and Steven Sobieszczyk *Chapter 3 Susceptibility to Shallow Landsliding Modeled for the Oakland-Berkeley Area, Northern California by Kevin M. Schmidt and Steven Sobieszczyk *Chapter 4 Landslide Hazard Modeled for the Cities of Oakland, Piedmont, and Berkeley, Northern California, from a M=7.1 Scenario Earthquake on the Hayward Fault Zone by Scott B. Miles and David K. Keefer *Chapter 5 Synthesis of Landslide-Hazard Scenarios Modeled for the Oakland-Berkeley Area, Northern California by Richard J. Pike The plates consist of: *Plate 1 Susceptibility to Deep-Seated Landsliding Modeled for the Oakland-Berkeley Area, Northern California by Richard J. Pike, Russell W. Graymer, Sebastian Roberts, Naomi B. Kalman, and Steven Sobieszczyk *Plate 2 Susceptibility to Shallow Landsliding Modeled for the Oakland-Berkeley Area, Northern California by Kevin M. Schmidt and Steven

  3. Development Of An Open System For Integration Of Heterogeneous Models For Flood Forecasting And Hazard Mitigation

    NASA Astrophysics Data System (ADS)

    Chang, W.; Tsai, W.; Lin, F.; Lin, S.; Lien, H.; Chung, T.; Huang, L.; Lee, K.; Chang, C.

    2008-12-01

    During a typhoon or heavy storm event, using various forecasting models to predict rainfall intensity, water level variation in rivers, and flooding in urban areas has proven technically feasible. In practice, however, the following two issues tend to restrain the further application of these models as a decision support system (DSS) for hazard mitigation. The first is the difficulty of integrating heterogeneous models: one has to take into account their differing formats, such as input files, output files, computational requirements, and so on. The second is that, because of this heterogeneity of models and systems, the development of a DSS requires a friendly user interface or platform to hide the complexity of the various tools from users. Users are expected to be governmental officials rather than professional experts, so a complicated DSS interface is not acceptable. Based on the above considerations, in the present study we develop an open system for integration of several simulation models for flood forecasting by adopting the FEWS (Flood Early Warning System) platform developed by WL | Delft Hydraulics. It allows us to link heterogeneous models effectively and provides suitable display modules. In addition, FEWS has been adopted by the Water Resources Agency (WRA), Taiwan as the standard operational system for river flooding management, which means this work can be more easily integrated with practical cases. In the present study, based on the FEWS platform, the basin rainfall-runoff model, SOBEK channel-routing model, and estuary tide forecasting model are linked and integrated through the physical connection of model initial and boundary definitions. The work flow of the integrated processes of models is shown in Fig. 1. This differs from the typical single-model linking used in FEWS, which only aims at data exchange without much physical consideration. So it really
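    The physical linkage of models through initial and boundary conditions can be sketched as a simple chaining adapter; the classes and toy models below are hypothetical and do not use the FEWS or SOBEK interfaces.

        from dataclasses import dataclass
        from typing import Callable, Dict, List

        @dataclass
        class TimeSeries:
            """A minimal exchange format between models (hypothetical, not FEWS)."""
            variable: str
            times: List[float]
            values: List[float]

        def chain(models: List[Callable[[Dict[str, TimeSeries]], Dict[str, TimeSeries]]],
                  forcing: Dict[str, TimeSeries]) -> Dict[str, TimeSeries]:
            """Run models in sequence; each model's outputs become boundary inputs
            for the next, mimicking the physical linkage described above."""
            state = dict(forcing)
            for model in models:
                state.update(model(state))
            return state

        # Toy stand-ins for the rainfall-runoff and channel-routing models.
        def rainfall_runoff(inp):
            rain = inp["rainfall"]
            flows = [3.0 * v for v in rain.values]            # crude runoff ratio
            return {"tributary_flow": TimeSeries("discharge", rain.times, flows)}

        def channel_routing(inp):
            q = inp["tributary_flow"]
            stages = [0.5 + 0.01 * v for v in q.values]       # crude rating curve
            return {"river_stage": TimeSeries("stage", q.times, stages)}

        result = chain([rainfall_runoff, channel_routing],
                       {"rainfall": TimeSeries("rainfall", [0, 1, 2], [5.0, 20.0, 8.0])})
        print(result["river_stage"].values)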

  4. The Role of Sister Cities’ Staff Exchanges in Developing “Learning Cities”: Exploring Necessary and Sufficient Conditions in Social Capital Development Utilizing Proportional Odds Modeling

    PubMed Central

    Buckley, Patrick Henry; Takahashi, Akio; Anderson, Amy

    2015-01-01

    In the last half century former international adversaries have become cooperators through networking and knowledge sharing for decision making aimed at improving quality of life and sustainability; nowhere has this been more striking than at the urban level, where such activity is seen as a key component in building “learning cities” through the development of social capital. Although mega-cities have been leaders in such efforts, mid-sized cities with lesser resource endowments have striven to follow by focusing on more frugal sister city type exchanges. The underlying thesis of our research is that great value can be derived from city-to-city exchanges through social capital development. However, such a study must differentiate between necessary and sufficient conditions. Past studies assumed necessary conditions were met and immediately jumped to demonstrating the existence of structural relationships by measuring networking, while further assuming that the existence of such relationships demonstrated a parallel development of cognitive social capital. Our research addresses this lacuna by stepping back and critically examining these assumptions. To accomplish this goal we use a Proportional Odds Modeling with a Cumulative Logit Link approach to demonstrate the existence of a common latent structure, hence asserting that necessary conditions are met. PMID:26114245

  5. The Role of Sister Cities' Staff Exchanges in Developing "Learning Cities": Exploring Necessary and Sufficient Conditions in Social Capital Development Utilizing Proportional Odds Modeling.

    PubMed

    Buckley, Patrick Henry; Takahashi, Akio; Anderson, Amy

    2015-07-01

    In the last half century former international adversaries have become cooperators through networking and knowledge sharing for decision making aimed at improving quality of life and sustainability; nowhere has this been more striking than at the urban level, where such activity is seen as a key component in building "learning cities" through the development of social capital. Although mega-cities have been leaders in such efforts, mid-sized cities with lesser resource endowments have striven to follow by focusing on more frugal sister city type exchanges. The underlying thesis of our research is that great value can be derived from city-to-city exchanges through social capital development. However, such a study must differentiate between necessary and sufficient conditions. Past studies assumed necessary conditions were met and immediately jumped to demonstrating the existence of structural relationships by measuring networking, while further assuming that the existence of such relationships demonstrated a parallel development of cognitive social capital. Our research addresses this lacuna by stepping back and critically examining these assumptions. To accomplish this goal we use a Proportional Odds Modeling with a Cumulative Logit Link approach to demonstrate the existence of a common latent structure, hence asserting that necessary conditions are met. PMID:26114245
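    A minimal sketch of fitting a proportional odds (cumulative logit) model by maximum likelihood is shown below; the single predictor and three-category ordinal response are synthetic, not the sister-city survey data.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import expit

        rng = np.random.default_rng(5)
        n, beta_true = 400, 1.2
        x = rng.normal(size=n)
        # Synthetic 3-category ordinal response from a latent logistic variable.
        latent = beta_true * x + rng.logistic(size=n)
        y = np.digitize(latent, bins=[-0.5, 1.0])          # categories 0, 1, 2

        def neg_loglik(params):
            beta, a1, d = params[0], params[1], np.exp(params[2])
            cuts = np.array([a1, a1 + d])                  # increasing thresholds
            # Cumulative probabilities P(Y <= k | x) under proportional odds.
            cum = expit(cuts[None, :] - beta * x[:, None])
            probs = np.column_stack([cum[:, 0], cum[:, 1] - cum[:, 0], 1 - cum[:, 1]])
            return -np.sum(np.log(probs[np.arange(n), y] + 1e-12))

        fit = minimize(neg_loglik, x0=np.zeros(3), method="BFGS")
        print("estimated slope:", round(fit.x[0], 2), "(true value 1.2)")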

  6. Visual Manipulatives for Proportional Reasoning.

    ERIC Educational Resources Information Center

    Moore, Joyce L.; Schwartz, Daniel L.

    The use of a visual representation in learning about proportional relations was studied, examining students' understandings of the invariance of a multiplicative relation on both sides of a proportion equation and the invariance of the structural relations that exist in different semantic types of proportion problems. Subjects were 49 high-ability…

  7. An approach for modeling thermal destruction of hazardous wastes in circulating fluidized bed incinerator.

    PubMed

    Patil, M P; Sonolikar, R L

    2008-10-01

    This paper presents a detailed computational fluid dynamics (CFD) based approach for modeling thermal destruction of hazardous wastes in a circulating fluidized bed (CFB) incinerator. The model is based on an Euler-Lagrangian approach in which the gas phase (continuous phase) is treated in an Eulerian reference frame, whereas the waste particulate (dispersed phase) is treated in a Lagrangian reference frame. The reaction chemistry has been modeled through a mixture fraction/PDF approach. The conservation equations for mass, momentum, energy, and mixture fraction, together with other closure equations, have been solved using the general-purpose CFD code FLUENT 4.5. A finite volume method on a structured grid has been used to solve the governing equations. The model provides detailed information on the hydrodynamics (gas velocity, particulate trajectories), gas composition (CO, CO2, O2), and temperature inside the riser. The model also allows different operating scenarios to be examined in an efficient manner. PMID:19697764
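    The assumed-PDF averaging step of the mixture fraction approach can be sketched as follows: a beta PDF, parameterized by the mean and variance of the mixture fraction, weights a state relation to give a mean scalar. The state relation and numbers below are crude placeholders, not FLUENT's chemistry tables.

        import numpy as np
        from scipy.stats import beta as beta_dist

        def assumed_pdf_mean(state_relation, f_mean, f_var, n=400):
            """Mean of a scalar given an assumed beta PDF of the mixture fraction f
            with prescribed mean and variance (the standard assumed-PDF closure)."""
            g = f_mean * (1.0 - f_mean) / f_var - 1.0     # from the two moments of f
            a, b = f_mean * g, (1.0 - f_mean) * g
            f = np.linspace(1e-4, 1.0 - 1e-4, n)
            return float(np.average(state_relation(f), weights=beta_dist.pdf(f, a, b)))

        # Crude placeholder state relation: temperature peaking near a stoichiometric
        # mixture fraction of 0.06 (illustrative numbers only).
        def state_temperature(f):
            return 300.0 + 1700.0 * np.exp(-((f - 0.06) / 0.05) ** 2)

        print(f"mean temperature: {assumed_pdf_mean(state_temperature, 0.07, 0.002):.0f} K")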

  8. A spatio-temporal model for probabilistic seismic hazard zonation of Tehran

    NASA Astrophysics Data System (ADS)

    Hashemi, Mahdi; Alesheikh, Ali Asghar; Zolfaghari, Mohammad Reza

    2013-08-01

    A precondition for all disaster management steps, building damage prediction, and construction code developments is a hazard assessment that shows the exceedance probabilities of different ground motion levels at a site considering different near- and far-field earthquake sources. The seismic sources are usually categorized as time-independent area sources and time-dependent fault sources. While the former incorporates small and medium events, the latter takes into account only the large characteristic earthquakes. In this article, a probabilistic approach is proposed to aggregate the effects of time-dependent and time-independent sources on seismic hazard. The methodology is then applied to generate three probabilistic seismic hazard maps of Tehran for 10%, 5%, and 2% exceedance probabilities in 50 years. The results indicate an increase in peak ground acceleration (PGA) values toward the southeastern part of the study area and the PGA variations are mostly controlled by the shear wave velocities across the city. In addition, the implementation of the methodology takes advantage of GIS capabilities especially raster-based analyses and representations. During the estimation of the PGA exceedance rates, the emphasis has been placed on incorporating the effects of different attenuation relationships and seismic source models by using a logic tree.
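    The logic tree combination mentioned above can be sketched as a weighted average of hazard curves from alternative attenuation relationships; the branch rates and weights below are synthetic, not those of the Tehran study.

        import numpy as np

        pga_levels = np.array([0.05, 0.1, 0.2, 0.3, 0.4])      # g

        # Annual exceedance rates predicted for one site by three alternative
        # attenuation relationships (synthetic numbers only).
        branch_rates = {
            "GMPE_A": np.array([2e-2, 8e-3, 2e-3, 6e-4, 2e-4]),
            "GMPE_B": np.array([3e-2, 1e-2, 3e-3, 1e-3, 4e-4]),
            "GMPE_C": np.array([1e-2, 5e-3, 1e-3, 3e-4, 1e-4]),
        }
        weights = {"GMPE_A": 0.5, "GMPE_B": 0.3, "GMPE_C": 0.2}

        mean_rate = sum(weights[k] * branch_rates[k] for k in branch_rates)
        poe_50yr = 1.0 - np.exp(-mean_rate * 50.0)              # Poisson assumption
        for lv, p in zip(pga_levels, poe_50yr):
            print(f"PGA {lv:4.2f} g : P(exceedance, 50 yr) = {p:.3f}")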

  9. Monitoring and forecast of hydro meteorological hazards basing on data of distant assay and mathematical modeling

    NASA Astrophysics Data System (ADS)

    Sapunov, Valentin; Dikinis, Alexandr; Voronov, Nikolai

    2014-05-01

    The Russian Federation, with its enormous area, has a low density of land-based meteorological observation points. The existing monitoring network is not sufficient for effective forecasting of weather dynamics and extreme events. With extreme events and incidents such as hurricanes becoming more frequent (roughly doubling since the beginning of the XXI century), reconstruction and "perestroika" of the monitoring network is necessary. The basis for such progress is remote monitoring from aircraft and satellites, complementing ground-based contact monitoring from the existing points and stations. Combining contact and remote observations can make hydrometeorological data and predictions more precise and reliable. Traditional physical methods should be complemented by new biological methods. According to existing research, animals may be able to anticipate extreme hazards of natural and anthropogenic origin, possibly through an interaction between biological matter and a physical field that remains under preliminary study. For example, animal behavior reportedly anticipated the fall of the Chelyabinsk meteorite in 2013. Adding biological indicators to the complex of meteorological data may increase the reliability of hazard prediction. Uniting all of these data and approaches may become the basis of the proposed mathematical hydrometeorological weather models. Putting the reported combined methods into practice may decrease losses from hydrometeorological risks and hazards and increase the stability of the national economy.

  10. Doubly Robust and Efficient Estimation of Marginal Structural Models for the Hazard Function.

    PubMed

    Zheng, Wenjing; Petersen, Maya; van der Laan, Mark J

    2016-05-01

    In social and health sciences, many research questions involve understanding the causal effect of a longitudinal treatment on mortality (or time-to-event outcomes in general). Often, treatment status may change in response to past covariates that are risk factors for mortality, and in turn, treatment status may also affect such subsequent covariates. In these situations, Marginal Structural Models (MSMs), introduced by Robins (1997, Marginal structural models, Proceedings of the American Statistical Association, Section on Bayesian Statistical Science, 1-10), are well-established and widely used tools to account for time-varying confounding. In particular, an MSM can be used to specify the intervention-specific counterfactual hazard function, i.e., the hazard for the outcome of a subject in an ideal experiment where he/she was assigned to follow a given intervention on their treatment variables. The parameters of this hazard MSM are traditionally estimated using Inverse Probability of Treatment Weighted (IPTW) estimation (Robins 1999, Marginal structural models versus structural nested models as tools for causal inference, in: Statistical Models in Epidemiology: The Environment and Clinical Trials, Springer-Verlag, 1999:95-134; Robins et al. 2000; van der Laan and Petersen 2007, Causal effect models for realistic individualized treatment and intention to treat rules, Int J Biostat 2007;3:Article 3; Robins et al. 2008, Estimation and extrapolation of optimal treatment and testing strategies, Statistics in Medicine 2008;27(23):4678-721). This estimator is easy to implement and admits Wald-type confidence intervals. However, its consistency hinges on the correct specification of the treatment allocation probabilities, and the estimates are generally sensitive to large treatment weights (especially in the presence of strong confounding), which are difficult to stabilize for dynamic treatment regimes. In this paper, we present a pooled targeted maximum likelihood
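    As a hedged sketch of the traditional IPTW estimator described above (not the pooled targeted maximum likelihood estimator the paper develops), the snippet below builds stabilized weights from a treatment model and fits a weighted pooled logistic regression approximating a discrete-time hazard MSM; all data and variable names are synthetic.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(6)
        n, max_t = 4000, 10
        # Synthetic point-treatment data: baseline confounder L, treatment A, and a
        # discrete-time event process in person-period format. Purely illustrative.
        L = rng.normal(size=n)
        A = rng.binomial(1, 1.0 / (1.0 + np.exp(-0.8 * L)))   # confounded treatment
        haz = 0.05 * np.exp(0.5 * L - 0.7 * A)                # simulated hazard

        rows = []
        for i in range(n):
            for t in range(max_t):
                event = rng.uniform() < min(haz[i], 1.0)
                rows.append((i, t, A[i], int(event)))
                if event:
                    break
        rows = np.array(rows)
        subj = rows[:, 0].astype(int)
        t_col, a_col, y_col = rows[:, 1], rows[:, 2], rows[:, 3]

        # Stabilized inverse probability of treatment weights from a treatment model.
        ps = LogisticRegression().fit(L.reshape(-1, 1), A).predict_proba(L.reshape(-1, 1))[:, 1]
        sw = np.where(A == 1, A.mean() / ps, (1.0 - A.mean()) / (1.0 - ps))[subj]

        # Weighted pooled logistic regression approximating the hazard MSM
        # logit h(t | a) = alpha_0 + alpha_1 * t + beta * a (time entered linearly).
        X = np.column_stack([a_col, t_col])
        msm = LogisticRegression(C=1e6, max_iter=1000).fit(X, y_col, sample_weight=sw)
        print("IPTW log-odds (hazard) ratio for treatment:", round(msm.coef_[0][0], 2),
              "(simulation used a conditional log-hazard effect of -0.7)")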