Proportional Hazards Models of Graduation
ERIC Educational Resources Information Center
Chimka, Justin R.; Reed-Rhoads, Teri; Barker, Kash
2008-01-01
Survival analysis is a statistical tool used to describe the duration between events. Many processes in medical research, engineering, and economics can be described using survival analysis techniques. This research involves studying engineering college student graduation using Cox proportional hazards models. Among male students with American…
Tree-augmented Cox proportional hazards models.
Su, Xiaogang; Tsai, Chih-Ling
2005-07-01
We study a hybrid model that combines Cox proportional hazards regression with tree-structured modeling. The main idea is to use step functions, provided by a tree structure, to 'augment' Cox (1972) proportional hazards models. The proposed model not only provides a natural assessment of the adequacy of the Cox proportional hazards model but also improves its model fitting without loss of interpretability. Both simulations and an empirical example are provided to illustrate the use of the proposed method.
Dynamic reliability models with conditional proportional hazards.
Hollander, M; Peña, E A
1995-01-01
A dynamic approach to the stochastic modelling of reliability systems is further explored. This modelling approach is particularly appropriate for load-sharing, software reliability, and multivariate failure-time models, where component failure characteristics are affected by their degree of use, amount of load, or extent of stresses experienced. This approach incorporates the intuitive notion that when a set of components in a coherent system fail at a certain time, there is a 'jump' from one structure function to another which governs the residual lifetimes of the remaining functioning components, and since the component lifetimes are intrinsically affected by the structure function which they constitute, then at such a failure time there should also be a jump in the stochastic structure of the lifetimes of the remaining components. For such dynamically-modelled systems, the stochastic characteristics of their jump times are studied. These properties of the jump times allow us to obtain the properties of the lifetime of the system. In particular, for a Markov dynamic model, specific expressions for the exact distribution function of the jump times are obtained for a general coherent system, a parallel system, and a series-parallel system. We derive a new family of distribution functions which describes the distributions of the jump times for a dynamically-modelled system.
A Mixture Proportional Hazards Model with Random Effects for Response Times in Tests
ERIC Educational Resources Information Center
Ranger, Jochen; Kuhn, Jörg-Tobias
2016-01-01
In this article, a new model for test response times is proposed that combines latent class analysis and the proportional hazards model with random effects in a similar vein as the mixture factor model. The model assumes the existence of different latent classes. In each latent class, the response times are distributed according to a…
A Bayesian Semiparametric Temporally-Stratified Proportional Hazards Model with Spatial Frailties
Hanson, Timothy E.; Jara, Alejandro; Zhao, Luping
2011-01-01
Incorporating temporal and spatial variation could potentially enhance the information gathered from survival data. This paper proposes a Bayesian semiparametric model for capturing spatio-temporal heterogeneity within the proportional hazards framework. The spatial correlation is introduced in the form of county-level frailties. The temporal effect is introduced by considering the stratification of the proportional hazards model, where the time-dependent hazards are indirectly modeled using a probability model for related probability distributions. With this aim, an autoregressive dependent tailfree process is introduced. The full Kullback-Leibler support of the proposed process is provided. The approach is illustrated using simulated data and data from the Surveillance, Epidemiology, and End Results database of the National Cancer Institute on patients in Iowa diagnosed with breast cancer. PMID:22247752
ERIC Educational Resources Information Center
Rasmussen, Andrew
2004-01-01
This study extends literature on recidivism after teen court to add system-level variables to demographic and sentence content as relevant covariates. Interviews with referral agents and survival analysis with proportional hazards regression supplement quantitative models that include demographic, sentencing, and case-processing variables in a…
Proportional hazards model for competing risks data with missing cause of failure
Hyun, Seunggeun; Sun, Yanqing
2012-01-01
We consider the semiparametric proportional hazards model for the cause-specific hazard function in analysis of competing risks data with missing cause of failure. The inverse probability weighted equation and augmented inverse probability weighted equation are proposed for estimating the regression parameters in the model, and their theoretical properties are established for inference. Simulation studies demonstrate that the augmented inverse probability weighted estimator is doubly robust and the proposed method is appropriate for practical use. The simulations also compare the proposed estimators with the multiple imputation estimator of Lu and Tsiatis (2001). The application of the proposed method is illustrated using data from a bone marrow transplant study. PMID:22468017
Multi-parameter regression survival modeling: An alternative to proportional hazards.
Burke, K; MacKenzie, G
2016-11-28
It is standard practice for covariates to enter a parametric model through a single distributional parameter of interest, for example, the scale parameter in many standard survival models. Indeed, the well-known proportional hazards model is of this kind. In this article, we discuss a more general approach whereby covariates enter the model through more than one distributional parameter simultaneously (e.g., scale and shape parameters). We refer to this practice as "multi-parameter regression" (MPR) modeling and explore its use in a survival analysis context. We find that multi-parameter regression leads to more flexible models which can offer greater insight into the underlying data-generating process. To illustrate the concept, we consider the two-parameter Weibull model, which leads to time-dependent hazard ratios, thus relaxing the typical proportional hazards assumption and motivating a new test of proportionality. A novel variable selection strategy is introduced for such multi-parameter regression models. It accounts for the correlation arising between the estimated regression coefficients in two or more linear predictors, a feature which has not been considered by other authors in similar settings. The methods discussed have been implemented in the mpr package in R.
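The time-dependent hazard ratio that arises when a covariate enters both Weibull parameters can be sketched numerically. The coefficients below are invented for illustration, not values from the article; the point is only that unequal shape parameters make the hazard ratio vary with time, while equal shapes recover proportional hazards.

```python
import numpy as np

def weibull_hazard(t, lam, gamma):
    """Weibull hazard h(t) = lam * gamma * t**(gamma - 1)."""
    return lam * gamma * t ** (gamma - 1.0)

def hazard_ratio(t, lam0, gam0, lam1, gam1):
    """Hazard ratio between covariate levels x = 1 and x = 0 at time t."""
    return weibull_hazard(t, lam1, gam1) / weibull_hazard(t, lam0, gam0)

# covariate shifts both the rate (lam) and shape (gamma) via log-linear terms
lam0, gam0 = np.exp(-1.0), np.exp(0.0)   # x = 0
lam1, gam1 = np.exp(-0.5), np.exp(0.3)   # x = 1: scale AND shape change

ts = np.array([0.5, 1.0, 2.0, 4.0])
print(hazard_ratio(ts, lam0, gam0, lam1, gam1))  # varies with t: PH violated
```

Setting the two shape parameters equal makes the ratio constant in t, which is exactly the proportional hazards special case that the MPR framework nests.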
Comparing proportional hazards and accelerated failure time models for survival analysis.
Orbe, Jesus; Ferreira, Eva; Núñez-Antón, Vicente
2002-11-30
This paper describes a method proposed for a censored linear regression model that can be used in the context of survival analysis. The method has the important characteristic of allowing estimation and inference without knowing the distribution of the duration variable. Moreover, it does not need the assumption of proportional hazards. Therefore, it can be an interesting alternative to the Cox proportional hazards models when this assumption does not hold. In addition, implementation and interpretation of the results is simple. In order to analyse the performance of this methodology, we apply it to two real examples and we carry out a simulation study. We present its results together with those obtained with the traditional Cox model and AFT parametric models. The new proposal seems to lead to more precise results.
Using the proportional hazards model to study heart valve replacement data.
Bunday, B D; Kiri, V A; Stoodley, K D
1992-01-01
The proportional hazards model is used to study the effect of various concomitant variables on the time to valve failure, mortality, or other complications, for patients who have had artificial heart valves inserted. The data are from a database, which is still being assembled as more information is acquired, at Killingbeck Hospital. A suite of computer programs, not specifically developed with this application in mind, has been used to carry out the exploratory data analysis, the estimation of parameters and the validation of the model. These three elements of the analysis are all illustrated. The present report is seen as a preliminary study to assess the usefulness of the proportional hazards model in this area. Follow-up work as more data are accumulated is intended.
NASA Technical Reports Server (NTRS)
Thompson, Laura A.; Chhikara, Raj S.; Conkin, Johnny
2003-01-01
In this paper we fit Cox proportional hazards models to a subset of data from the Hypobaric Decompression Sickness Databank. The data bank contains records on the time to decompression sickness (DCS) and venous gas emboli (VGE) for over 130,000 person-exposures to high altitude in chamber tests. The subset we use contains 1,321 records, with 87% censoring, and has the most recent experimental tests on DCS made available from Johnson Space Center. We build on previous analyses of this data set by considering more expanded models and more detailed model assessments specific to the Cox model. Our model, which is stratified on the quartiles of the final ambient pressure at altitude, includes the final ambient pressure at altitude as a nonlinear continuous predictor, the computed tissue partial pressure of nitrogen at altitude, and whether exercise was done at altitude. We conduct various assessments of our model, many of which were recently developed in the statistical literature, and identify where the model needs improvement. We considered the addition of frailties to the stratified Cox model but found that no significant gain was attained over a model without frailties. Finally, we validate some of the models that we fit.
NASA Astrophysics Data System (ADS)
Zhang, Qing; Hua, Cheng; Xu, Guanghua
2014-02-01
As mechanical systems increase in complexity, it is becoming more and more common to observe multiple failure modes. The system failure can be regarded as the result of interaction and competition between different failure modes. It is therefore necessary to combine multiple failure modes when analysing the failure of an overall system. In this paper, a mixture Weibull proportional hazard model (MWPHM) is proposed to predict the failure of a mechanical system with multiple failure modes. The mixed model parameters are estimated by combining historical lifetime and monitoring data of all failure modes. In addition, the system failure probability density is obtained by proportionally mixing the failure probability density of multiple failure modes. Monitoring data are input into the MWPHM to estimate the system reliability and predict the system failure time. A simulated sample set is used to verify the ability of the MWPHM to model multiple failure modes. Finally, the MWPHM and the traditional Weibull proportional hazard model (WPHM) are applied to a high-pressure water descaling pump, which has two failure modes: sealing ring wear and thrust bearing damage. Results show that the MWPHM is greatly superior in system failure prediction to the WPHM.
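The proportional mixing described above can be written as f_sys(t) = Σ_i w_i f_i(t), with the system reliability the same weighted sum of the per-mode reliabilities. A minimal numpy sketch with two hypothetical Weibull failure modes follows; the shapes, scales, and weights are invented for illustration, not estimates from the pump data.

```python
import numpy as np

def weibull_pdf(t, shape, scale):
    """Weibull failure density for a single failure mode."""
    return (shape / scale) * (t / scale) ** (shape - 1) * np.exp(-(t / scale) ** shape)

def weibull_sf(t, shape, scale):
    """Weibull reliability (survival) function for a single failure mode."""
    return np.exp(-(t / scale) ** shape)

# two hypothetical modes (e.g. wear-out vs. damage); mixing weights sum to 1
modes = [(2.0, 100.0), (1.2, 150.0)]
weights = [0.6, 0.4]

def system_pdf(t):
    """Proportionally mixed system failure density."""
    return sum(w * weibull_pdf(t, k, s) for w, (k, s) in zip(weights, modes))

def system_sf(t):
    """Proportionally mixed system reliability."""
    return sum(w * weibull_sf(t, k, s) for w, (k, s) in zip(weights, modes))
```

Because the weights sum to one, the mixture is itself a proper failure density, and the system reliability decreases monotonically as each mode's reliability does.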
[Clinical research XXII. From clinical judgment to Cox proportional hazards model].
Pérez-Rodríguez, Marcela; Rivas-Ruiz, Rodolfo; Palacios-Cruz, Lino; Talavera, Juan O
2014-01-01
Survival analyses are commonly used to determine the time to an event (for example, death). However, they can also be used for other clinical outcomes, on the condition that these are dichotomous, for example, healing time. These analyses consider the relationship with only one variable. The Cox proportional hazards model, however, is a multivariate extension of survival analysis in which other covariates that may confound the effect of the main maneuver under study, such as age, gender, or disease stage, are taken into account. The model can include both quantitative and qualitative variables. The measure of association used is called the hazard ratio (HR), which is not the same as the relative risk (RR) or odds ratio (OR). The difference is that the HR refers to the possibility that one of the groups develops the event before the other group. The Cox proportional hazards multivariate model is the most widely used in medicine when the phenomenon is studied in two dimensions: time and event.
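The interpretation above, that the HR reflects which group tends to experience the event first, can be made concrete for exponential survival times, where P(T_treat < T_ctrl) = HR/(1 + HR). This is a generic illustration, not an analysis from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
lam_treat, lam_ctrl = 0.5, 1.0          # constant hazards; HR = 0.5
n = 200_000
# numpy's exponential takes the mean (1/rate) as its scale argument
t_treat = rng.exponential(1.0 / lam_treat, n)
t_ctrl = rng.exponential(1.0 / lam_ctrl, n)

hr = lam_treat / lam_ctrl
p_treat_first = (t_treat < t_ctrl).mean()
# for exponentials, P(treated fails first) = hr / (1 + hr) = 1/3 here
print(hr, p_treat_first)
```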
Westreich, Daniel; Cole, Stephen R; Schisterman, Enrique F; Platt, Robert W
2012-08-30
Motivated by a previously published study of HIV treatment, we simulated data subject to time-varying confounding affected by prior treatment to examine some finite-sample properties of marginal structural Cox proportional hazards models. We compared (a) unadjusted, (b) regression-adjusted, (c) unstabilized, and (d) stabilized marginal structural (inverse probability-of-treatment [IPT] weighted) model estimators of effect in terms of bias, standard error, root mean squared error (MSE), and 95% confidence limit coverage over a range of research scenarios, including relatively small sample sizes and 10 study assessments. In the base-case scenario resembling the motivating example, where the true hazard ratio was 0.5, both IPT-weighted analyses were unbiased, whereas crude and adjusted analyses showed substantial bias towards and across the null. Stabilized IPT-weighted analyses remained unbiased across a range of scenarios, including relatively small sample size; however, the standard error was generally smaller in crude and adjusted models. In many cases, unstabilized weighted analysis showed a substantial increase in standard error compared with other approaches. Root MSE was smallest in the IPT-weighted analyses for the base-case scenario. In situations where time-varying confounding affected by prior treatment was absent, IPT-weighted analyses were less precise and therefore had greater root MSE compared with adjusted analyses. The 95% confidence limit coverage was close to nominal for all stabilized IPT-weighted but poor in crude, adjusted, and unstabilized IPT-weighted analysis. Under realistic scenarios, marginal structural Cox proportional hazards models performed according to expectations based on large-sample theory and provided accurate estimates of the hazard ratio.
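The stabilized inverse-probability-of-treatment weight contrasted in the abstract is P(A = a) / P(A = a | L), versus the unstabilized 1 / P(A = a | L). A toy sketch with a single binary confounder and known (not estimated) propensities shows the stabilization property: mean weight near 1 rather than near the number of treatment levels. The probabilities below are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
L = rng.binomial(1, 0.5, n)                  # binary confounder
p_a1_given_l = np.where(L == 1, 0.7, 0.3)    # true propensity P(A=1 | L)
A = rng.binomial(1, p_a1_given_l)

p_a1 = A.mean()                                            # marginal P(A=1)
denom = np.where(A == 1, p_a1_given_l, 1.0 - p_a1_given_l)  # P(A | L)
numer = np.where(A == 1, p_a1, 1.0 - p_a1)                  # P(A)

w_unstab = 1.0 / denom        # unstabilized IPT weight
w_stab = numer / denom        # stabilized IPT weight
print(w_unstab.mean(), w_stab.mean())   # ≈ 2 vs. ≈ 1
```

In the weighted pseudo-population the confounder is balanced across treatment arms, which is what allows a crude (weighted) Cox model to recover the marginal effect.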
Estimation of Stratified Mark-Specific Proportional Hazards Models with Missing Marks
Sun, Yanqing; Gilbert, Peter B.
2013-01-01
An objective of randomized placebo-controlled preventive HIV vaccine efficacy trials is to assess the relationship between the vaccine effect to prevent infection and the genetic distance of the exposing HIV to the HIV strain represented in the vaccine construct. Motivated by this objective, recently a mark-specific proportional hazards model with a continuum of competing risks has been studied, where the genetic distance of the transmitting strain is the continuous `mark' defined and observable only in failures. A high percentage of genetic marks of interest may be missing for a variety of reasons, predominantly due to rapid evolution of HIV sequences after transmission before a blood sample is drawn from which HIV sequences are measured. This research investigates the stratified mark-specific proportional hazards model with missing marks where the baseline functions may vary with strata. We develop two consistent estimation approaches, the first based on the inverse probability weighted complete-case (IPW) technique, and the second based on augmenting the IPW estimator by incorporating auxiliary information predictive of the mark. We investigate the asymptotic properties and finite-sample performance of the two estimators, and show that the augmented IPW estimator, which satisfies a double robustness property, is more efficient. PMID:23519918
[Proportional hazards model analysis of women's reproductive career in present-day Japan]
Otani, K
1989-01-01
The 1st section of this paper, using a method to decompose a change in the total marital fertility rate into the quantum and tempo effects, confirmed the major effect of birth timing on the 1970s trends in the total marital fertility rate based on the pooled data of the 8th and 9th Japanese National Fertility Surveys of 1982 and 1987, respectively. In the next section, proportional hazards model analyses of the 1st, 2nd, and 3rd birth functions for the pooled data made it clear that the 1st and 2nd birth intervals among the marriage cohorts since the end of the 1960s were shorter than those in the early 1960s even after having controlled for other variables. Given that 1 conception or more can occur during a birth interval, proportional hazards model analyses were again utilized to examine the effects of various variables on the time elapsed since marriage and the 1st conception, the time between the end of the 1st pregnancy and the 2nd conception, and the time between the end of the 2nd pregnancy and the 3rd conception. The authors found that the 1st-conception probability of the marriage cohorts since the late 1960s was smaller than that of predecessors, while the 2nd-conception probability was not affected by marriage cohort. In the last section, a logistic regression analysis of the probability of induced abortion showed that the probability of aborting a 2nd pregnancy decreased in the 1970s compared with that in the early 1960s. When a proportional hazards model analysis of the 2nd birth function was applied after omitting those cases where the 1st pregnancy did not result in birth and/or the 2nd pregnancy was aborted, the strong effect of marriage cohort on the 2nd birth probability was substantially diluted. These facts suggest that the shortened 2nd birth interval in the 1970s was partly caused by the shrinking probability of aborting a 2nd pregnancy in this period.
Extension of a Cox proportional hazards cure model when cure information is partially known
Wu, Yu; Lin, Yong; Lu, Shou-En; Li, Chin-Shang; Shih, Weichung Joe
2014-01-01
When there is evidence of long-term survivors, cure models are often used to model the survival curve. A cure model is a mixture model consisting of a cured fraction and an uncured fraction. Traditional cure models assume that the cured or uncured status of censored subjects cannot be distinguished. In practice, however, some diagnostic procedures may provide partial information about the cured or uncured status relative to certain sensitivity and specificity. The traditional cure model does not take advantage of this additional information. Motivated by a clinical study on bone injury in pediatric patients, we propose a novel extension of the traditional Cox proportional hazards (PH) cure model that incorporates the additional information about the cured status. This extension can be applied when the latency part of the cure model is modeled by the Cox PH model. Extensive simulations demonstrated that the proposed extension provides more efficient and less biased estimation, and that the higher efficiency and smaller bias are associated with higher sensitivity and specificity of the diagnostic procedures. When the proposed extended Cox PH cure model was applied to the motivating example, there was a substantial improvement in the estimation. PMID:24511081
Novel harmonic regularization approach for variable selection in Cox's proportional hazards model.
Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan
2014-01-01
Variable selection is an important issue in regression, and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in the Cox proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, including the diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso-type methods.
Sparse Estimation of Cox Proportional Hazards Models via Approximated Information Criteria
Fan, Juanjuan; Zhang, Ying
2016-01-01
We propose a new sparse estimation method for Cox (1972) proportional hazards models by optimizing an approximated information criterion. The main idea involves approximation of the ℓ0 norm with a continuous or smooth unit dent function. The proposed method bridges the best subset selection and regularisation by borrowing strength from both. It mimics the best subset selection using a penalised likelihood approach yet with no need for a tuning parameter. We further reformulate the problem with a reparameterisation step so that it reduces to one unconstrained, nonconvex, yet smooth programming problem, which can be solved efficiently as in computing the maximum partial likelihood estimator (MPLE). Furthermore, the reparameterisation tactic yields an additional advantage in terms of circumventing post-selection inference. The oracle property of the proposed method is established. Both simulated experiments and empirical examples are provided for assessment and illustration. PMID:26873398
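One way to picture the ℓ0 approximation is with a smooth "dent" function that is 0 at zero and approaches 1 for coefficients far from zero; summing it over the coefficients then approximates the number of nonzero terms. The specific function and width below are hypothetical choices for illustration, not necessarily the ones used in the paper.

```python
import numpy as np

def unit_dent(b, tau=0.05):
    """Smooth surrogate for the indicator 1{b != 0}; tau controls the width
    of the 'dent' at zero (hypothetical choice for illustration)."""
    b = np.asarray(b, dtype=float)
    return b ** 2 / (b ** 2 + tau ** 2)

beta = np.array([0.0, 0.01, 0.5, -2.0, 3.0])
approx_l0 = unit_dent(beta).sum()   # smooth approximation to the nonzero count
print(approx_l0)                    # close to 3: three clearly nonzero terms
```

Because the surrogate is smooth, the penalised partial likelihood can be optimised with standard gradient-based routines instead of a combinatorial subset search.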
Xiong, Xiaoping; Wu, Jianrong
2017-01-01
The treatment of cancer has progressed dramatically in recent decades, such that it is no longer uncommon to see a cure or long-term survival in a significant proportion of patients with various types of cancer. To adequately account for the cure fraction when designing clinical trials, cure models should be used. In this article, a sample size formula for the weighted log-rank test is derived under the fixed alternative hypothesis for proportional hazards cure models. Simulation showed that the proposed formula provides an accurate estimation of the sample size needed to design clinical trials under the proportional hazards cure models.
Gayat, Etienne; Resche-Rigon, Matthieu; Mary, Jean-Yves; Porcher, Raphaël
2012-01-01
Propensity score methods are increasingly used in medical literature to estimate treatment effect using data from observational studies. Despite many papers on propensity score analysis, few have focused on the analysis of survival data. Even within the framework of the popular proportional hazard model, the choice among marginal, stratified or adjusted models remains unclear. A Monte Carlo simulation study was used to compare the performance of several survival models to estimate both marginal and conditional treatment effects. The impact of accounting or not for pairing when analysing propensity-score-matched survival data was assessed. In addition, the influence of unmeasured confounders was investigated. After matching on the propensity score, both marginal and conditional treatment effects could be reliably estimated. Ignoring the paired structure of the data led to an increased test size due to an overestimated variance of the treatment effect. Among the various survival models considered, stratified models systematically showed poorer performance. Omitting a covariate in the propensity score model led to a biased estimation of treatment effect, but replacement of the unmeasured confounder by a correlated one allowed a marked decrease in this bias. Our study showed that propensity scores applied to survival data can lead to unbiased estimation of both marginal and conditional treatment effect, when marginal and adjusted Cox models are used. In all cases, it is necessary to account for pairing when analysing propensity-score-matched data, using a robust estimator of the variance.
Mediation Analysis with Survival Outcomes: Accelerated Failure Time vs. Proportional Hazards Models
Gelfand, Lois A.; MacKinnon, David P.; DeRubeis, Robert J.; Baraldi, Amanda N.
2016-01-01
Objective: Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. Method: We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. Results: AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome—underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. Conclusions: When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results. PMID:27065906
Generating survival times to simulate Cox proportional hazards models with time-varying covariates.
Austin, Peter C
2012-12-20
Simulations and Monte Carlo methods serve an important role in modern statistical research. They allow for an examination of the performance of statistical procedures in settings in which analytic and mathematical derivations may not be feasible. A key element in any statistical simulation is the existence of an appropriate data-generating process: one must be able to simulate data from a specified statistical model. We describe data-generating processes for the Cox proportional hazards model with time-varying covariates when event times follow an exponential, Weibull, or Gompertz distribution. We consider three types of time-varying covariates: first, a dichotomous time-varying covariate that can change at most once from untreated to treated (e.g., organ transplant); second, a continuous time-varying covariate such as cumulative exposure at a constant dose to radiation or to a pharmaceutical agent used for a chronic condition; third, a dichotomous time-varying covariate with a subject being able to move repeatedly between treatment states (e.g., current compliance or use of a medication). In each setting, we derive closed-form expressions that allow one to simulate survival times so that survival times are related to a vector of fixed or time-invariant covariates and to a single time-varying covariate. We illustrate the utility of our closed-form expressions for simulating event times by using Monte Carlo simulations to estimate the statistical power to detect as statistically significant the effect of different types of binary time-varying covariates. This is compared with the statistical power to detect as statistically significant a binary time-invariant covariate.
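For the simplest of these settings, an exponential baseline hazard with a binary covariate that switches from untreated to treated at a known time t0, the closed-form inversion of the cumulative hazard can be sketched as follows. This is a generic inverse-transform construction under those assumptions, not code from the article.

```python
import numpy as np

def sim_tvc_exponential(n, lam, beta, t0, rng):
    """Draw event times under baseline hazard lam, with a binary time-varying
    covariate switching 0 -> 1 at t0 (log hazard ratio beta).
    Inverts H(t) = lam*t for t < t0, and lam*t0 + lam*exp(beta)*(t - t0) after."""
    h = -np.log(rng.uniform(size=n))     # total cumulative hazard at the event
    cut = lam * t0                       # cumulative hazard accrued by t0
    return np.where(h < cut, h / lam, t0 + (h - cut) / (lam * np.exp(beta)))

rng = np.random.default_rng(42)
times = sim_tvc_exponential(500_000, lam=0.2, beta=np.log(2.0), t0=2.0, rng=rng)
```

Before t0 the event times behave like an exponential with rate lam; after t0 the residual times are exponential with rate lam*exp(beta), so the simulated covariate effect is exactly the intended hazard ratio.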
Katsahian, Sandrine; Resche-Rigon, Matthieu; Chevret, Sylvie; Porcher, Raphaël
2006-12-30
In the competing-risks setting, to test the effect of a covariate on the probability of one particular cause of failure, the Fine and Gray model for the subdistribution hazard can be used. However, sometimes, competing risks data cannot be considered as independent because of a clustered design, for instance in registry cohorts or multicentre clinical trials. Frailty models have been shown useful to analyse such clustered data in a classical survival setting, where only one risk acts on the population. Inclusion of random effects in the subdistribution hazard has not been assessed yet. In this work, we propose a frailty model for the subdistribution hazard. This allows first to assess the heterogeneity across clusters, then to incorporate such an effect when testing the effect of a covariate of interest. Based on simulation study, the effect of the presence of heterogeneity on testing for covariate effects was studied. Finally, the model was illustrated on a data set from a registry cohort of patients with acute myeloid leukaemia who underwent bone marrow transplantation.
Testing Goodness-of-Fit for the Proportional Hazards Model based on Nested Case-Control Data
Lu, Wenbin; Liu, Mengling; Chen, Yi-Hau
2014-01-01
Summary Nested case-control sampling is a popular design for large epidemiological cohort studies due to its cost effectiveness. A number of methods have been developed for the estimation of the proportional hazards model with nested case-control data; however, the evaluation of modeling assumption is less attended. In this paper, we propose a class of goodness-of-fit test statistics for testing the proportional hazards assumption based on nested case-control data. The test statistics are constructed based on asymptotically mean-zero processes derived from Samuelsen’s maximum pseudo-likelihood estimation method. In addition, we develop an innovative resampling scheme to approximate the asymptotic distribution of the test statistics while accounting for the dependent sampling scheme of nested case-control design. Numerical studies are conducted to evaluate the performance of our proposed approach, and an application to the Wilms’ Tumor Study is given to illustrate the methodology. PMID:25298193
Goodness-of-fit test of the stratified mark-specific proportional hazards model with continuous mark
Sun, Yanqing; Li, Mei; Gilbert, Peter B.
2014-01-01
Motivated by the need to assess HIV vaccine efficacy, previous studies proposed an extension of the discrete competing risks proportional hazards model in which the cause of failure is replaced by a continuous mark observed only at the failure time. However, the model assumptions may fail in several ways, and no diagnostic testing procedure for this situation has been proposed. We propose a goodness-of-fit test procedure for the stratified mark-specific proportional hazards model in which the regression parameters depend nonparametrically on the mark and the baseline hazard depends nonparametrically on both time and the mark. The test statistics are constructed based on the weighted cumulative mark-specific martingale residuals. The critical values of the proposed test statistics are approximated using the Gaussian multiplier method. The performance of the proposed tests is examined extensively in simulations for a variety of models under the null hypothesis and under different types of alternative models. An analysis of the 'Step' HIV vaccine efficacy trial using the proposed method is presented. The analysis suggests that the HIV vaccine candidate may increase susceptibility to HIV acquisition. PMID:26461462
Sun, Yanqing; Li, Mei; Gilbert, Peter B.
2013-01-01
For time-to-event data with finitely many competing risks, the proportional hazards model has been a popular tool for relating the cause-specific outcomes to covariates (Prentice and others, 1978. The analysis of failure times in the presence of competing risks. Biometrics 34, 541-554). Inspired by previous research in HIV vaccine efficacy trials, the cause of failure is replaced by a continuous mark observed only in subjects who fail. This article studies an extension of this approach to allow a multivariate continuum of competing risks, to better account for the fact that the candidate HIV vaccines tested in efficacy trials have contained multiple HIV sequences, with the purpose of eliciting multiple types of immune response that recognize and block different types of HIV viruses. We develop inference for the proportional hazards model in which the regression parameters depend parametrically on the marks, to avoid the curse of dimensionality, and the baseline hazard depends nonparametrically on both time and marks. Goodness-of-fit tests are constructed based on generalized weighted martingale residuals. The finite-sample performance of the proposed methods is examined through extensive simulations. The methods are applied to a vaccine efficacy trial to examine whether and how certain antigens represented inside the vaccine are relevant for protection or anti-protection against the exposing HIVs. PMID:22764174
HE, PENG; ERIKSSON, FRANK; SCHEIKE, THOMAS H.; ZHANG, MEI-JIE
2015-01-01
With competing risks data, one often needs to assess the treatment and covariate effects on the cumulative incidence function. Fine and Gray proposed a proportional hazards regression model for the subdistribution of a competing risk under the assumption that the censoring distribution and the covariates are independent. Covariate-dependent censoring sometimes occurs in medical studies, however. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with proper adjustments for covariate-dependent censoring. We consider a covariate-adjusted weight function obtained by fitting the Cox model for the censoring distribution and using the predictive probability for each individual. Our simulation study shows that the covariate-adjusted weight estimator is approximately unbiased when the censoring time depends on the covariates, and that the covariate-adjusted weight approach also works well for the variance estimator. We illustrate our methods with bone marrow transplant data from the Center for International Blood and Marrow Transplant Research (CIBMTR), in which cancer relapse and death in complete remission are two competing risks. PMID:27034534
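The covariate-adjusted weights in this paper come from a Cox model fitted to the censoring distribution. A simpler, covariate-free version of the same idea uses the Kaplan-Meier estimator of the censoring survival function G and weights each failure by 1/G(t-). The sketch below uses hypothetical data and is only the unadjusted special case, not the paper's method.

```python
# Hypothetical data: (time, event) with event=1 for the event of interest,
# event=0 for censoring.
obs = [(1, 1), (2, 0), (3, 1), (4, 1), (5, 0), (6, 1)]

def censoring_km(t):
    """Kaplan-Meier estimate G(t) of the censoring survival function,
    treating censorings as 'events' and failures as 'censored'."""
    g = 1.0
    for time, event in sorted(obs):
        if time > t:
            break
        if event == 0:  # a censoring 'event'
            at_risk = sum(1 for s, _ in obs if s >= time)
            g *= 1.0 - 1.0 / at_risk
    return g

# Inverse-probability-of-censoring weights for subjects who fail:
# 1/G evaluated just before the failure time.
weights = {t: 1.0 / censoring_km(t - 1e-9) for t, e in obs if e == 1}
```

When censoring depends on covariates, replacing `censoring_km` with a predicted survival from a Cox model for censoring gives individual-specific weights, which is the adjustment studied in the paper.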
Sasaki, Osamu; Aihara, Mitsuo; Hagiya, Koichi; Nishiura, Akiko; Ishii, Kazuo; Satoh, Masahiro
2012-02-01
The objective of this study was to confirm the stability of the genetic estimation of longevity of the Holstein population in Japan. Data on the first 10 lactation periods were obtained from the Livestock Improvement Association of Japan. Longevity was defined as the number of days from first calving until culling or censoring. DATA1 and DATA2 included the survival records for the periods 1991-2003 and 1991-2005, respectively. The proportional hazards model included the effects of the region-parity-lactation stage-milk yield class, age at first calving, the herd-year-season, and sire. The heritabilities on the original scale for DATA1 and DATA2 were 0.119 and 0.123, respectively. The estimated transmitting abilities (ETAs) of young sires in DATA1 may have been underestimated, but coefficient δ, which indicated the bias in genetic trend between DATA1 and DATA2, was not significant. The regression coefficient of ETAs between DATA1 and DATA2 was very close to 1. The proportional hazards model thus provided stable estimates of the ETAs for longevity of sires in Japan.
Asano, Junichi; Hirakawa, Akihiro; Hamada, Chikuma
2014-01-01
A cure rate model is a survival model incorporating a cure fraction, under the assumption that the population contains both uncured and cured individuals. It is a powerful statistical tool for prognostic studies, especially in cancer. The cure rate is important for making treatment decisions in clinical practice. The proportional hazards (PH) cure model can predict the cure rate for each patient; it contains a logistic regression component for the cure rate and a Cox regression component to estimate the hazard for uncured patients. A measure for quantifying the predictive accuracy of the cure rate estimated by the Cox PH cure model is required, as there has been little previous research in this area. We used the Cox PH cure model for breast cancer data; however, the area under the receiver operating characteristic curve (AUC) could not be estimated because many patients were censored. In this study, we used imputation-based AUCs to assess the predictive accuracy of the cure rate from the PH cure model. We examined the precision of these AUCs using simulation studies. The results demonstrated that the imputation-based AUCs were estimable and their biases were negligibly small in many cases, although the ordinary AUC could not be estimated. Additionally, we introduced a bias-correction method for imputation-based AUCs and found that the bias-corrected estimate successfully compensated for the overestimation in the simulation studies. We also illustrate the estimation of the imputation-based AUCs using breast cancer data.
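The PH cure model described here is a two-component mixture: a logistic model for the cure probability and a PH model for survival among the uncured. The sketch below shows only that structure, with hypothetical coefficients and an exponential baseline; in the paper both components are estimated from data, not assumed.

```python
import math

def cure_prob(x, gamma0=-0.5, gamma1=1.0):
    """Logistic component: probability of cure given covariate x
    (coefficients hypothetical)."""
    eta = gamma0 + gamma1 * x
    return 1.0 / (1.0 + math.exp(-eta))

def uncured_survival(t, x, beta=0.7, base_rate=0.1):
    """PH survival for uncured patients with an exponential baseline
    (hypothetical parameters)."""
    return math.exp(-base_rate * t * math.exp(beta * x))

def population_survival(t, x):
    """Mixture survival: cured patients never fail, uncured follow the PH model."""
    pi = cure_prob(x)
    return pi + (1.0 - pi) * uncured_survival(t, x)
```

As t grows, the population survival plateaus at the cure probability rather than tending to zero, which is why an ordinary Cox model misfits populations with a cured fraction.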
Li, Shuli; Gray, Robert J
2016-09-01
We consider methods for estimating the treatment effect and/or the covariate by treatment interaction effect in a randomized clinical trial under noncompliance with time-to-event outcome. As in Cuzick et al. (2007), assuming that the patient population consists of three (possibly latent) subgroups based on treatment preference: the ambivalent group, the insisters, and the refusers, we estimate the effects among the ambivalent group. The parameters have causal interpretations under standard assumptions. The article contains two main contributions. First, we propose a weighted per-protocol (Wtd PP) estimator through incorporating time-varying weights in a proportional hazards model. In the second part of the article, under the model considered in Cuzick et al. (2007), we propose an EM algorithm to maximize a full likelihood (FL) as well as the pseudo likelihood (PL) considered in Cuzick et al. (2007). The E step of the algorithm involves computing the conditional expectation of a linear function of the latent membership, and the main advantage of the EM algorithm is that the risk parameters can be updated by fitting a weighted Cox model using standard software and the baseline hazard can be updated using closed-form solutions. Simulations show that the EM algorithm is computationally much more efficient than directly maximizing the observed likelihood. The main advantage of the Wtd PP approach is that it is more robust to model misspecifications among the insisters and refusers since the outcome model does not impose distributional assumptions among these two groups.
Imbayarwo-Chikosi, V E; Ducrocq, V; Banga, C B; Halimani, T E; van Wyk, J B; Maiwashe, A; Dzama, K
2017-03-14
Non-genetic factors influencing functional longevity and the heritability of the trait were estimated in South African Holsteins using a piecewise Weibull proportional hazards model. Data consisted of records of 161,222 daughters of 2,051 sires calving between 1995 and 2013. The reference model included fixed time-independent age at first calving and time-dependent interactions involving lactation number, region, season and age of calving, within-herd class of milk production, fat and protein content, class of annual variation in herd size and the random herd-year effect. Random sire and maternal grandsire effects were added to the model to estimate genetic parameters. The within-lactation Weibull baseline hazards were assumed to change at 0, 270 and 380 days and at drying date. Within-herd milk production class had the largest contribution to the relative risk of culling. Relative culling risk increased with lower protein and fat per cent production classes and with late age at first calving. Cows in large shrinking herds also had a high relative risk of culling. The estimate of the sire genetic variance was 0.0472 ± 0.0017, giving a theoretical heritability estimate of 0.11 in the complete absence of censoring. Genetic trends indicated an overall decrease in functional longevity of 0.014 standard deviations from 1995 to 2007. There are opportunities for including the trait in the breeding objective for South African Holstein cattle.
Casellas, J
2016-03-01
Age at first lambing (AFL) plays a key role in the reproductive performance of sheep flocks, yet there are no genetic selection programs accounting for this trait in the sheep industry. This could be due to the non-Gaussian distribution of AFL data, which must be properly accounted for by the analytical model. In this manuscript, two different parameterizations were implemented to analyze AFL in the Ripollesa sheep breed: the skew-Gaussian mixed linear model (sGML) and the piecewise Weibull proportional hazards model (PWPH). Data were available from 10 235 ewes born between 1972 and 2013 in 14 purebred Ripollesa flocks located in the north-east region of Spain. On average, ewes had their first lambing shortly after their first year and a half of life (590.9 days), and within-flock averages ranged between 523.4 days and 696.6 days. Model fit was compared using the deviance information criterion (DIC; the smaller the DIC statistic, the better the model fit). Model sGML was clearly penalized (DIC=200 059), whereas model PWPH provided smaller estimates and reached the minimum DIC when one cut point was added to the initial Weibull model (DIC=132 545). The pure Weibull baseline and parameterizations with two or more cut points were discarded due to larger DIC estimates (>134 200). The only systematic effect influencing AFL was the season of birth: summer- and fall-born ewes showed a remarkable shortening of their AFL, whereas neither birth type nor birth weight had a relevant impact on this reproductive trait. On the other hand, heritability on the original scale derived from model PWPH was high, with a model estimate placed at 0.114 and its highest posterior density region ranging from 0.079 to 0.143. In conclusion, Gaussian-related mixed linear models should be avoided when analyzing AFL, whereas model PWPH must be viewed as a better alternative with superior goodness of fit; moreover, the additive genetic background underlying this
Gilbert, Peter B.; Sun, Yanqing
2014-01-01
This article develops hypothesis testing procedures for the stratified mark-specific proportional hazards model in the presence of missing marks. The motivating application is preventive HIV vaccine efficacy trials, where the mark is the genetic distance of an infecting HIV sequence to an HIV sequence represented inside the vaccine. The test statistics are constructed based on two-stage efficient estimators, which utilize auxiliary predictors of the missing marks. The asymptotic properties and finite-sample performances of the testing procedures are investigated, demonstrating double-robustness and effectiveness of the predictive auxiliaries to recover efficiency. The methods are applied to the RV144 vaccine trial. PMID:25641990
Chi, Peter; Aras, Radha; Martin, Katie; Favero, Carlita
2016-05-15
Fetal Alcohol Spectrum Disorders (FASD) collectively describe the constellation of effects resulting from human alcohol consumption during pregnancy. Even with public awareness, the incidence of FASD is estimated at upwards of 5% in the general population, and FASD is becoming a global health problem. The physical, cognitive, and behavioral impairments of FASD are recapitulated in animal models. Recently, rodent models utilizing voluntary drinking paradigms have been developed that accurately reflect moderate consumption, which makes up the majority of FASD cases. The range in severity of FASD characteristics reflects the frequency, dose, developmental timing, and individual susceptibility to alcohol exposure. As most rodent models of FASD use C57BL/6 mice, there is a need to expand the stocks of mice studied in order to more fully understand the complex neurobiology of this disorder. To that end, we allowed pregnant Swiss Webster mice to voluntarily drink ethanol via the drinking in the dark (DID) paradigm throughout their gestation period. Ethanol exposure did not alter gestational outcomes, as determined by no significant differences in maternal weight gain, maternal liquid consumption, litter size, or pup weight at birth or weaning. Despite seemingly normal gestation, ethanol-exposed offspring exhibited significantly altered timing to achieve developmental milestones (surface righting, cliff aversion, and open field traversal), as analyzed through mixed-effects Cox proportional hazards models. These results confirm Swiss Webster mice as a viable option for studying the incidence and causes of ethanol-induced neurobehavioral alterations during development. Future studies in our laboratory will investigate the brain regions and molecules responsible for these behavioral changes.
Unger, J B; Chen, X
1999-01-01
The increasing prevalence of adolescent smoking demonstrates the need to identify factors associated with early smoking initiation. Previous studies have shown that smoking by social network members and receptivity to pro-tobacco marketing are associated with smoking among adolescents. It is not clear, however, whether these variables also are associated with the age of smoking initiation. Using data from 10,030 California adolescents, this study identified significant correlates of age of smoking initiation using bivariate methods and a multivariate proportional hazards model. Age of smoking initiation was earlier among those adolescents whose friends, siblings, or parents were smokers, and among those adolescents who had a favorite tobacco advertisement, had received tobacco promotional items, or would be willing to use tobacco promotional items. Results suggest that the smoking behavior of social network members and pro-tobacco media influences are important determinants of age of smoking initiation. Because early smoking initiation is associated with higher levels of addiction in adulthood, tobacco control programs should attempt to counter these influences.
Ethnicity, education, and the non-proportional hazard of first marriage in Turkey.
Gore, DeAnna L; Carlson, Elwood
2010-07-01
This study uses the 1998 Turkish Demographic and Health Survey to estimate non-proportional piecewise-constant hazards for first marriage among women in Turkey by education and ethnicity, with controls for region of residence and rural-urban migration. At low education levels Kurdish speakers married earlier than women who spoke Turkish or other languages, but at high education levels Kurdish women delayed marriage more than other women. This reversal across education groups furnishes a new illustration of the minority-group-status hypothesis specifically focused on marriage as the first step in the family formation process. The ethnic contrast concerned only marriage timing in Turkey, not proportions ever marrying. Eventual marriage remained nearly universal for all groups of women. This means that an assumption of proportional duration hazards (widespread in contemporary research) across the whole range of marriage-forming ages should be replaced by models with non-proportional duration hazards.
Charvat, Hadrien; Remontet, Laurent; Bossard, Nadine; Roche, Laurent; Dejardin, Olivier; Rachet, Bernard; Launoy, Guy; Belot, Aurélien
2016-08-15
The excess hazard regression model is an approach developed for the analysis of cancer registry data to estimate net survival, that is, the survival of cancer patients that would be observed if cancer were the only cause of death. Cancer registry data typically possess a hierarchical structure: individuals from the same geographical unit share common characteristics, such as proximity to a large hospital, that may influence access to and quality of health care, so that their survival times might be correlated. As a consequence, correct statistical inference regarding the estimation of net survival and the effect of covariates should take this hierarchical structure into account. This is particularly important because many studies in cancer epidemiology aim to study the effect on the excess mortality hazard of variables, such as deprivation indexes, that are often available only at the ecological level rather than at the individual level. We developed an approach to fit a flexible excess hazard model that includes a random effect describing the unobserved heterogeneity between clusters of individuals, with the possibility of estimating non-linear and time-dependent effects of covariates. We demonstrated the overall good performance of the proposed approach in a simulation study that assessed the impact on parameter estimates of the number of clusters, their size and their level of unbalance. We then used this multilevel model to describe the effect of a deprivation index defined at the geographical level on the excess mortality hazard of patients diagnosed with cancer of the oral cavity. Copyright © 2016 John Wiley & Sons, Ltd.
PSHREG: a SAS macro for proportional and nonproportional subdistribution hazards regression.
Kohl, Maria; Plischke, Max; Leffondré, Karen; Heinze, Georg
2015-02-01
We present a new SAS macro %pshreg that can be used to fit a proportional subdistribution hazards model for survival data subject to competing risks. Our macro first modifies the input data set appropriately and then applies SAS's standard Cox regression procedure, PROC PHREG, using weights and a counting-process style of specifying survival times on the modified data set. The modified data set can also be used to estimate cumulative incidence curves for the event of interest. The application of PROC PHREG has several advantages; e.g., it directly enables the user to apply the Firth correction, which has been proposed as a solution to the problem of undefined (infinite) maximum likelihood estimates in Cox regression, frequently encountered in small-sample analyses. Deviation from proportional subdistribution hazards can be detected by inspecting Schoenfeld-type residuals and testing the correlation of these residuals with time, or by including interactions of covariates with functions of time. We illustrate the application of these extended methods for competing risks regression using our macro, which is freely available at http://cemsiis.meduniwien.ac.at/en/kb/science-research/software/statistical-software/pshreg, by means of an analysis of a real chronic kidney disease study. We discuss differences in features and capabilities of %pshreg and the recent (January 2014) SAS PROC PHREG implementation of proportional subdistribution hazards modelling.
Lachin, John M.
2013-01-01
General expressions are described for the evaluation of sample size and power for the K group Mantel-logrank test or the Cox PH model score test. Under an exponential model, the method of Lachin and Foulkes [1] for the 2 group case is extended to the K ≥ 2 group case using the non-centrality parameter of the K – 1 df chi-square test. Similar results are also shown to apply to the K group score test in a Cox PH model. Lachin and Foulkes [1] employed a truncated exponential distribution to provide for a non-linear rate of enrollment. Expressions for the mean time of enrollment and the expected follow-up time in the presence of exponential losses to follow-up are presented. When used with the expression for the non-centrality parameter of the test, equations are derived for the evaluation of sample size and power under specific designs with R years of recruitment and T years total duration. Sample size and power are also described for a stratified-adjusted K group test and for the assessment of a group by stratum interaction. Similar computations are described for a stratified-adjusted analysis of a quantitative covariate and a test of a stratum by covariate interaction in the Cox PH model. PMID:23670965
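For the two-group special case, a widely used relative of the noncentrality-parameter approach described above is Schoenfeld's (1983) formula for the required number of events under the Cox model. The sketch below hard-codes the normal quantiles for two-sided alpha of 0.05 and 80% power; the allocation fraction and hazard ratio are inputs.

```python
import math

def schoenfeld_events(hr, alloc=0.5, z_alpha=1.959964, z_beta=0.841621):
    """Required total number of events for the two-group logrank/Cox score test.
    Defaults: two-sided alpha = 0.05 (z = 1.96), power = 0.80 (z = 0.84).
    hr: hazard ratio to detect; alloc: proportion allocated to group 1."""
    return (z_alpha + z_beta) ** 2 / (alloc * (1 - alloc) * math.log(hr) ** 2)

# Example: detecting a hazard ratio of 0.67 with 1:1 allocation
d = schoenfeld_events(0.67)
```

Note the formula counts events, not subjects; converting to a sample size requires the event probability under the accrual and follow-up design, which is where the truncated-exponential enrollment expressions above come in.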
Crossing Hazard Functions in Common Survival Models.
Zhang, Jiajia; Peng, Yingwei
2009-10-15
Crossing hazard functions have extensive applications in modeling survival data. However, existing studies in the literature mainly focus on comparing crossed hazard functions and estimating the time at which the hazard functions cross, and there is little theoretical work on conditions under which hazard functions from a model will have a crossing. In this paper, we investigate crossing status of hazard functions from the proportional hazards (PH) model, the accelerated hazard (AH) model, and the accelerated failure time (AFT) model. We provide and prove conditions under which the hazard functions from the AH and the AFT models have no crossings or a single crossing. A few examples are also provided to demonstrate how the conditions can be used to determine crossing status of hazard functions from the three models.
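Under the PH model with time-fixed covariates, hazard ratios are constant, so the hazards cannot cross; but two Weibull hazards with different shape parameters cross exactly once. The worked sketch below, which assumes the standard parameterization h(t) = (k/λ)(t/λ)^(k-1), solves h1(t) = h2(t) analytically and verifies the solution numerically.

```python
import math

def weibull_hazard(t, shape, scale):
    """Weibull hazard h(t) = (k/lam) * (t/lam)**(k-1)."""
    return (shape / scale) * (t / scale) ** (shape - 1)

def crossing_time(k1, l1, k2, l2):
    """Analytic solution of h1(t) = h2(t) for Weibull hazards with k1 != k2,
    obtained by equating the log hazards and solving for log t."""
    num = math.log(k2 / k1) + k1 * math.log(l1) - k2 * math.log(l2)
    return math.exp(num / (k1 - k2))

# An increasing hazard (k=2) and a decreasing hazard (k=0.5) cross once.
t_star = crossing_time(2.0, 1.0, 0.5, 1.0)
```

Before the crossing the decreasing hazard dominates; after it the increasing hazard does, which is the single-crossing pattern the paper characterizes for the AH and AFT models.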
Crager, Michael R; Tang, Gong
We propose a method for assessing an individual patient's risk of a future clinical event using clinical trial or cohort data and Cox proportional hazards regression, combining the information from several studies using meta-analysis techniques. The method combines patient-specific estimates of the log cumulative hazard across studies, weighting by the relative precision of the estimates, using either fixed- or random-effects meta-analysis calculations. Risk assessment can be done for any future patient using a few key summary statistics determined once and for all from each study. Generalizations of the method to logistic regression and linear models are immediate. We evaluate the methods using simulation studies and illustrate their application using real data.
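The fixed-effects combination described here is standard inverse-variance weighting: each study's estimate is weighted by the reciprocal of its squared standard error. A sketch with hypothetical study-level log cumulative hazard estimates:

```python
# Hypothetical study-specific log cumulative hazard estimates and standard errors
estimates = [(-1.20, 0.30), (-0.95, 0.25), (-1.40, 0.40)]

def fixed_effect(pairs):
    """Inverse-variance weighted (fixed-effect) pooled estimate and its SE."""
    weights = [1.0 / se ** 2 for _, se in pairs]
    pooled = sum(w * est for (est, _), w in zip(pairs, weights)) / sum(weights)
    se = sum(weights) ** -0.5
    return pooled, se

pooled, pooled_se = fixed_effect(estimates)
```

The pooled standard error is smaller than any single study's, which is the precision gain from combining studies; a random-effects version would add a between-study variance component to each weight.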
Estimating Regression Parameters in an Extended Proportional Odds Model
Chen, Ying Qing; Hu, Nan; Cheng, Su-Chun; Musoke, Philippa; Zhao, Lue Ping
2012-01-01
The proportional odds model may serve as a useful alternative to the Cox proportional hazards model for studying the association between covariates and survival functions in medical studies. In this article, we study an extended proportional odds model that incorporates so-called “external” time-varying covariates. In the extended model, regression parameters have a direct interpretation of comparing survival functions, without specifying the baseline survival odds function. Semiparametric and maximum likelihood estimation procedures are proposed for the extended model. Our methods are demonstrated by Monte Carlo simulations and applied to a landmark randomized clinical trial of a short course of Nevirapine (NVP) for mother-to-child transmission (MTCT) of human immunodeficiency virus type-1 (HIV-1). An additional application includes analysis of the well-known Veterans Administration (VA) Lung Cancer Trial. PMID:22904583
Progress in studying scintillator proportionality: Phenomenological model
Bizarri, Gregory; Cherepy, Nerine; Choong, Woon-Seng; Hull, Giulia; Moses, William; Payne, Stephen; Singh, Jai; Valentine, John; Vasilev, Andrey; Williams, Richard
2009-04-30
We present a model to describe the origin of the non-proportional dependence of scintillator light yield on the energy of an ionizing particle. The non-proportionality is discussed in terms of energy relaxation channels and their linear and non-linear dependences on the deposited energy. In this approach, the scintillation response is described as a function of the deposited energy and the kinetic rates of each relaxation channel. This mathematical framework allows both a qualitative interpretation and a quantitative fitting representation of the scintillation non-proportionality response as a function of kinetic rates. The method was successfully applied to thallium-doped sodium iodide measured with SLYNCI, a new facility using the Compton coincidence technique. Finally, attention is given to the physical meaning of the dominant relaxation channels and to the potential causes responsible for the scintillation non-proportionality. We find that thallium-doped sodium iodide behaves as if non-proportionality is due to competition between radiative recombination and non-radiative Auger processes.
NASA CONNECT: Proportionality: Modeling the Future
NASA Technical Reports Server (NTRS)
2000-01-01
'Proportionality: Modeling the Future' is the sixth of seven programs in the 1999-2000 NASA CONNECT series. Produced by NASA Langley Research Center's Office of Education, NASA CONNECT is an award-winning series of instructional programs designed to enhance the teaching of math, science and technology concepts in grades 5-8. NASA CONNECT establishes the 'connection' between the mathematics, science, and technology concepts taught in the classroom and NASA research. Each program in the series supports the national mathematics, science, and technology standards; includes a resource-rich teacher guide; and uses a classroom experiment and web-based activity to complement and enhance the math, science, and technology concepts presented in the program. NASA CONNECT is FREE, and the programs in the series are in the public domain. Visit our web site and register: http://connect.larc.nasa.gov. In 'Proportionality: Modeling the Future', students will examine how patterns, measurement, ratios, and proportions are used in the research, development, and production of airplanes.
Boron-10 Lined Proportional Counter Model Validation
Lintereur, Azaree T.; Siciliano, Edward R.; Kouzes, Richard T.
2012-06-30
The Department of Energy Office of Nuclear Safeguards (NA-241) is supporting the project “Coincidence Counting With Boron-Based Alternative Neutron Detection Technology” at Pacific Northwest National Laboratory (PNNL) for the development of an alternative neutron coincidence counter. The goal of this project is to design, build and demonstrate a boron-lined proportional tube-based alternative system in the configuration of a coincidence counter. This report discusses the validation studies performed to establish the degree of accuracy of the computer modeling methods currently used to simulate the response of boron-lined tubes. This is the precursor to developing models for the uranium neutron coincidence collar under Task 2 of this project.
Mathematically modelling proportions of Japanese populations by industry
NASA Astrophysics Data System (ADS)
Hirata, Yoshito
2016-10-01
I propose a mathematical model for temporal changes in the proportions of industrial sectors. I prove that the model keeps the proportions for the primary, secondary, and tertiary sectors between 0 and 100% and preserves their total as 100%. The model fits the Japanese historical data between 1950 and 2005 for the population proportions by industry very well. The model also predicts that the proportion for the secondary industry becomes negligible, falling below 1% by around 2080.
Soh, Chang-Heok; Harrington, David P; Zaslavsky, Alan M
2008-03-01
When variable selection with stepwise regression and model fitting are conducted on the same data set, competition for inclusion in the model induces a selection bias in coefficient estimators away from zero. In proportional hazards regression with right-censored data, selection bias inflates the absolute value of parameter estimate of selected parameters, while the omission of other variables may shrink coefficients toward zero. This paper explores the extent of the bias in parameter estimates from stepwise proportional hazards regression and proposes a bootstrap method, similar to those proposed by Miller (Subset Selection in Regression, 2nd edn. Chapman & Hall/CRC, 2002) for linear regression, to correct for selection bias. We also use bootstrap methods to estimate the standard error of the adjusted estimators. Simulation results show that substantial biases could be present in uncorrected stepwise estimators and, for binary covariates, could exceed 250% of the true parameter value. The simulations also show that the conditional mean of the proposed bootstrap bias-corrected parameter estimator, given that a variable is selected, is moved closer to the unconditional mean of the standard partial likelihood estimator in the chosen model, and to the population value of the parameter. We also explore the effect of the adjustment on estimates of log relative risk, given the values of the covariates in a selected model. The proposed method is illustrated with data sets in primary biliary cirrhosis and in multiple myeloma from the Eastern Cooperative Oncology Group.
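The Miller-style bootstrap bias correction referenced here has the generic form theta_bc = 2*theta_hat - mean(theta_boot). The sketch below applies it to the divide-by-n variance estimator rather than to stepwise Cox regression, purely to illustrate the mechanics; data and seed are arbitrary.

```python
import random

random.seed(7)
sample = [random.gauss(0.0, 2.0) for _ in range(30)]

def plugin_var(xs):
    """Biased (divide-by-n) plug-in variance estimator."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def bootstrap_bias_corrected(xs, stat, b=500):
    """Basic bootstrap bias correction: theta_bc = 2*theta_hat - mean(theta*).
    The mean of the bootstrap replicates estimates E[theta_hat], so the
    difference from theta_hat estimates the bias, which is then subtracted."""
    theta_hat = stat(xs)
    boots = []
    for _ in range(b):
        resample = [random.choice(xs) for _ in xs]
        boots.append(stat(resample))
    return 2.0 * theta_hat - sum(boots) / b

theta_hat = plugin_var(sample)
theta_bc = bootstrap_bias_corrected(sample, plugin_var)
```

For the selection problem in the paper, `stat` would rerun the entire stepwise selection and Cox fit on each resample, so that the correction captures the bias induced by competition for inclusion, not just estimation bias.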
Identifying and modeling safety hazards
DANIELS,JESSE; BAHILL,TERRY; WERNER,PAUL W.
2000-03-29
The hazard model described in this paper is designed to accept data over the Internet from distributed databases. A hazard object template is used to ensure that all necessary descriptors are collected for each object. Three methods for combining the data are compared and contrasted. Three methods are used for handling the three types of interactions between the hazard objects.
2013-01-01
Background: In longitudinal studies where subjects experience recurrent incidents over a period of time, such as respiratory infections, fever or diarrhea, statistical methods are required to take into account the within-subject correlation. Methods: For repeated events data with censored failure times, the independent increment (AG), marginal (WLW) and conditional (PWP) models are three multiple-failure models that generalize Cox’s proportional hazards model. In this paper, we assess the efficiency, accuracy and robustness of all three models under simulated scenarios with varying degrees of within-subject correlation, censoring levels, maximum number of possible recurrences and sample size. We also study the methods’ performance on a real dataset from a cohort study with bronchial obstruction. Results: We find substantial differences between methods, and no single method is optimal. AG and PWP seem preferable to WLW for low correlation levels, but the situation reverses for high correlations. Conclusions: All methods are stable in the presence of censoring, worsen with increasing recurrence levels, and share a bias problem which, among other consequences, makes asymptotic normal confidence intervals not fully reliable, although they are well developed theoretically. PMID:23883000
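The AG, WLW and PWP models are all fitted on an expanded data layout; they differ mainly in how risk intervals and strata are defined. A sketch of the counting-process expansion for one hypothetical subject (for the PWP conditional model the stratum is the event order; the AG model uses the same intervals but ignores the stratum):

```python
def counting_process_rows(subject_id, event_times, followup_end):
    """Expand one subject's recurrent event times into (start, stop], status rows.
    Each event closes one interval and opens the next; a trailing censored
    interval covers the time from the last event to the end of follow-up."""
    rows, start = [], 0.0
    for k, t in enumerate(sorted(event_times), start=1):
        rows.append({"id": subject_id, "start": start, "stop": t,
                     "status": 1, "stratum": k})
        start = t
    if start < followup_end:
        rows.append({"id": subject_id, "start": start, "stop": followup_end,
                     "status": 0, "stratum": len(event_times) + 1})
    return rows

# Hypothetical subject with events at times 3 and 7, followed to time 10
rows = counting_process_rows("A", [3.0, 7.0], 10.0)
```

The WLW marginal model instead gives every subject a row for each possible event number, all starting at time zero, which is one source of the performance differences reported above.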
Welfare Returns and Temporary Time Limits: A Proportional Hazard Model
ERIC Educational Resources Information Center
Albert, Vicky N.; King, William C.; Iaci, Ross
2007-01-01
This study analyzes welfare returns for families who leave welfare for a "sit-out" period of 12 months in response to a temporary time limit requirement in Nevada. Findings reveal that relatively few families return for cash assistance after sitting out and that the majority who do return soon after their sit-out period is complete.…
NASA Technical Reports Server (NTRS)
Kattan, Michael W.; Hess, Kenneth R.; Kattan, Michael W.
1998-01-01
New computationally intensive tools for medical survival analyses include recursive partitioning (also called CART) and artificial neural networks. A remaining challenge is to better understand the behavior of these techniques, in an effort to know when they will be effective tools. Theoretically they may overcome limitations of the traditional multivariable survival technique, the Cox proportional hazards regression model. Experiments were designed to test whether the new tools would, in practice, overcome these limitations. Two datasets in which theory suggests CART and the neural network should outperform the Cox model were selected. The first was a published leukemia dataset manipulated to have a strong interaction that CART should detect. The second was a published cirrhosis dataset with pronounced nonlinear effects that a neural network should fit. Repeated sampling of 50 training and testing subsets was applied to each technique. The concordance index C was calculated as a measure of predictive accuracy by each technique on the testing dataset. In the interaction dataset, CART outperformed Cox (P less than 0.05) with a C improvement of 0.1 (95% CI, 0.08 to 0.12). In the nonlinear dataset, the neural network outperformed the Cox model (P less than 0.05), but by a very slight amount (0.015). As predicted by theory, CART and the neural network were able to overcome limitations of the Cox model. Experiments like these are important to increase our understanding of when one of these new techniques will outperform the standard Cox model. Further research is necessary to predict which technique will do best a priori and to assess the magnitude of superiority.
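The concordance index C used in this abstract can be computed directly from its definition: among pairs where the ordering of failure times is observable under right censoring, count the fraction in which the higher predicted risk belongs to the subject who fails first. A minimal sketch with hypothetical data:

```python
def concordance_index(times, events, scores):
    """Harrell's C: among usable pairs, the fraction where the higher risk
    score belongs to the subject that fails earlier. Score ties count 1/2."""
    concordant, usable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is usable if subject i is observed to fail before
            # subject j's (event or censoring) time.
            if events[i] == 1 and times[i] < times[j]:
                usable += 1
                if scores[i] > scores[j]:
                    concordant += 1.0
                elif scores[i] == scores[j]:
                    concordant += 0.5
    return concordant / usable

# Hypothetical: higher score = higher predicted risk, perfectly ranked here
c = concordance_index([2, 4, 6, 8], [1, 1, 0, 1], [0.9, 0.7, 0.4, 0.2])
```

C of 0.5 corresponds to random prediction and 1.0 to perfect ranking, so the 0.1 improvement reported for CART above is substantial on this scale.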
Two models for evaluating landslide hazards
Davis, J.C.; Chung, C.-J.; Ohlmacher, G.C.
2006-01-01
Two alternative procedures for estimating landslide hazards were evaluated using data on topographic digital elevation models (DEMs) and bedrock lithologies in an area adjacent to the Missouri River in Atchison County, Kansas, USA. The two procedures are based on the likelihood ratio model but utilize different assumptions. The empirical likelihood ratio model is based on non-parametric empirical univariate frequency distribution functions under an assumption of conditional independence, while the multivariate logistic discriminant model assumes that likelihood ratios can be expressed in terms of logistic functions. The relative hazards of occurrence of landslides were estimated by an empirical likelihood ratio model and by multivariate logistic discriminant analysis. Predictor variables consisted of grids containing topographic elevations, slope angles, and slope aspects calculated from a 30-m DEM. An integer grid of coded bedrock lithologies taken from digitized geologic maps was also used as a predictor variable. Both statistical models yield relative estimates in the form of the proportion of total map area predicted to already contain or to be the site of future landslides. The stabilities of estimates were checked by cross-validation of results from random subsamples, using each of the two procedures. Cell-by-cell comparisons of hazard maps made by the two models show that the two sets of estimates are virtually identical. This suggests that the empirical likelihood ratio and the logistic discriminant analysis models are robust with respect to the conditional independence assumption and the logistic function assumption, respectively, and that either model can be used successfully to evaluate landslide hazards. © 2006.
Computer Model Locates Environmental Hazards
NASA Technical Reports Server (NTRS)
2008-01-01
Catherine Huybrechts Burton founded San Francisco-based Endpoint Environmental (2E) LLC in 2005 while she was a student intern and project manager at Ames Research Center with NASA's DEVELOP program. The 2E team created the Tire Identification from Reflectance model, which algorithmically processes satellite images using turnkey technology to retain only the darkest parts of an image. This model allows 2E to locate piles of rubber tires, which often are stockpiled illegally and cause hazardous environmental conditions and fires.
Models of volcanic eruption hazards
Wohletz, K.H.
1992-06-01
Volcanic eruptions pose an ever-present but poorly constrained hazard to life and property for geothermal installations in volcanic areas. Because eruptions occur sporadically and may limit field access, quantitative and systematic field studies of eruptions are difficult to complete. Circumventing this difficulty, laboratory models and numerical simulations are pivotal in building our understanding of eruptions. For example, the results of fuel-coolant interaction experiments show that magma-water interaction controls many eruption styles. Applying these results, increasing numbers of field studies now document and interpret the role of external water in eruptions. Similarly, numerical simulations solve the fundamental physics of high-speed fluid flow and give quantitative predictions that elucidate the complexities of pyroclastic flows and surges. A primary goal of these models is to guide geologists in searching for critical field relationships and making their interpretations. Coupled with field work, modeling is beginning to allow more quantitative and predictive volcanic hazard assessments.
Modeling multivariate survival data by a semiparametric random effects proportional odds model.
Lam, K F; Lee, Y W; Leung, T L
2002-06-01
In this article, the focus is on the analysis of multivariate survival time data with various types of dependence structures. Examples of multivariate survival data include clustered data and repeated measurements from the same subject, such as the interrecurrence times of cancer tumors. A random effect semiparametric proportional odds model is proposed as an alternative to the proportional hazards model. The distribution of the random effects is assumed to be multivariate normal and the random effect is assumed to act additively to the baseline log-odds function. This class of models, which includes the usual shared random effects model, the additive variance components model, and the dynamic random effects model as special cases, is highly flexible and is capable of modeling a wide range of multivariate survival data. A unified estimation procedure is proposed to estimate the regression and dependence parameters simultaneously by means of a marginal-likelihood approach. Unlike the fully parametric case, the regression parameter estimate is not sensitive to the choice of correlation structure of the random effects. The marginal likelihood is approximated by the Monte Carlo method. Simulation studies are carried out to investigate the performance of the proposed method. The proposed method is applied to two well-known data sets, including clustered data and recurrent event times data.
Modeling lahar behavior and hazards
Manville, Vernon; Major, Jon J.; Fagents, Sarah A.
2013-01-01
Lahars are highly mobile mixtures of water and sediment of volcanic origin that are capable of traveling tens to more than 100 km at speeds exceeding tens of kilometres per hour. Such flows are among the most serious ground-based hazards at many volcanoes because of their sudden onset, rapid advance rates, long runout distances, high energy, ability to transport large volumes of material, and tendency to flow along existing river channels where populations and infrastructure are commonly concentrated. They can grow in volume and peak discharge through erosion and incorporation of external sediment and/or water, inundate broad areas, and leave deposits many meters thick. Furthermore, lahars can recur for many years to decades after an initial volcanic eruption, as fresh pyroclastic material is eroded and redeposited during rainfall events, resulting in a spatially and temporally evolving hazard. Improving understanding of the behavior of these complex, gravitationally driven, multi-phase flows is key to mitigating the threat to communities at lahar-prone volcanoes. However, their complexity and evolving nature pose significant challenges to developing the models of flow behavior required for delineating their hazards and hazard zones.
NASA Technical Reports Server (NTRS)
Taneja, Vidya S.
1996-01-01
In this paper we develop the mathematical theory of proportional and scale change models to perform reliability analysis. The results obtained will be applied to the Reaction Control System (RCS) thruster valves on an orbiter. With the advent of extended EVAs associated with PROX OPS (ISSA & MIR) and docking, the loss of a thruster valve now takes on an expanded safety significance. Previous studies assume a homogeneous population of components with each component having the same failure rate. However, as various components experience different stresses and are exposed to different environments, their failure rates change with time. In this paper we model the reliability of thruster valves by treating these valves as a censored repairable system. The model for each valve will take the form of a nonhomogeneous process with an intensity function that is either treated as a proportional hazard model or a scale change random effects hazard model. Each component has an associated z, an independent realization of the random variable Z from a distribution G(z). This unobserved quantity z can be used to describe heterogeneity systematically. For various models, methods for estimating the model parameters using censored data will be developed. Available field data (from previously flown flights) are from non-renewable systems. The estimated failure rate using such data will need to be modified for renewable systems such as thruster valves.
Loeys, T; Goetghebeur, E
2003-03-01
Survival data from randomized trials are most often analyzed in a proportional hazards (PH) framework that follows the intention-to-treat (ITT) principle. When not all the patients on the experimental arm actually receive the assigned treatment, the ITT-estimator mixes its effect on treatment compliers with its absence of effect on noncompliers. The structural accelerated failure time (SAFT) models of Robins and Tsiatis are designed to consistently estimate causal effects on the treated, without direct assumptions about the compliance selection mechanism. The traditional PH-model, however, has not yet led to such causal interpretation. In this article, we examine a PH-model of treatment effect on the treated subgroup. While potential treatment compliance is unobserved in the control arm, we derive an estimating equation for the Compliers PROPortional Hazards Effect of Treatment (C-PROPHET). The jackknife is used for bias correction and variance estimation. The method is applied to data from a recently finished clinical trial in cancer patients with liver metastases.
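The jackknife used above for bias correction and variance estimation has a simple generic form: recompute the estimator with each observation left out, then combine the leave-one-out values. The sketch below is the textbook leave-one-out jackknife, not the C-PROPHET code itself.

```python
def jackknife(data, estimator):
    """Leave-one-out jackknife: returns the bias-corrected estimate and
    the jackknife variance of `estimator` evaluated on `data` (a list)."""
    n = len(data)
    theta_full = estimator(data)
    loo = [estimator(data[:i] + data[i + 1:]) for i in range(n)]
    theta_bar = sum(loo) / n
    bias = (n - 1) * (theta_bar - theta_full)   # jackknife bias estimate
    var = (n - 1) / n * sum((t - theta_bar) ** 2 for t in loo)
    return theta_full - bias, var
```

For an unbiased estimator such as the sample mean, the correction vanishes and the jackknife variance reduces to the usual s²/n.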
A Class of Semiparametric Transformation Models for Survival Data with a Cured Proportion
Choi, Sangbum; Huang, Xuelin; Chen, Yi-Hau
2013-01-01
We propose a new class of semiparametric regression models based on a multiplicative frailty assumption with a discrete frailty, which may account for a cured subgroup in the population. The cure model framework is then recast as a problem with a transformation model. The proposed models can explain a broad range of nonproportional hazards structures along with a cured proportion. An efficient and simple algorithm based on the martingale process is developed to locate the nonparametric maximum likelihood estimator. Unlike existing expectation-maximization based methods, our approach directly maximizes a nonparametric likelihood function, and the calculation of consistent variance estimates is immediate. The proposed method is useful for resolving identifiability features embedded in semiparametric cure models. Simulation studies are presented to demonstrate the finite sample properties of the proposed method. A case study of stage III soft-tissue sarcoma is given as an illustration. PMID:23760878
Modelling boron-lined proportional counter response to neutrons.
Shahri, A; Ghal-Eh, N; Etaati, G R
2013-09-01
The detailed Monte Carlo simulation of a boron-lined proportional counter response to a neutron source has been presented. The MCNP4C and experimental data on different source-moderator geometries have been given for comparison. The influence of different irradiation geometries and boron-lining thicknesses on the detector response has been studied.
Examining Proportional Representation of Ethnic Groups within the SWPBIS Model
ERIC Educational Resources Information Center
Jewell, Kelly
2012-01-01
The quantitative study seeks to analyze whether the School-wide Positive Behavior Intervention and Support (SWPBIS) model reduces the likelihood that minority students will receive more individualized supports due to behavior problems. In theory, the SWPBIS model should reflect a 3-tier system with tier 1 representing approximately 80%, tier 2 representing…
Yan, Ying; Yi, Grace Y
2016-07-01
Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and changes in the hazard function. New insights into measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods.
Model building in nonproportional hazard regression.
Rodríguez-Girondo, Mar; Kneib, Thomas; Cadarso-Suárez, Carmen; Abu-Assi, Emad
2013-12-30
Recent developments of statistical methods allow for a very flexible modeling of covariates affecting survival times via the hazard rate, including also the inspection of possible time-dependent associations. Despite their immediate appeal in terms of flexibility, these models typically introduce additional difficulties when a subset of covariates and the corresponding modeling alternatives have to be chosen, that is, for building the most suitable model for given data. This is particularly true when potentially time-varying associations are present. We propose to use a piecewise exponential representation of the original survival data to link hazard regression with estimation schemes based on the Poisson likelihood, making recent advances for model building in exponential family regression accessible also in the nonproportional hazard regression context. A two-stage stepwise selection approach, an approach based on doubly penalized likelihood, and a componentwise functional gradient descent approach are adapted to the piecewise exponential regression problem. These three techniques were compared via an intensive simulation study. An application to prognosis after discharge for patients who suffered a myocardial infarction supplements the simulation to demonstrate the pros and cons of the approaches in real data analyses.
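The piecewise exponential representation amounts to expanding each subject's follow-up into interval-specific records carrying an exposure time and an event indicator, the person-period format on which a Poisson likelihood can operate. A minimal sketch of that data expansion (illustrative, not the authors' implementation):

```python
def split_piecewise(time, event, cuts):
    """Expand one subject's (time, event) pair into piecewise-exponential
    records (interval_index, exposure_time, died_in_interval), splitting
    follow-up at the given cut points."""
    records = []
    bounds = [0.0] + list(cuts) + [float("inf")]
    for k in range(len(bounds) - 1):
        lo, hi = bounds[k], bounds[k + 1]
        if time <= lo:
            break  # follow-up ended before this interval
        exposure = min(time, hi) - lo
        died = int(event and time <= hi)  # event falls in this interval
        records.append((k, exposure, died))
    return records
```

Fitting a Poisson model to the `died` indicators with log-exposure offsets then reproduces a piecewise exponential hazard regression, which is what lets exponential-family model-building machinery be reused.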
ERIC Educational Resources Information Center
Misailadou, Christina; Williams, Julian
2003-01-01
We report a study of 10-14 year old children's use of additive strategies while solving ratio and proportion tasks. Rasch methodology was used to develop a diagnostic instrument that reveals children's misconceptions. Two versions of this instrument, one with "models" thought to facilitate proportional reasoning and one without were…
Yi, Grace Y; He, Wenqing
2012-05-01
It has been well known that ignoring measurement error may result in substantially biased estimates in many contexts including linear and nonlinear regressions. For survival data with measurement error in covariates, there has been extensive discussion in the literature with the focus on proportional hazards (PH) models. Recently, research interest has extended to accelerated failure time (AFT) and additive hazards (AH) models. However, the impact of measurement error on other models, such as the proportional odds model, has received relatively little attention, although these models are important alternatives when PH, AFT, or AH models are not appropriate to fit data. In this paper, we investigate this important problem and study the bias induced by the naive approach of ignoring covariate measurement error. To adjust for the induced bias, we describe the simulation-extrapolation method. The proposed method enjoys a number of appealing features. Its implementation is straightforward and can be accomplished with minor modifications of existing software. More importantly, the proposed method does not require modeling the covariate process, which is quite attractive in practice. As the precise values of error-prone covariates are often not observable, any modeling assumption on such covariates has the risk of model misspecification, hence yielding invalid inferences if this happens. The proposed method is carefully assessed both theoretically and empirically. Theoretically, we establish the asymptotic normality for resulting estimators. Numerically, simulation studies are carried out to evaluate the performance of the estimators as well as the impact of ignoring measurement error, along with an application to a data set arising from the Busselton Health Study. Sensitivity of the proposed method to misspecification of the error model is studied as well.
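The simulation-extrapolation (SIMEX) idea described above can be shown in a toy setting: deliberately re-add measurement error at increasing levels, watch the naive estimate degrade, and extrapolate the trend back to the no-error level. The sketch below uses a simple linear-regression slope with a known error standard deviation `sigma_u` (all names illustrative; the paper's setting is survival models, not linear regression).

```python
import random

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def simex_slope(w, y, sigma_u, n_sim=200, seed=0):
    """SIMEX sketch: re-add noise at levels lambda in {0, 1, 2}, average
    the naive slope at each level, fit the exact quadratic through the
    three points, and extrapolate back to lambda = -1 (no error)."""
    rng = random.Random(seed)
    means = []
    for lam in (0.0, 1.0, 2.0):
        slopes = [
            ols_slope([wi + lam ** 0.5 * sigma_u * rng.gauss(0, 1)
                       for wi in w], y)
            for _ in range(n_sim)
        ]
        means.append(sum(slopes) / n_sim)
    m0, m1, m2 = means
    return 3 * m0 - 3 * m1 + m2  # Lagrange quadratic evaluated at lambda = -1
```

As the abstract notes, the appeal of SIMEX is that no model for the covariate process is required; only the error variance enters.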
Experimental Concepts for Testing Seismic Hazard Models
NASA Astrophysics Data System (ADS)
Marzocchi, W.; Jordan, T. H.
2015-12-01
Seismic hazard analysis is the primary interface through which useful information about earthquake rupture and wave propagation is delivered to society. To account for the randomness (aleatory variability) and limited knowledge (epistemic uncertainty) of these natural processes, seismologists must formulate and test hazard models using the concepts of probability. In this presentation, we will address the scientific objections that have been raised over the years against probabilistic seismic hazard analysis (PSHA). Owing to the paucity of observations, we must rely on expert opinion to quantify the epistemic uncertainties of PSHA models (e.g., in the weighting of individual models from logic-tree ensembles of plausible models). The main theoretical issue is a frequentist critique: subjectivity is immeasurable; ergo, PSHA models cannot be objectively tested against data; ergo, they are fundamentally unscientific. We have argued (PNAS, 111, 11973-11978) that the Bayesian subjectivity required for casting epistemic uncertainties can be bridged with the frequentist objectivity needed for pure significance testing through "experimental concepts." An experimental concept specifies collections of data, observed and not yet observed, that are judged to be exchangeable (i.e., with a joint distribution independent of the data ordering) when conditioned on a set of explanatory variables. We illustrate, through concrete examples, experimental concepts useful in the testing of PSHA models for ontological errors in the presence of aleatory variability and epistemic uncertainty. In particular, we describe experimental concepts that lead to exchangeable binary sequences that are statistically independent but not identically distributed, showing how the Bayesian concept of exchangeability generalizes the frequentist concept of experimental repeatability. We also address the issue of testing PSHA models using spatially correlated data.
Kovalchik, Stephanie A; Varadhan, Ravi; Weiss, Carlos O
2013-12-10
Understanding how individuals vary in their response to treatment is an important task of clinical research. For standard regression models, a proportional interactions model first described by Follmann and Proschan (1999) offers a powerful approach for identifying effect modification in a randomized clinical trial when multiple variables influence treatment response. In this paper, we present a framework for using the proportional interactions model in the context of a parallel-arm clinical trial with multiple prespecified candidate effect modifiers. To protect against model misspecification, we propose a selection strategy that considers all possible proportional interactions models. We develop a modified Bonferroni correction for multiple testing that accounts for the positive correlation among candidate models. We describe methods for constructing a confidence interval for the proportionality parameter. In simulation studies, we show that our modified Bonferroni adjustment controls familywise error and has greater power to detect proportional interactions compared with multiplicity-corrected subgroup analyses. We demonstrate our methodology by using the Studies of Left Ventricular Dysfunction Treatment trial, a placebo-controlled randomized clinical trial of the efficacy of enalapril to reduce the risk of death or hospitalization in chronic heart failure patients. An R package called anoint is available for implementing the proportional interactions methodology.
ERIC Educational Resources Information Center
Liu, Xing
2008-01-01
The proportional odds (PO) model, which is also called the cumulative odds model (Agresti, 1996, 2002; Armstrong & Sloan, 1989; Long, 1997; Long & Freese, 2006; McCullagh, 1980; McCullagh & Nelder, 1989; Powers & Xie, 2000; O'Connell, 2006), is one of the most commonly used models for the analysis of ordinal categorical data and comes from the class…
Sasidharan, Lekshmi; Menéndez, Mónica
2014-11-01
The conventional methods for crash injury severity analyses include either treating the severity data as ordered (e.g. ordered logit/probit models) or non-ordered (e.g. multinomial models). The ordered models require the data to meet proportional odds assumption, according to which the predictors can only have the same effect on different levels of the dependent variable, which is often not the case with crash injury severities. On the other hand, non-ordered analyses completely ignore the inherent hierarchical nature of crash injury severities. Therefore, treating the crash severity data as either ordered or non-ordered results in violating some of the key principles. To address these concerns, this paper explores the application of a partial proportional odds (PPO) model to bridge the gap between ordered and non-ordered severity modeling frameworks. The PPO model allows the covariates that meet the proportional odds assumption to affect different crash severity levels with the same magnitude; whereas the covariates that do not meet the proportional odds assumption can have different effects on different severity levels. This study is based on a five-year (2008-2012) national pedestrian safety dataset for Switzerland. A comparison between the application of PPO models, ordered logit models, and multinomial logit models for pedestrian injury severity evaluation is also included here. The study shows that PPO models outperform the other models considered based on different evaluation criteria. Hence, it is a viable method for analyzing pedestrian crash injury severities.
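The partial proportional odds (PPO) structure can be written down compactly: cumulative logits share one coefficient vector for the covariates that satisfy the proportional odds assumption, while the remaining covariates get level-specific coefficients. A minimal probability calculator (generic sketch with invented parameter names, not the study's model):

```python
import math

def ppo_probs(x_po, x_npo, alphas, beta_po, betas_npo):
    """Partial proportional odds: P(Y <= j) = logistic(alpha_j
    - x_po.beta_po - x_npo.beta_npo[j]); covariates in x_po share one
    coefficient vector across severity levels, covariates in x_npo get
    level-specific coefficients. Returns the category probabilities."""
    def logistic(z):
        return 1.0 / (1.0 + math.exp(-z))
    cum = []
    for j, a in enumerate(alphas):
        z = (a - sum(b * v for b, v in zip(beta_po, x_po))
               - sum(b * v for b, v in zip(betas_npo[j], x_npo)))
        cum.append(logistic(z))
    cum.append(1.0)  # P(Y <= highest category) = 1
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]
```

One known caveat of relaxing proportionality this way: with level-specific coefficients the fitted cumulative curves can cross for some covariate values, which the shared-coefficient covariates avoid by construction.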
Decision-Tree Models of Categorization Response Times, Choice Proportions, and Typicality Judgments
ERIC Educational Resources Information Center
Lafond, Daniel; Lacouture, Yves; Cohen, Andrew L.
2009-01-01
The authors present 3 decision-tree models of categorization adapted from T. Trabasso, H. Rollins, and E. Shaughnessy (1971) and use them to provide a quantitative account of categorization response times, choice proportions, and typicality judgments at the individual-participant level. In Experiment 1, the decision-tree models were fit to…
NASA Astrophysics Data System (ADS)
Wright, Vince
2014-03-01
Pirie and Kieren (1989, For the Learning of Mathematics, 9(3), 7-11; 1992, Journal of Mathematical Behavior, 11, 243-257; 1994a, Educational Studies in Mathematics, 26, 61-86; 1994b, For the Learning of Mathematics, 14(1), 39-43) created a model (P-K) that describes a dynamic and recursive process by which learners develop their mathematical understanding. The model was adapted to create the teaching model used in the New Zealand Numeracy Development Projects (Ministry of Education, 2007). A case study of a 3-week sequence of instruction with a group of eight 12- and 13-year-old students provided the data. The teacher/researcher used folding back to materials and images and progressing from materials to imaging to number properties to assist students to develop their understanding of frequencies as proportions. The data show that successful implementation of the model is dependent on the teacher noticing and responding to the layers of understanding demonstrated by the students and the careful selection of materials, problems and situations. It supports the use of the model as a useful part of teachers' instructional strategies and the importance of pedagogical content knowledge to the quality of the way the model is used.
Fuzzy portfolio model with fuzzy-input return rates and fuzzy-output proportions
NASA Astrophysics Data System (ADS)
Tsaur, Ruey-Chyn
2015-02-01
In the finance market, a short-term investment strategy is usually applied in portfolio selection in order to reduce investment risk; however, the economy is uncertain and the investment period is short. Further, an investor has incomplete information for selecting a portfolio with crisp proportions for each chosen security. In this paper we present a new method of constructing a fuzzy portfolio model for the parameters of fuzzy-input return rates and fuzzy-output proportions, based on possibilistic mean-standard deviation models. Furthermore, we consider both excess and shortage of investment in different economic periods by using a fuzzy constraint for the sum of the fuzzy proportions, and we also refer to risks of securities investment and the vagueness of incomplete information during periods of economic depression for the portfolio selection. Finally, we present a numerical example of a portfolio selection problem to illustrate the proposed model, and a sensitivity analysis is performed based on the results.
Grievink, Liat Shavit; Penny, David; Hendy, Michael D.; Holland, Barbara R.
2010-01-01
Commonly used phylogenetic models assume a homogeneous process through time in all parts of the tree. However, it is known that these models can be too simplistic as they do not account for nonhomogeneous lineage-specific properties. In particular, it is now widely recognized that as constraints on sequences evolve, the proportion and positions of variable sites can vary between lineages, causing heterotachy. The extent to which this model misspecification affects tree reconstruction is still unknown. Here, we evaluate the effect of changes in the proportions and positions of variable sites on model fit and tree estimation. We consider 5 current models of nucleotide sequence evolution in a Bayesian Markov chain Monte Carlo framework as well as maximum parsimony (MP). We show that for a tree with 4 lineages where 2 nonsister taxa undergo a change in the proportion of variable sites, tree reconstruction under the best-fitting model, which is chosen using a relative test, often results in the wrong tree. In this case, we found that an absolute test of model fit is a better predictor of tree estimation accuracy. We also found further evidence that MP is not immune to heterotachy. In addition, we show that increased sampling of taxa that have undergone a change in proportion and positions of variable sites is critical for accurate tree reconstruction. PMID:20525636
Satellite image collection modeling for large area hazard emergency response
NASA Astrophysics Data System (ADS)
Liu, Shufan; Hodgson, Michael E.
2016-08-01
Timely collection of critical hazard information is the key to intelligent and effective hazard emergency response decisions. Satellite remote sensing imagery provides an effective way to collect critical information. Natural hazards, however, often have large impact areas - larger than a single satellite scene. Additionally, the hazard impact area may be discontinuous, particularly in flooding or tornado hazard events. In this paper, a spatial optimization model is proposed to solve the large area satellite image acquisition planning problem in the context of hazard emergency response. In the model, a large hazard impact area is represented as multiple polygons, and image collection priorities for different portions of the impact area are addressed. The optimization problem is solved with an exact algorithm. Application results demonstrate that the proposed method can address the satellite image acquisition planning problem. A spatial decision support system supporting the optimization model was developed. Several examples of image acquisition problems are used to demonstrate the complexity of the problem and derive optimized solutions.
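The flavour of the planning problem, choosing scenes to cover the most priority-weighted impact area under a collection budget, can be illustrated with a toy greedy selection. The paper solves the problem exactly; this sketch only illustrates the objective, and all names and data structures are invented.

```python
def plan_acquisitions(scenes, max_scenes):
    """Greedy sketch of acquisition planning: each scene lists the impact
    cells it would image, weighted by collection priority; repeatedly pick
    the scene with the largest not-yet-covered priority weight."""
    chosen, covered = [], set()
    for _ in range(max_scenes):
        best = max(scenes, key=lambda s: sum(
            w for cell, w in s["cells"].items() if cell not in covered))
        gain = sum(w for c, w in best["cells"].items() if c not in covered)
        if gain <= 0:
            break  # nothing useful left to image
        chosen.append(best["id"])
        covered |= set(best["cells"])
    return chosen
```

An exact formulation would instead pose this as an integer program over scene-selection variables, which is closer in spirit to the algorithm the abstract mentions.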
Hazardous gas model evaluation with field observations
NASA Astrophysics Data System (ADS)
Hanna, S. R.; Chang, J. C.; Strimaitis, D. G.
Fifteen hazardous gas models were evaluated using data from eight field experiments. The models include seven publicly available models (AFTOX, DEGADIS, HEGADAS, HGSYSTEM, INPUFF, OB/DG and SLAB), six proprietary models (AIRTOX, CHARM, FOCUS, GASTAR, PHAST and TRACE), and two "benchmark" analytical models (the Gaussian Plume Model and the analytical approximations to the Britter and McQuaid Workbook nomograms). The field data were divided into three groups—continuous dense gas releases (Burro LNG, Coyote LNG, Desert Tortoise NH3 gas and aerosols, Goldfish HF gas and aerosols, and Maplin Sands LNG), continuous passive gas releases (Prairie Grass and Hanford), and instantaneous dense gas releases (Thorney Island freon). The dense gas models that produced the most consistent predictions of plume centerline concentrations across the dense gas data sets are the Britter and McQuaid, CHARM, GASTAR, HEGADAS, HGSYSTEM, PHAST, SLAB and TRACE models, with relative mean biases of about ±30% or less and magnitudes of relative scatter that are about equal to the mean. The dense gas models tended to overpredict the plume widths and underpredict the plume depths by about a factor of two. All models except GASTAR, TRACE, and the area source version of DEGADIS perform fairly well with the continuous passive gas data sets. Some sensitivity studies were also carried out. It was found that three of the more widely used publicly-available dense gas models (DEGADIS, HGSYSTEM and SLAB) predicted increases in concentration of about 70% as roughness length decreased by an order of magnitude for the Desert Tortoise and Goldfish field studies. It was also found that none of the dense gas models that were considered came close to simulating the observed factor of two increase in peak concentrations as averaging time decreased from several minutes to 1 s. Because of their assumption that a concentrated dense gas core existed that was unaffected by variations in averaging time, the dense gas
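The bias and scatter measures quoted above belong to a standard family of paired model-evaluation statistics for observed versus predicted concentrations. A pure-Python sketch of four common ones (generic formulas often used in this literature, not necessarily the study's exact definitions):

```python
import math

def evaluation_stats(observed, predicted):
    """Paired model-evaluation metrics: fractional bias (FB), geometric
    mean bias (MG), geometric variance (VG), and the fraction of
    predictions within a factor of two of observations (FAC2).
    All concentrations must be positive."""
    n = len(observed)
    co, cp = sum(observed) / n, sum(predicted) / n
    fb = (co - cp) / (0.5 * (co + cp))
    logs = [math.log(o / p) for o, p in zip(observed, predicted)]
    mg = math.exp(sum(logs) / n)
    vg = math.exp(sum(l * l for l in logs) / n)
    fac2 = sum(0.5 <= p / o <= 2.0
               for o, p in zip(observed, predicted)) / n
    return fb, mg, vg, fac2
```

A perfect model gives FB = 0, MG = VG = 1, and FAC2 = 1; a relative mean bias within ±30%, as reported above, corresponds roughly to |FB| ≲ 0.3.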
Regression model estimation of early season crop proportions: North Dakota, some preliminary results
NASA Technical Reports Server (NTRS)
Lin, K. K. (Principal Investigator)
1982-01-01
To estimate crop proportions early in the season, an approach is proposed based on: use of a regression-based prediction equation to obtain an a priori estimate for specific major crop groups; modification of this estimate using current-year LANDSAT and weather data; and a breakdown of the major crop groups into specific crops by regression models. Results from the development and evaluation of appropriate regression models for the first portion of the proposed approach are presented. The results show that the model predicts 1980 crop proportions very well at both county and crop reporting district levels. In terms of planted acreage, the model underpredicted the 1980 published county-level planted acreage by 9.1 percent. At the crop reporting district level it predicted the 1980 published planted acreage almost exactly, overpredicting by just 0.92 percent.
Lahar Hazard Modeling at Tungurahua Volcano, Ecuador
NASA Astrophysics Data System (ADS)
Sorensen, O. E.; Rose, W. I.; Jaya, D.
2003-04-01
LAHARZ, a GIS program that delineates lahar-hazard zones using a digital elevation model (DEM), was used to construct a hazard map for the volcano. The 10 meter resolution DEM was constructed for Tungurahua Volcano using scanned topographic lines obtained from the GIS Department at the Escuela Politécnica Nacional, Quito, Ecuador. The steep topographic gradients and rapid downcutting of most rivers draining the edifice prevent the deposition of lahars on the lower flanks of Tungurahua. Modeling confirms the high degree of flow channelization in the deep Tungurahua canyons. Inundation zones observed and shown by LAHARZ at Baños allow identification of safe zones within the city which would provide safety from even the largest magnitude lahar expected.
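LAHARZ delineates inundation from semi-empirical scaling laws that relate inundated cross-sectional and planimetric areas to lahar volume (the widely cited Iverson, Schilling and Vallance calibration). A one-function sketch of those relations (illustrative; the program itself applies them cell by cell along DEM drainages):

```python
def laharz_areas(volume_m3, c_cross=0.05, c_plan=200.0):
    """Semi-empirical LAHARZ scaling laws: inundated cross-sectional area
    A = 0.05 * V**(2/3) and planimetric area B = 200 * V**(2/3), with V
    the lahar volume in cubic metres and areas in square metres.
    The default coefficients are the published lahar calibration."""
    v23 = volume_m3 ** (2.0 / 3.0)
    return c_cross * v23, c_plan * v23
```

Running the relations for a range of design volumes is what produces the nested hazard zones shown on maps like the one described above.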
Modeling seismic hazard in the Lower Rhine Graben using a fault-based source model
NASA Astrophysics Data System (ADS)
Vanneste, Kris; Vleminckx, Bart; Verbeeck, Koen; Camelbeeck, Thierry
2013-04-01
The Lower Rhine Graben (LRG) is an active tectonic structure in intraplate NW Europe. It is characterized by NW-SE oriented normal faults, and moderate but rather continuous seismic activity. Probabilistic seismic hazard assessments (PSHA) in this region have hitherto been based on area source models, in which the LRG is modeled as a single or a small number of seismotectonic zones, where the occurrence of earthquakes is assumed to be uniform. Hazard engines usually model earthquakes in area sources as point sources or finite ruptures in a horizontal plane at a fixed depth. In the past few years, efforts have increasingly been directed to using fault sources in PSHA, in order to obtain more realistic patterns of ground motion. This requires an inventory of all fault sources, and definition of their physical properties (at least length, width, strike, dip, rake, slip rate, and maximum magnitude). The LRG is one of the few regions in intraplate NW Europe where seismic activity can be linked to active faults. In the frame of the EC project SHARE ("Seismic Hazard Harmonization in Europe", http://www.share-eu.org/), we have compiled the first parameterized fault model for the LRG that can be used in PSHA studies. We construct the magnitude-frequency distribution (MFD) of each fault from two contributions: 1) up to the largest observed magnitude (M=5.7), we use the MFD determined from the historical and instrumental earthquake catalog, weighted in proportion to the total moment rate, and 2) the frequency of the maximum earthquake predicted by the fault model. We consider the ground-motion prediction equations (GMPE) that were selected in the SHARE project for active shallow crust. This selection includes GMPE's with different distance metrics, the main difference being whether depth of rupture is taken into account or not. Seismic hazard is computed with OpenQuake (http://openquake.org/), an open-source hazard and risk engine that is developed in the frame of the Global
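The magnitude-frequency bookkeeping described above can be sketched with a truncated Gutenberg-Richter relation plus the Hanks-Kanamori moment conversion, which is how catalog rates are typically weighted by moment rate. Parameter names are illustrative and this is not the SHARE model's code.

```python
def gr_rates(a, b, m_min, m_max, dm=0.1):
    """Incremental Gutenberg-Richter rates between m_min and m_max:
    rate of magnitude bin [m, m + dm) = 10**(a - b*m) - 10**(a - b*(m + dm)),
    i.e. the difference of cumulative annual rates N(>=M) = 10**(a - b*M)."""
    rates, m = [], m_min
    while m < m_max - 1e-9:
        rates.append((m, 10 ** (a - b * m) - 10 ** (a - b * (m + dm))))
        m += dm
    return rates

def moment_rate(rates, dm=0.1):
    """Total seismic moment rate (N*m per year) using the Hanks-Kanamori
    relation M0 = 10**(1.5*M + 9.05), evaluated at bin centres."""
    return sum(r * 10 ** (1.5 * (m + dm / 2) + 9.05) for m, r in rates)
```

Because the incremental rates telescope, their sum reproduces the cumulative rate difference between the bounding magnitudes, a useful consistency check when mixing catalog-based and fault-based MFD contributions.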
Pineda, M; Weijer, C J; Eftimie, R
2015-04-07
Understanding the mechanisms that control tissue morphogenesis and homeostasis is a central goal of developmental biology and also has great relevance for our understanding of various diseases, including cancer. A model organism that is widely used to study the control of tissue morphogenesis and proportioning is Dictyostelium discoideum. While there are mathematical models describing the role of chemotactic cell motility in the assembly and morphogenesis of multicellular Dictyostelium tissues, as well as models addressing possible mechanisms of proportion regulation, there are no models incorporating both of these key aspects of development. In this paper, we introduce a 1D hyperbolic model to investigate the roles of two morphogens, DIF and cAMP, in cell movement, cell sorting, cell-type differentiation and proportioning in Dictyostelium discoideum. First, we use the non-spatial version of the model to study cell-type transdifferentiation. We perform a steady-state analysis and show that, depending on the shape of the differentiation rate functions, multiple steady-state solutions may occur. Then we incorporate spatial dynamics into the model and investigate the transdifferentiation and spatial positioning of cells inside the newly formed structures, following the removal of the prestalk or prespore region of a Dictyostelium slug. We show that in isolated prespore fragments, a tipped mound-like aggregate can be formed after a transdifferentiation from prespore to prestalk cells and following the sorting of prestalk cells to the centre of the aggregate. For isolated prestalk fragments, we show the formation of a slug-like structure containing the usual anterior-posterior pattern of prestalk and prespore cells.
Coats, D.W.; Murray, R.C.
1985-08-01
Lawrence Livermore National Laboratory (LLNL) has developed seismic and wind hazard models for the Office of Nuclear Safety (ONS), Department of Energy (DOE). The work is part of a three-phase effort aimed at establishing uniform building design criteria for seismic and wind hazards at DOE sites throughout the United States. This report summarizes the final wind/tornado hazard models recommended for each site and the methodology used to develop these models. Final seismic hazard models have been published separately by TERA Corporation. In the final phase, it is anticipated that the DOE will use the hazard models to establish uniform criteria for the design and evaluation of critical facilities. 19 refs., 3 figs., 9 tabs.
Outcome-Dependent Sampling Design and Inference for Cox’s Proportional Hazards Model
Yu, Jichang; Liu, Yanyan; Cai, Jianwen; Sandler, Dale P.; Zhou, Haibo
2016-01-01
We propose a cost-effective outcome-dependent sampling (ODS) design for failure time data and develop an efficient inference procedure for data collected with this design. To account for the biased sampling scheme, we derive estimators from a weighted partial likelihood estimating equation. The proposed estimators for the regression parameters are shown to be consistent and asymptotically normally distributed. A criterion that can be used to optimally implement the ODS design in practice is proposed and studied. The small-sample performance of the proposed method is evaluated by simulation studies. The proposed design and inference procedure are shown to be statistically more powerful than existing alternative designs with the same sample sizes. We illustrate the proposed method with real data from the Cancer Incidence and Mortality of Uranium Miners Study. PMID:28090134
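A generic inverse-probability-weighted Cox partial likelihood of the kind this abstract describes can be sketched on simulated data. This is not the authors' estimator or the uranium-miners data; the sample is simulated, and unit weights are used for simplicity (biased designs would plug in inverse sampling probabilities):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 400
x = rng.normal(size=n)
beta_true = 0.7
t = rng.exponential(1.0 / np.exp(beta_true * x))   # event hazard exp(beta*x)
c = rng.exponential(2.0, size=n)                   # independent censoring
time = np.minimum(t, c)
event = (t <= c).astype(float)
w = np.ones(n)   # inverse-probability weights; all 1 = simple random sample

def neg_weighted_partial_loglik(beta, time, event, x, w):
    """Weighted Cox partial likelihood (Breslow-type, no tie correction)."""
    order = np.argsort(time)
    time, event, x, w = time[order], event[order], x[order], w[order]
    eta = beta[0] * x
    # risk-set sums: for the i-th ordered time, sum over subjects i..n-1
    risk_sums = np.cumsum((w * np.exp(eta))[::-1])[::-1]
    ll = np.sum(w * event * (eta - np.log(risk_sums)))
    return -ll

fit = minimize(neg_weighted_partial_loglik, x0=[0.0],
               args=(time, event, x, w), method="BFGS")
beta_hat = fit.x[0]
```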
Hybrid internal model control and proportional control of chaotic dynamical systems.
Qi, Dong-lian; Yao, Liang-bin
2004-01-01
A new chaos control method is proposed to take advantage of chaos or to avoid it. A hybrid learning scheme combining Internal Model Control and Proportional Control is introduced. In order to attain the desired robust performance and ensure the system's stability, adaptive momentum algorithms are also developed. By properly designing the neural-network plant model and the neural-network controller, the chaotic dynamical system is controlled while the parameters of the BP neural network are modified. Taking the Lorenz chaotic system as an example, the results show that chaotic dynamical systems can be stabilized at desired orbits by this control strategy.
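The paper's scheme is a neural-network internal model controller, which is not reproduced here. As a minimal sketch of the proportional-feedback half of the idea only, plain proportional state feedback (the gain and step size are illustrative choices, not the paper's) can pin the Lorenz system to its unstable equilibrium C+:

```python
import numpy as np

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
x_star = np.array([np.sqrt(beta * (rho - 1)),
                   np.sqrt(beta * (rho - 1)),
                   rho - 1.0])                 # unstable fixed point C+

def lorenz_controlled(s, k):
    """Lorenz flow plus proportional feedback u = -k (s - s*)."""
    x, y, z = s
    drift = np.array([sigma * (y - x),
                      x * (rho - z) - y,
                      x * y - beta * z])
    return drift - k * (s - x_star)

def rk4(f, s, dt, k):
    k1 = f(s, k)
    k2 = f(s + dt / 2 * k1, k)
    k3 = f(s + dt / 2 * k2, k)
    k4 = f(s + dt * k3, k)
    return s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

s = np.array([1.0, 1.0, 1.0])
for step in range(20000):
    k_gain = 60.0 if step > 10000 else 0.0   # controller switched on halfway
    s = rk4(lorenz_controlled, s, 1e-3, k_gain)
err = np.linalg.norm(s - x_star)
```

Before the switch the trajectory wanders on the chaotic attractor; after it, the feedback term dominates and the state converges to C+.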
Modelling direct tangible damages due to natural hazards
NASA Astrophysics Data System (ADS)
Kreibich, H.; Bubeck, P.
2012-04-01
Europe has witnessed a significant increase in direct damages from natural hazards. A further damage increase is expected due to the on-going accumulation of people and economic assets in risk-prone areas and the effects of climate change, for instance, on the severity and frequency of drought events in the Mediterranean basin. In order to mitigate the impact of natural hazards, improved risk management based on reliable risk analysis is needed. In particular, much research effort is still needed to improve the modelling of damage due to natural hazards. In comparison with hazard modelling, simple approaches still dominate damage assessments, mainly due to limitations in available data and in knowledge of damaging processes and influencing factors. Within the EU project ConHaz, methods as well as data sources and terminology for damage assessments were compiled, systematized and analysed. Similarities and differences between the approaches concerning floods, alpine hazards, coastal hazards and droughts were identified. Approaches for significant improvements of direct tangible damage modelling, with a particular focus on cross-hazard learning, will be presented. Examples from different hazards and countries will be given of how to improve damage databases and the understanding of damaging processes, how to refine damage models, and how to achieve improvements via validation and uncertainty analyses.
Context-Specific Proportion Congruency Effects: An Episodic Learning Account and Computational Model
Schmidt, James R.
2016-01-01
In the Stroop task, participants identify the print color of color words. The congruency effect is the observation that response times and errors are increased when the word and color are incongruent (e.g., the word “red” in green ink) relative to when they are congruent (e.g., “red” in red). The proportion congruent (PC) effect is the finding that congruency effects are reduced when trials are mostly incongruent rather than mostly congruent. This PC effect can be context-specific. For instance, if trials are mostly incongruent when presented in one location and mostly congruent when presented in another location, the congruency effect is smaller for the former location. Typically, PC effects are interpreted in terms of strategic control of attention in response to conflict, termed conflict adaptation or conflict monitoring. In the present manuscript, however, an episodic learning account is presented for context-specific proportion congruent (CSPC) effects. In particular, it is argued that context-specific contingency learning can explain part of the effect, and context-specific rhythmic responding can explain the rest. Both contingency-based and temporal-based learning can parsimoniously be conceptualized within an episodic learning framework. An adaptation of the Parallel Episodic Processing model is presented. This model successfully simulates CSPC effects, both for contingency-biased and contingency-unbiased (transfer) items. The same fixed-parameter model can explain a range of other findings from the learning, timing, binding, practice, and attentional control domains. PMID:27899907
A model based on crowdsourcing for detecting natural hazards
NASA Astrophysics Data System (ADS)
Duan, J.; Ma, C.; Zhang, J.; Liu, S.; Liu, J.
2015-12-01
Remote sensing technology provides a new method for the detection, early warning, mitigation and relief of natural hazards. Given the suddenness and unpredictability of the location of natural hazards, as well as the practical demands of hazard response work, this article proposes an evaluation model for the remote sensing detection of natural hazards based on crowdsourcing. Firstly, using the crowdsourcing model and with the help of the Internet and the power of hundreds of millions of Internet users, the evaluation model provides visual interpretation of high-resolution remote sensing images of hazard areas and collects massive amounts of valuable disaster data; secondly, it adopts a strategy of dynamic voting consistency to evaluate the disaster data provided by the crowdsourcing workers; thirdly, it pre-estimates the disaster severity with a disaster pre-evaluation model based on regional buffers; lastly, it actuates the corresponding expert system according to the forecast results. This model breaks the boundaries between geographic information professionals and the public, enables genuine public participation and citizen science, and improves the accuracy and timeliness of hazard assessment results.
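The "dynamic voting consistency" strategy is not specified in detail in the abstract. A minimal sketch of consistency-style vote aggregation, with a hypothetical quorum threshold, might look like:

```python
from collections import Counter

def majority_label(votes, quorum=0.6):
    """Accept a crowdsourced label once one answer holds at least the
    quorum share of the votes cast so far; otherwise keep collecting."""
    tally = Counter(votes)
    label, count = tally.most_common(1)[0]
    return label if count / len(votes) >= quorum else "undecided"

# Four workers interpret the same image tile of a disaster area:
label = majority_label(["collapsed", "collapsed", "intact", "collapsed"])
```

With three of four workers agreeing, the tile is labeled "collapsed"; a 1:1 split would remain "undecided" and trigger more votes.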
Network growth models: A behavioural basis for attachment proportional to fitness
NASA Astrophysics Data System (ADS)
Bell, Michael; Perera, Supun; Piraveenan, Mahendrarajah; Bliemer, Michiel; Latty, Tanya; Reid, Chris
2017-02-01
Several growth models have been proposed in the literature for scale-free complex networks, with a range of fitness-based attachment models gaining prominence recently. However, the processes by which such fitness-based attachment behaviour can arise are less well understood, making it difficult to compare the relative merits of such models. This paper analyses an evolutionary mechanism that would give rise to a fitness-based attachment process. In particular, it is proven by analytical and numerical methods that in homogeneous networks, the minimisation of maximum exposure to node unfitness leads to attachment probabilities that are proportional to node fitness. This result is then extended to heterogeneous networks, with supply chain networks being used as an example.
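The attachment rule itself (not the exposure-minimisation argument that derives it) can be sketched in a few lines; the fitness values are arbitrary examples:

```python
import numpy as np

rng = np.random.default_rng(1)
fitness = np.array([1.0, 2.0, 3.0, 4.0])   # fitness of existing nodes
p = fitness / fitness.sum()                # attachment probabilities

# Each arriving node attaches to an existing node with probability
# proportional to that node's fitness:
n_trials = 100_000
choices = rng.choice(len(fitness), size=n_trials, p=p)
observed = np.bincount(choices, minlength=len(fitness)) / n_trials
```

Over many trials the observed attachment frequencies match the fitness-proportional probabilities.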
2015 USGS Seismic Hazard Model for Induced Seismicity
NASA Astrophysics Data System (ADS)
Petersen, M. D.; Mueller, C. S.; Moschetti, M. P.; Hoover, S. M.; Ellsworth, W. L.; Llenos, A. L.; Michael, A. J.
2015-12-01
Over the past several years, the seismicity rate has increased markedly in multiple areas of the central U.S. Studies have tied the majority of this increased activity to wastewater injection in deep wells and hydrocarbon production. These earthquakes are induced by human activities that change rapidly based on economic and policy decisions, making them difficult to forecast. Our 2014 USGS National Seismic Hazard Model and previous models are intended to provide the long-term hazard (2% probability of exceedance in 50 years) and are based on seismicity rates and patterns observed mostly from tectonic earthquakes. However, potentially induced earthquakes were identified in 14 regions that were not included in the earthquake catalog used for constructing the 2014 model. We recognized the importance of considering these induced earthquakes in a separate hazard analysis, and as a result in April 2015 we released preliminary models that explored the impact of this induced seismicity on the hazard. Several factors are important in determining the hazard from induced seismicity: period of the catalog that optimally forecasts the next year's activity, earthquake magnitude-rate distribution, earthquake location statistics, maximum magnitude, ground motion models, and industrial drivers such as injection rates. The industrial drivers are not currently available in a form that we can implement in a 1-year model. Hazard model inputs have been evaluated by a broad group of scientists and engineers to assess the range of acceptable models. Results indicate that next year's hazard is significantly higher by more than a factor of three in Oklahoma, Texas, and Colorado compared to the long-term 2014 hazard model. These results have raised concern about the impacts of induced earthquakes on the built environment and have led to many engineering and policy discussions about how to mitigate these effects for the more than 7 million people that live near areas of induced seismicity.
Li, Haocheng; Kozey-Keadle, Sarah; Kipnis, Victor; Carroll, Raymond J
2016-01-01
Motivated by physical activity data obtained from the BodyMedia FIT device (www.bodymedia.com), we take a functional data approach for longitudinal studies with continuous proportional outcomes. The functional structure depends on three factors. In our three-factor model, the regression structures are specified as curves measured at various factor-points with random effects that have a correlation structure. The random curve for the continuous factor is summarized using a few important principal components. The difficulties in handling the continuous proportion variables are solved by using a quasilikelihood-type approximation. We develop an efficient algorithm to fit the model, which involves the selection of the number of principal components. The method is evaluated empirically by a simulation study. This approach is applied to the BodyMedia data with 935 males and 84 consecutive days of observation, for a total of 78,540 observations. We show that sleep efficiency increases with increasing physical activity, while its variance decreases at the same time.
Cai, Gaigai; Chen, Xuefeng; Li, Bing; Chen, Baojia; He, Zhengjia
2012-01-01
The reliability of cutting tools is critical to machining precision and production efficiency. The conventional statistic-based reliability assessment method aims at providing a general and overall estimation of reliability for a large population of identical units under given and fixed conditions. However, it has limited effectiveness in depicting the operational characteristics of a cutting tool. To overcome this limitation, this paper proposes an approach to assess the operation reliability of cutting tools. A proportional covariate model is introduced to construct the relationship between operation reliability and condition monitoring information. The wavelet packet transform and an improved distance evaluation technique are used to extract sensitive features from vibration signals, and a covariate function is constructed based on the proportional covariate model. Ultimately, the failure rate function of the cutting tool being assessed is calculated using the baseline covariate function obtained from a small sample of historical data. Experimental results and a comparative study show that the proposed method is effective for assessing the operation reliability of cutting tools. PMID:23201980
Bejan-Angoulvant, Theodora; Bouvier, Anne-Marie; Bossard, Nadine; Belot, Aurelien; Jooste, Valérie; Launoy, Guy; Remontet, Laurent
2008-01-01
Hazard regression models and cure rate models can be advantageously used in cancer relative survival analysis. We explored the advantages and limits of these two models in colon cancer, focusing on the prognostic impact of the year of diagnosis on survival according to the TNM stage at diagnosis. The analysis concerned 9,998 patients from three French registries. In the hazard regression model, the baseline excess death hazard and the time-dependent effects of covariates were modelled using regression splines. The cure rate model estimated the proportion of 'cured' patients and the excess death hazard in 'non-cured' patients. The effects of the year of diagnosis on these parameters were estimated for each TNM cancer stage. With the hazard regression model, the excess death hazard decreased significantly with more recent years of diagnosis (hazard ratio, HR 0.97 in stage III and 0.98 in stage IV, P < 0.001). In these advanced stages, this favourable effect was limited to the first years of follow-up. With the cure rate model, recent years of diagnosis were significantly associated with longer survival in 'non-cured' patients with advanced stages (HR 0.95 in stage III and 0.97 in stage IV, P < 0.001) but had no significant effect on cure (odds ratio, OR 0.99 in stages III and IV, P > 0.5). The two models were complementary and concordant in estimating colon cancer survival and the effects of covariates. They provided two different points of view of the same phenomenon: recent years of diagnosis had a favourable effect on survival, but not on cure.
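The cure rate model's survival function can be illustrated directly: the population survival is a mixture of a cured fraction and the survival of the non-cured. The cure fraction and excess hazard below are made-up values for illustration, not the paper's estimates for colon cancer:

```python
import numpy as np

def mixture_cure_survival(t, pi_cured, hazard_uncured):
    """Mixture cure model: S(t) = pi + (1 - pi) * S_u(t), with an
    exponential survival S_u(t) = exp(-lambda * t) for the non-cured."""
    return pi_cured + (1.0 - pi_cured) * np.exp(-hazard_uncured * t)

t = np.linspace(0.0, 30.0, 301)   # years since diagnosis
s = mixture_cure_survival(t, pi_cured=0.45, hazard_uncured=0.25)
```

The curve starts at 1 and plateaus at the cure fraction, which is exactly the feature the cure model exploits to separate "effect on cure" from "effect on survival of the non-cured".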
NASA Astrophysics Data System (ADS)
Verma, Rahul K.; Ogihara, Yuki; Kuwabara, Toshihiko; Chung, Kwansoo
2011-08-01
In this work, two-stage and tension-compression-tension uniaxial tests were performed as non-proportional/non-monotonous deformation experiments on a cold-rolled ultra-high-strength dual-phase steel sheet, DP780. Deformation behaviors under such deformation paths were found to differ from those of the ultra-low-carbon single-phase steels observed by Verma et al. (Int. J. Plast. 2011, 82-101). To model the newly observed deformation behaviors, the combined-type constitutive law previously proposed by Verma et al. (Int. J. Plast. 2011, 82-101) was successfully applied here. Permanent softening observed during reverse loading was properly characterized into the isotropic and kinematic hardening parts of the hardening law using tension-compression-tension test data. The cross effect observed in two-stage tests was also effectively incorporated into the constitutive law.
A high-resolution global flood hazard model
NASA Astrophysics Data System (ADS)
Sampson, Christopher C.; Smith, Andrew M.; Bates, Paul B.; Neal, Jeffrey C.; Alfieri, Lorenzo; Freer, Jim E.
2015-09-01
Floods are a natural hazard that affect communities worldwide, but to date the vast majority of flood hazard research and mapping has been undertaken by wealthy developed nations. As populations and economies have grown across the developing world, so too has demand from governments, businesses, and NGOs for modeled flood hazard data in these data-scarce regions. We identify six key challenges faced when developing a flood hazard model that can be applied globally and present a framework methodology that leverages recent cross-disciplinary advances to tackle each challenge. The model produces return period flood hazard maps at ~90 m resolution for the whole terrestrial land surface between 56°S and 60°N, and results are validated against high-resolution government flood hazard data sets from the UK and Canada. The global model is shown to capture between two thirds and three quarters of the area determined to be at risk in the benchmark data without generating excessive false positive predictions. When aggregated to ~1 km, mean absolute error in flooded fraction falls to ~5%. The full complexity global model contains an automatically parameterized subgrid channel network, and comparison to both a simplified 2-D only variant and an independently developed pan-European model shows the explicit inclusion of channels to be a critical contributor to improved model performance. While careful processing of existing global terrain data sets enables reasonable model performance in urban areas, adoption of forthcoming next-generation global terrain data sets will offer the best prospect for a step-change improvement in model performance.
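Benchmark comparisons like the one described (fraction of at-risk area captured without excessive false positives) are commonly summarized with binary-map scores; here is a sketch on toy flood extents, not the paper's actual validation code or data:

```python
import numpy as np

def flood_fit_scores(model, benchmark):
    """Binary-map comparison: hit rate, false-alarm ratio, and the
    critical success index often used in flood-map validation."""
    model, benchmark = model.astype(bool), benchmark.astype(bool)
    hits = np.sum(model & benchmark)
    misses = np.sum(~model & benchmark)
    false_alarms = np.sum(model & ~benchmark)
    hit_rate = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    csi = hits / (hits + misses + false_alarms)
    return hit_rate, far, csi

bench = np.zeros((10, 10)); bench[2:8, 2:8] = 1   # benchmark flood extent
model = np.zeros((10, 10)); model[3:9, 2:8] = 1   # modelled extent, shifted
hr, far, csi = flood_fit_scores(model, bench)
```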
Probabilistic modelling of rainfall induced landslide hazard assessment
NASA Astrophysics Data System (ADS)
Kawagoe, S.; Kazama, S.; Sarukkalige, P. R.
2010-06-01
To evaluate the frequency and distribution of landslide hazards over Japan, this study uses a probabilistic model based on multiple logistic regression analysis. The study particularly concerns several important physical parameters, namely hydraulic, geographical and geological parameters, which are considered to be influential in the occurrence of landslides. Sensitivity analysis confirmed that the hydrological parameter (hydraulic gradient) is the most influential factor in the occurrence of landslides. Therefore, the hydraulic gradient is used as the main hydraulic parameter, a dynamic factor that incorporates the effect of heavy rainfall and its return period. Using the constructed spatial data sets, a multiple logistic regression model is applied and landslide hazard probability maps are produced showing the spatio-temporal distribution of landslide hazard probability over Japan. To represent the landslide hazard on different temporal scales, extreme precipitation with 5-year, 30-year, and 100-year return periods is used for the evaluation. The results show that the highest landslide hazard probability exists in the mountain ranges on the western side of Japan (Japan Sea side), including the Hida, Kiso, Iide and Asahi mountain ranges, the south side of the Chugoku mountains, the south side of the Kyushu mountains, the Dewa mountain range, and the Hokuriku region. The landslide hazard probability maps developed in this study will assist authorities, policy makers and decision makers responsible for infrastructural planning and development, as they can identify landslide-susceptible areas and thus decrease landslide damage through proper preparation.
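The multiple logistic regression hazard model has the standard form p = 1 / (1 + exp(-(b0 + b·x))); a sketch with hypothetical coefficients and feature values (not the paper's fitted model):

```python
import numpy as np

def landslide_probability(coefs, intercept, features):
    """Multiple logistic regression: p = 1 / (1 + exp(-(b0 + b . x)))."""
    z = intercept + features @ coefs
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative coefficients for hydraulic gradient, slope, vegetation cover:
coefs = np.array([2.1, 0.8, -0.5])
intercept = -4.0
cells = np.array([[1.5, 1.0, 0.2],    # steep, wet, sparsely vegetated cell
                  [0.2, 0.3, 0.9]])   # gentle, dry, vegetated cell
p = landslide_probability(coefs, intercept, cells)
```

Mapping this probability over gridded covariates for each return-period rainfall would yield the kind of hazard probability map the study describes.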
Wang, Jun-Wei; Wu, Huai-Ning; Li, Han-Xiong
2012-06-01
In this paper, a distributed fuzzy control design based on Proportional-spatial Derivative (P-sD) is proposed for the exponential stabilization of a class of nonlinear spatially distributed systems described by parabolic partial differential equations (PDEs). Initially, a Takagi-Sugeno (T-S) fuzzy parabolic PDE model is proposed to accurately represent the nonlinear parabolic PDE system. Then, based on the T-S fuzzy PDE model, a novel distributed fuzzy P-sD state feedback controller is developed by combining the PDE theory and the Lyapunov technique, such that the closed-loop PDE system is exponentially stable with a given decay rate. The sufficient condition on the existence of an exponentially stabilizing fuzzy controller is given in terms of a set of spatial differential linear matrix inequalities (SDLMIs). A recursive algorithm based on the finite-difference approximation and the linear matrix inequality (LMI) techniques is also provided to solve these SDLMIs. Finally, the developed design methodology is successfully applied to the feedback control of the Fitz-Hugh-Nagumo equation.
ERIC Educational Resources Information Center
Wright, Vince
2014-01-01
Pirie and Kieren (1989 "For the Learning of Mathematics", 9(3), 7-11; 1992 "Journal of Mathematical Behavior", 11, 243-257; 1994a "Educational Studies in Mathematics", 26, 61-86; 1994b "For the Learning of Mathematics", 14(1), 39-43) created a model (P-K) that describes a dynamic and recursive process by which…
Model Uncertainty, Earthquake Hazard, and the WGCEP-2002 Forecast
NASA Astrophysics Data System (ADS)
Page, M. T.; Carlson, J. M.
2005-12-01
Model uncertainty is prevalent in Probabilistic Seismic Hazard Analysis (PSHA) because the true mechanism generating risk is unknown. While it is well-understood how to incorporate parameter uncertainty in PSHA, model uncertainty is more difficult to incorporate due to the high degree of dependence between different earthquake-recurrence models. We find that the method used by the 2002 Working Group on California Earthquake Probabilities (WG02) to combine the probability distributions given by multiple models has several adverse effects on their result. In particular, taking a linear combination of the various models ignores issues of model dependence and leads to large uncertainties in the final hazard estimate. Furthermore, choosing model weights based on data can systematically bias the final probability distribution. The weighting scheme of the WG02 report also depends upon an arbitrary ordering of models. In addition to analyzing current statistical problems, we present alternative methods for rigorously incorporating model uncertainty into PSHA.
Sun, Mei zhen; van Rijn, Clementina M; Liu, Yu xi; Wang, Ming zheng
2002-09-01
Rational polypharmacy of antiepileptic drugs is one of the treatment strategies for refractory epilepsy. To investigate whether it may be rational to combine carbamazepine (CBZ) and valproate (VPA), we tested both the anticonvulsant effect and the toxicity of combinations of CBZ and VPA in different dose proportions. The CBZ/VPA dose ratios were, respectively, 1:6.66, 1:10, 1:13.3 and 1:20. The median effective doses of monotherapy and polytherapy in the maximal electroshock seizure test and the median lethal doses (within 3 days after administration) were determined. These parameters were analyzed with the isobologram method. We found that the anticonvulsant effect of all combinations was additive. The toxicity of combinations 1, 2 and 3 (CBZ/VPA 1:6.66, 1:10, 1:13.3) was additive, but the toxicity of combination 4 (CBZ/VPA 1:20) was infra-additive. Thus, in mice, using this model, a combination of CBZ/VPA 1:20 has an advantage over each of the drugs alone.
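Isobologram analysis classifies a combination by the sum of its dose fractions relative to the single-drug median effective (or lethal) doses; the ED50 values and dose pair below are hypothetical, not the study's measurements:

```python
def interaction_index(dose_a, ed50_a, dose_b, ed50_b):
    """Isobologram interaction index for a jointly effective dose pair:
    1 -> additive, < 1 -> supra-additive, > 1 -> infra-additive."""
    return dose_a / ed50_a + dose_b / ed50_b

# Hypothetical ED50s (mg/kg) and a 1:10 CBZ:VPA combination dose pair:
idx = interaction_index(dose_a=4.0, ed50_a=8.0,
                        dose_b=40.0, ed50_b=80.0)
```

Here each drug contributes half of its own ED50, so the index is exactly 1 (additive); an index below 1 for toxicity, as reported for the 1:20 ratio, is the favorable infra-additive case.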
Agent-based Modeling with MATSim for Hazards Evacuation Planning
NASA Astrophysics Data System (ADS)
Jones, J. M.; Ng, P.; Henry, K.; Peters, J.; Wood, N. J.
2015-12-01
Hazard evacuation planning requires robust modeling tools and techniques, such as least cost distance or agent-based modeling, to gain an understanding of a community's potential to reach safety before event (e.g. tsunami) arrival. Least cost distance modeling provides a static view of the evacuation landscape with an estimate of travel times to safety from each location in the hazard space. With this information, practitioners can assess a community's overall ability for timely evacuation. More information may be needed if evacuee congestion creates bottlenecks in the flow patterns. Dynamic movement patterns are best explored with agent-based models that simulate movement of and interaction between individual agents as evacuees through the hazard space, reacting to potential congestion areas along the evacuation route. The multi-agent transport simulation model MATSim is an agent-based modeling framework that can be applied to hazard evacuation planning. Developed jointly by universities in Switzerland and Germany, MATSim is open-source software written in Java and freely available for modification or enhancement. We successfully used MATSim to illustrate tsunami evacuation challenges in two island communities in California, USA, that are impacted by limited escape routes. However, working with MATSim's data preparation, simulation, and visualization modules in an integrated development environment requires a significant investment of time to develop the software expertise to link the modules and run a simulation. To facilitate our evacuation research, we packaged the MATSim modules into a single application tailored to the needs of the hazards community. By exposing the modeling parameters of interest to researchers in an intuitive user interface and hiding the software complexities, we bring agent-based modeling closer to practitioners and provide access to the powerful visual and analytic information that this modeling can provide.
Wind Shear Modeling for Aircraft Hazard Definition.
1978-02-01
11. Lewellen, W. S., G. G. Williamson, and N. E. Teske. "Estimates of the Low Level Wind Shear and Turbulence in the Vicinity of Kennedy... E. Teske. "Model Predictions of Wind and Turbulence Profiles Associated with an Ensemble of Aircraft Accidents," NASA CR-2884, July 1977.
Toward Building a New Seismic Hazard Model for Mainland China
NASA Astrophysics Data System (ADS)
Rong, Y.; Xu, X.; Chen, G.; Cheng, J.; Magistrale, H.; Shen, Z.
2015-12-01
At present, the only publicly available seismic hazard model for mainland China is the one generated by the Global Seismic Hazard Assessment Program in 1999. We are building a new seismic hazard model by integrating historical earthquake catalogs, geological faults, geodetic GPS data, and geology maps. To build the model, we construct an Mw-based homogeneous historical earthquake catalog spanning from 780 B.C. to present, create fault models from active fault data using the methodology recommended by the Global Earthquake Model (GEM), and derive a strain rate map based on the most complete GPS measurements and a new strain derivation algorithm. We divide China and the surrounding regions into about 20 large seismic source zones based on seismotectonics. For each zone, we use the tapered Gutenberg-Richter (TGR) relationship to model the seismicity rates. We estimate the TGR a- and b-values from the historical earthquake data, and constrain the corner magnitude using the seismic moment rate derived from the strain rate. From the TGR distributions, 10,000 to 100,000 years of synthetic earthquakes are simulated. Then, we distribute small and medium earthquakes according to the locations and magnitudes of historical earthquakes. Some large earthquakes are distributed on active faults based on characteristics of the faults, including slip rate, fault length and width, and paleoseismic data, and the rest to the background based on the distributions of historical earthquakes and strain rate. We evaluate available ground motion prediction equations (GMPE) by comparison to observed ground motions. To apply appropriate GMPEs, we divide the region into active and stable tectonics. The seismic hazard will be calculated using the OpenQuake software developed by GEM. To account for site amplifications, we construct a site condition map based on geology maps. The resulting new seismic hazard map can be used for seismic risk analysis and management, and business and land-use planning.
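The tapered Gutenberg-Richter (TGR) distribution used here has a closed form in seismic moment: a power law tapered by an exponential roll-off at a corner moment. A sketch with illustrative parameters (not the study's fitted values):

```python
import numpy as np

def moment(mw):
    """Seismic moment (N*m) from moment magnitude (Hanks-Kanamori)."""
    return 10.0 ** (1.5 * mw + 9.05)

def tgr_ccdf(mw, mw_min, beta, mw_corner):
    """Tapered Gutenberg-Richter complementary CDF: fraction of events
    at or above magnitude mw, given a minimum and a corner magnitude."""
    m, mt, mc = moment(mw), moment(mw_min), moment(mw_corner)
    return (mt / m) ** beta * np.exp((mt - m) / mc)

mags = np.arange(5.0, 8.6, 0.1)
frac = tgr_ccdf(mags, mw_min=5.0, beta=0.6, mw_corner=8.0)  # beta ~ 2b/3
```

The exponential taper is what lets the observed moment rate from GPS strain constrain the corner magnitude, as the abstract describes.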
Self-organization, the cascade model, and natural hazards
Turcotte, Donald L.; Malamud, Bruce D.; Guzzetti, Fausto; Reichenbach, Paola
2002-01-01
We consider the frequency-size statistics of two natural hazards, forest fires and landslides. Both appear to satisfy power-law (fractal) distributions to a good approximation under a wide variety of conditions. Two simple cellular-automata models have been proposed as analogs for this observed behavior, the forest fire model for forest fires and the sand pile model for landslides. The behavior of these models can be understood in terms of a self-similar inverse cascade. For the forest fire model the cascade consists of the coalescence of clusters of trees; for the sand pile model the cascade consists of the coalescence of metastable regions. PMID:11875206
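As an illustration of the sand pile analog discussed above, a minimal Bak-Tang-Wiesenfeld toppling rule is easy to write down; avalanche sizes generated by repeatedly adding grains to such a grid are what exhibit the power-law (fractal) statistics described in the abstract. This is a generic sketch of the classic cellular-automaton model, not code from the paper.

```python
import numpy as np

def topple(grid):
    """Relax a Bak-Tang-Wiesenfeld sandpile grid in place: any cell holding
    >= 4 grains sends one grain to each of its four neighbours (grains falling
    off the edge are lost). Returns the avalanche size (number of topplings)."""
    size = 0
    while True:
        unstable = np.argwhere(grid >= 4)
        if len(unstable) == 0:
            return size
        for i, j in unstable:
            grid[i, j] -= 4
            size += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < grid.shape[0] and 0 <= nj < grid.shape[1]:
                    grid[ni, nj] += 1
```

Driving the model by adding single grains at random cells and recording the avalanche size after each addition yields the frequency-size statistics analogous to landslide inventories.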
The 2014 United States National Seismic Hazard Model
Petersen, Mark D.; Moschetti, Morgan P.; Powers, Peter; Mueller, Charles; Haller, Kathleen; Frankel, Arthur; Zeng, Yuehua; Rezaeian, Sanaz; Harmsen, Stephen; Boyd, Oliver; Field, Ned; Chen, Rui; Rukstales, Kenneth S.; Luco, Nicolas; Wheeler, Russell; Williams, Robert; Olsen, Anna H.
2015-01-01
New seismic hazard maps have been developed for the conterminous United States using the latest data, models, and methods available for assessing earthquake hazard. The hazard models incorporate new information on earthquake rupture behavior observed in recent earthquakes; fault studies that use both geologic and geodetic strain rate data; earthquake catalogs through 2012 that include new assessments of locations and magnitudes; earthquake adaptive smoothing models that more fully account for the spatial clustering of earthquakes; and 22 ground motion models, some of which consider more than double the shaking data applied previously. Alternative input models account for larger earthquakes, more complicated ruptures, and more varied ground shaking estimates than assumed in earlier models. The ground motions, for levels applied in building codes, differ from the previous version by less than ±10% over 60% of the country, but can differ by ±50% in localized areas. The models are incorporated in insurance rates, risk assessments, and as input into the U.S. building code provisions for earthquake ground shaking.
Modeling of Marine Natural Hazards in the Lesser Antilles
NASA Astrophysics Data System (ADS)
Zahibo, Narcisse; Nikolkina, Irina; Pelinovsky, Efim
2010-05-01
The Caribbean Sea countries are often affected by various marine natural hazards: hurricanes and cyclones, tsunamis and flooding. The historical data on marine natural hazards for the Lesser Antilles, and especially for Guadeloupe, are presented briefly. Numerical simulations of several historical tsunamis in the Caribbean Sea (the 1755 Lisbon trans-Atlantic tsunami, the 1867 Virgin Island earthquake tsunami, and the 2003 Montserrat volcano tsunami) are performed within the framework of nonlinear shallow-water theory. Numerical results demonstrate the importance of the real bathymetry variability with respect to the direction of propagation of the tsunami wave and its characteristics. The prognostic tsunami wave height distribution along the Caribbean coast is computed using various forms of seismic and hydrodynamic sources. These results are used to estimate the far-field potential for tsunami hazards at coastal locations in the Caribbean Sea. The nonlinear shallow-water theory is also applied to model storm surges induced by tropical cyclones, in particular cyclones "Lilli" in 2002 and "Dean" in 2007. The obtained results are compared with observed data. The numerical models have been tested against known analytical solutions of the nonlinear shallow-water wave equations. The results are described in detail in [1-7]. References [1] N. Zahibo and E. Pelinovsky, Natural Hazards and Earth System Sciences, 1, 221 (2001). [2] N. Zahibo, E. Pelinovsky, A. Yalciner, A. Kurkin, A. Koselkov and A. Zaitsev, Oceanologica Acta, 26, 609 (2003). [3] N. Zahibo, E. Pelinovsky, A. Kurkin and A. Kozelkov, Science of Tsunami Hazards, 21, 202 (2003). [4] E. Pelinovsky, N. Zahibo, P. Dunkley, M. Edmonds, R. Herd, T. Talipova, A. Kozelkov and I. Nikolkina, Science of Tsunami Hazards, 22, 44 (2004). [5] N. Zahibo, E. Pelinovsky, E. Okal, A. Yalciner, C. Kharif, T. Talipova and A. Kozelkov, Science of Tsunami Hazards, 23, 25 (2005). [6] N. Zahibo, E. Pelinovsky, T. Talipova, A. Rabinovich, A. Kurkin and I
Helping Children to Model Proportionally in Group Argumentation: Overcoming the "Constant Sum" Error
ERIC Educational Resources Information Center
Misailidou, Christina; Williams, Jullian
2004-01-01
We examine eight cases of argumentation in relation to a proportional reasoning task--the "Paint" task--in which the "constant sum" strategy was a significant factor. Our analysis of argument follows Toulmin's (1958) approach and in the discourse we trace factors which seem to facilitate changes in argument. We find that the arguments of "constant…
Proportionality: Modeling the Future. NASA Connect: Program 6 in the 1999-2000 Series.
ERIC Educational Resources Information Center
National Aeronautics and Space Administration, Hampton, VA. Langley Research Center.
This teaching unit is designed to help students in grades 4-8 explore the concepts of scaling and proportion in the context of spacecraft design. The units in this series have been developed to enhance and enrich mathematics, science, and technology education and to accommodate different teaching and learning styles. Each unit consists of a…
Three multimedia models used at hazardous and radioactive waste sites
1996-01-01
The report provides an approach for evaluating and critically reviewing the capabilities of multimedia models. The study focused on three specific models: MEPAS Version 3.0, MMSOILS Version 2.2, and PRESTO-EPA-CPG Version 2.0. The approach to model review advocated in the study is directed to technical staff responsible for identifying, selecting and applying multimedia models for use at sites containing radioactive and hazardous materials. In the report, restrictions associated with the selection and application of multimedia models for sites contaminated with radioactive and mixed wastes are highlighted.
Babapour, R; Naghdi, R; Ghajar, I; Ghodsi, R
2015-07-01
Rock proportion of the subsoil directly influences the cost of embankment in forest road construction. Therefore, developing a reliable framework for rock ratio estimation prior to road planning could lead to lighter excavation and lower-cost operations. Rock proportion was predicted by statistical analyses using an Artificial Neural Network (ANN) in MATLAB and five link functions of ordinal logistic regression (OLR), according to rock type and terrain slope properties. In addition to bedrock and slope maps, more than 100 samples of rock proportion, recorded by geologists, were collected from every available bedrock type in each slope class. Four predictive models of rock proportion were developed from the independent variables, applying both the selected probit link function of OLR and the Layer Recurrent and Feedforward backpropagation networks of ANNs. In the ANNs, different numbers of neurons were considered for the hidden layer(s). Goodness-of-fit measures showed that the ANN models produced better results than OLR, with R² = 0.72 and Root Mean Square Error = 0.42. Furthermore, to show the applicability of the proposed approach and to illustrate the variability of rock proportion resulting from the model application, the optimum models were applied to a mountainous forest in which a forest road network had been constructed in the past.
Current Methods of Natural Hazards Communication used within Catastrophe Modelling
NASA Astrophysics Data System (ADS)
Dawber, C.; Latchman, S.
2012-04-01
In the field of catastrophe modelling, natural hazards need to be explained every day to (re)insurance professionals so that they may understand estimates of the loss potential of their portfolios. The effective communication of natural hazards to city professionals requires different strategies depending on the audience, their prior knowledge and their respective backgrounds. It is best to have at least three tools available for a given topic: 1) an illustration/animation, 2) a mathematical formula and 3) a real-world case study. This multi-faceted approach is effective for those who learn best by pictorial, mathematical or anecdotal means. To show this we will use a set of real examples employed in the insurance industry of how different aspects of natural hazards, and the uncertainty around them, are explained to city professionals; for example, explaining the different modules within a catastrophe model, such as the hazard, vulnerability and loss modules. We highlight how recent technology such as 3D plots, video recording and Google Earth maps, when used properly, can help explain concepts quickly and easily. Finally, we also examine the pitfalls of using overly complicated visualisations and, in general, how counter-intuitive deductions may be made.
Probabilistic modelling of rainfall induced landslide hazard assessment
NASA Astrophysics Data System (ADS)
Kawagoe, S.; Kazama, S.; Sarukkalige, P. R.
2010-01-01
To evaluate the frequency and distribution of landslide hazards over Japan, this study uses a probabilistic model based on multiple logistic regression analysis. The study concerns several important physical parameters, namely hydraulic, geographical and geological parameters, which are considered to be influential in the occurrence of landslides. Sensitivity analysis confirmed that a hydrological parameter, the hydraulic gradient, is the most influential factor in the occurrence of landslides. Therefore, the hydraulic gradient is used as the main hydraulic parameter; it is a dynamic factor that includes the effect of heavy rainfall and its return period. Using the constructed spatial data sets, a multiple logistic regression model is applied and landslide susceptibility maps are produced showing the spatio-temporal distribution of landslide hazard susceptibility over Japan. To represent the susceptibility on different temporal scales, extreme precipitation with 5-year, 30-year, and 100-year return periods is used for the evaluation. The results show that the highest landslide hazard susceptibility exists in the mountain ranges on the western (Japan Sea) side of Japan, including the Hida, Kiso, Iide and Asahi mountain ranges, the southern sides of the Chugoku and Kyushu mountain ranges, the Dewa mountain range and the Hokuriku region. The landslide hazard susceptibility maps developed in this study will assist authorities, policy makers and decision makers responsible for infrastructural planning and development, as they can identify landslide-susceptible areas and thus decrease landslide damage through proper preparation.
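The core of such a susceptibility model, multiple logistic regression mapping covariates such as the hydraulic gradient to a landslide probability, can be sketched as follows. The covariate layout and the plain gradient-ascent fit are illustrative assumptions; the study's actual data sets and estimation details are not reproduced here.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=5000):
    """Fit a multiple logistic regression P(landslide) = sigmoid(b0 + X.b)
    by plain gradient ascent on the average log-likelihood (illustrative;
    real analyses would use a standard GLM routine)."""
    Xb = np.hstack([np.ones((len(X), 1)), X])  # prepend intercept column
    beta = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ beta))
        beta += lr * Xb.T @ (y - p) / len(y)   # gradient of mean log-likelihood
    return beta

def susceptibility(beta, x):
    """Landslide susceptibility (probability) for one map cell with covariate
    vector x, e.g. [hydraulic gradient, slope, a geology indicator]."""
    return 1.0 / (1.0 + np.exp(-(beta[0] + x @ beta[1:])))
```

Evaluating `susceptibility` on a grid of cells, with the hydraulic gradient computed for each return-period rainfall, would produce one susceptibility map per temporal scale.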
Rockfall hazard analysis using LiDAR and spatial modeling
NASA Astrophysics Data System (ADS)
Lan, Hengxing; Martin, C. Derek; Zhou, Chenghu; Lim, Chang Ho
2010-05-01
Rockfalls have been significant geohazards along the Canadian Class 1 Railways (CN Rail and CP Rail) since their construction in the late 1800s. These rockfalls cause damage to infrastructure, interruption of business, and environmental impacts, and their occurrence varies both spatially and temporally. The proactive management of these rockfall hazards requires enabling technologies. This paper discusses a hazard assessment strategy for rockfalls along a section of a Canadian railway using LiDAR and spatial modeling. LiDAR provides accurate topographical information of the source area of rockfalls and along their paths. Spatial modeling was conducted using Rockfall Analyst, a three dimensional extension to GIS, to determine the characteristics of the rockfalls in terms of travel distance, velocity and energy. Historical rockfall records were used to calibrate the physical characteristics of the rockfall processes. The results based on a high-resolution digital elevation model from a LiDAR dataset were compared with those based on a coarse digital elevation model. A comprehensive methodology for rockfall hazard assessment is proposed which takes into account the characteristics of source areas, the physical processes of rockfalls and the spatial attribution of their frequency and energy.
Disproportionate Proximity to Environmental Health Hazards: Methods, Models, and Measurement
Maantay, Juliana A.; Brender, Jean D.
2011-01-01
We sought to provide a historical overview of methods, models, and data used in the environmental justice (EJ) research literature to measure proximity to environmental hazards and potential exposure to their adverse health effects. We explored how the assessment of disproportionate proximity and exposure has evolved from comparing the prevalence of minority or low-income residents in geographic entities hosting pollution sources and discrete buffer zones to more refined techniques that use continuous distances, pollutant fate-and-transport models, and estimates of health risk from toxic exposure. We also reviewed analytical techniques used to determine the characteristics of people residing in areas potentially exposed to environmental hazards and emerging geostatistical techniques that are more appropriate for EJ analysis than conventional statistical methods. We concluded by providing several recommendations regarding future research and data needs for EJ assessment that would lead to more reliable results and policy solutions. PMID:21836113
Richardson, David B; Laurier, Dominique; Schubauer-Berigan, Mary K; Tchetgen Tchetgen, Eric; Cole, Stephen R
2014-11-01
Workers' smoking histories are not measured in many occupational cohort studies. Here we discuss the use of negative control outcomes to detect and adjust for confounding in analyses that lack information on smoking. We clarify the assumptions necessary to detect confounding by smoking and the additional assumptions necessary to indirectly adjust for such bias. We illustrate these methods using data from 2 studies of radiation and lung cancer: the Colorado Plateau cohort study (1950-2005) of underground uranium miners (in which smoking was measured) and a French cohort study (1950-2004) of nuclear industry workers (in which smoking was unmeasured). A cause-specific relative hazards model is proposed for estimation of indirectly adjusted associations. Among the miners, the proposed method suggests no confounding by smoking of the association between radon and lung cancer--a conclusion supported by adjustment for measured smoking. Among the nuclear workers, the proposed method suggests substantial confounding by smoking of the association between radiation and lung cancer. Indirect adjustment for confounding by smoking resulted in an 18% decrease in the adjusted estimated hazard ratio, yet this cannot be verified because smoking was unmeasured. Assumptions underlying this method are described, and a cause-specific proportional hazards model that allows easy implementation using standard software is presented.
Recent Experiences in Aftershock Hazard Modelling in New Zealand
NASA Astrophysics Data System (ADS)
Gerstenberger, M.; Rhoades, D. A.; McVerry, G.; Christophersen, A.; Bannister, S. C.; Fry, B.; Potter, S.
2014-12-01
The occurrence of several sequences of earthquakes in New Zealand in the last few years has meant that GNS Science has gained significant recent experience in aftershock hazard and forecasting. First was the Canterbury sequence of events, which began in 2010 and included the destructive Christchurch earthquake of February 2011. This sequence is occurring in what was a moderate-to-low hazard region of the National Seismic Hazard Model (NSHM): the model on which the building design standards are based. With the expectation that the sequence would produce a 50-year hazard estimate in excess of the existing building standard, we developed a time-dependent model that combined short-term (STEP & ETAS) and longer-term (EEPAS) clustering with time-independent models. This forecast was combined with the NSHM to produce a forecast of the hazard for the next 50 years. It has been used to revise building design standards for the region and has contributed to planning of the rebuilding of Christchurch in multiple respects. An important contribution to this model comes from the inclusion of EEPAS, which allows for clustering on the scale of decades. EEPAS is based on three empirical regressions that relate the magnitudes, times of occurrence, and locations of major earthquakes to regional precursory scale increases in the magnitude and rate of occurrence of minor earthquakes. A second important contribution comes from the long-term rate to which seismicity is expected to return in 50 years. With little seismicity in the region in historical times, a controlling factor in the rate is whether or not it is based on a declustered catalog. This epistemic uncertainty in the model was allowed for by using forecasts from both declustered and non-declustered catalogs. With two additional moderate sequences in the capital region of New Zealand in the last year, we have continued to refine our forecasting techniques, including the use of potential scenarios based on the aftershock
NASA Astrophysics Data System (ADS)
Bajo Sanchez, Jorge V.
This dissertation is composed of an introductory chapter and three papers about vulnerability and volcanic hazard maps, with emphasis on lahars. The introductory chapter reviews definitions of the term vulnerability by the social and natural hazard communities and provides a new definition of hazard vulnerability that includes social and natural hazard factors. The first paper explains how the Community Volcanic Hazard Map (CVHM) is used for vulnerability analysis and details a new methodology to obtain valuable information about ethnophysiographic differences, hazards, and landscape knowledge of communities in the area of interest: the Canton Buenos Aires, situated on the northern flank of the Santa Ana (Ilamatepec) Volcano, El Salvador. The second paper is about creating a lahar hazard map in data-poor environments by generating a landslide inventory and obtaining potential volumes of dry material that can be carried by lahars. The third paper introduces an innovative lahar hazard map integrating the information generated by the previous two papers; it shows the differences between hazard maps created by the communities and by experts, both visually and quantitatively. This new, integrated hazard map was presented to the community with positive feedback and acceptance. The dissertation concludes with a summary chapter on the results and recommendations.
Variable selection in subdistribution hazard frailty models with competing risks data
Do Ha, Il; Lee, Minjung; Oh, Seungyoung; Jeong, Jong-Hyeon; Sylvester, Richard; Lee, Youngjo
2014-01-01
The proportional subdistribution hazards model (i.e. Fine-Gray model) has been widely used for analyzing univariate competing risks data. Recently, this model has been extended to clustered competing risks data via frailty. To the best of our knowledge, however, there has been no literature on variable selection method for such competing risks frailty models. In this paper, we propose a simple but unified procedure via a penalized h-likelihood (HL) for variable selection of fixed effects in a general class of subdistribution hazard frailty models, in which random effects may be shared or correlated. We consider three penalty functions (LASSO, SCAD and HL) in our variable selection procedure. We show that the proposed method can be easily implemented using a slight modification to existing h-likelihood estimation approaches. Numerical studies demonstrate that the proposed procedure using the HL penalty performs well, providing a higher probability of choosing the true model than LASSO and SCAD methods without losing prediction accuracy. The usefulness of the new method is illustrated using two actual data sets from multi-center clinical trials. PMID:25042872
Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling
Li Yupeng Deutsch, Clayton V.
2012-06-15
In geostatistics, most stochastic algorithms for the simulation of categorical variables such as facies or rock types require a conditional probability distribution. The multivariate probability distribution of all the grouped locations, including the unsampled location, permits calculation of the conditional probability directly from its definition. In this article, the iterative proportion fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iterative modification of an initial estimate, using lower-order bivariate probabilities as constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. The algorithm can be extended to higher-order marginal probability constraints, as used in multiple-point statistics. The theoretical framework is developed and illustrated with an estimation and simulation example.
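For a two-dimensional table, the iterative proportional fitting step described above reduces to alternately rescaling the rows and columns of an initial joint probability table until the imposed marginals are matched. A minimal sketch, without the sparse-matrix machinery the article uses for higher-order problems:

```python
import numpy as np

def ipf(joint, row_target, col_target, iters=100):
    """Iterative proportional fitting: rescale an initial joint probability
    table until its row and column sums match the imposed marginal
    probabilities. Convergence is geometric for strictly positive tables."""
    p = joint.copy()
    for _ in range(iters):
        p *= (row_target / p.sum(axis=1))[:, None]   # match row marginals
        p *= (col_target / p.sum(axis=0))[None, :]   # match column marginals
    return p
```

In the facies-modeling setting, the marginals would come from bivariate probabilities inferred along drill holes or wells, and the fitted table would supply the conditional probabilities needed by the simulation.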
Development of hazard-compatible building fragility and vulnerability models
Karaca, E.; Luco, N.
2008-01-01
We present a methodology for transforming the structural and non-structural fragility functions in HAZUS into a format that is compatible with conventional seismic hazard analysis information. The methodology makes use of the building capacity (or pushover) curves and related building parameters provided in HAZUS. Instead of the capacity spectrum method applied in HAZUS, building response is estimated by inelastic response history analysis of corresponding single-degree-of-freedom systems under a large number of earthquake records. Statistics of the building response are used with the damage state definitions from HAZUS to derive fragility models conditioned on spectral acceleration values. Using the developed fragility models for structural and nonstructural building components, with corresponding damage state loss ratios from HAZUS, we also derive building vulnerability models relating spectral acceleration to repair costs. Whereas in HAZUS the structural and nonstructural damage states are treated as if they are independent, our vulnerability models are derived assuming "complete" nonstructural damage whenever the structural damage state is complete. We show the effects of considering this dependence on the final vulnerability models. The use of spectral acceleration (at selected vibration periods) as the ground motion intensity parameter, coupled with the careful treatment of uncertainty, makes the new fragility and vulnerability models compatible with conventional seismic hazard curves and hence useful for extensions to probabilistic damage and loss assessment.
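A common way to express fragility conditioned on spectral acceleration, consistent with the models described above, is a lognormal CDF in Sa; a vulnerability point is then the expected repair-cost ratio obtained by weighting damage-state probabilities by loss ratios. The sketch below uses this standard form with invented parameter values, not the HAZUS-derived numbers from the paper.

```python
from math import log, sqrt, erf

def fragility(sa, theta, beta):
    """Lognormal fragility: probability of reaching or exceeding a damage
    state given spectral acceleration sa (g), median capacity theta (g),
    and lognormal dispersion beta."""
    return 0.5 * (1.0 + erf(log(sa / theta) / (beta * sqrt(2.0))))

def expected_loss_ratio(sa, medians, betas, loss_ratios):
    """Vulnerability point: expected repair-cost ratio at sa, combining the
    exceedance probabilities of ordered damage states (medians increasing)
    with per-state loss ratios. All parameter values here are placeholders."""
    exceed = [fragility(sa, m, b) for m, b in zip(medians, betas)]
    exceed.append(0.0)  # nothing exceeds a state beyond the last one
    # P(exactly state i) = P(>= state i) - P(>= state i+1)
    return sum((exceed[i] - exceed[i + 1]) * lr
               for i, lr in enumerate(loss_ratios))
```

Evaluating `expected_loss_ratio` over a range of Sa values traces out a vulnerability curve compatible with conventional seismic hazard curves expressed in spectral acceleration.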
Techniques for modeling hazardous air pollutant emissions from landfills
Lang, R.J.; Vigil, S.A.; Melcer, H.
1998-12-31
The Environmental Protection Agency's Landfill Air Estimation Model (LAEEM), combined with either the AP-42 or CAA landfill emission factors, provides a basis for predicting air emissions, including hazardous air pollutants (HAPs), from municipal solid waste landfills. This paper presents alternative approaches for estimating HAP emissions from landfills, including analytical solutions and estimation techniques that account for convection, diffusion, and biodegradation of HAPs. Results from the modeling of a prototypical landfill are used as the basis for discussion with respect to the LAEEM results.
A Computerized Prediction Model of Hazardous Inflammatory Platelet Transfusion Outcomes
Nguyen, Kim Anh; Hamzeh-Cognasse, Hind; Sebban, Marc; Fromont, Elisa; Chavarin, Patricia; Absi, Lena; Pozzetto, Bruno; Cognasse, Fabrice; Garraud, Olivier
2014-01-01
Background: Platelet component (PC) transfusion occasionally leads to inflammatory hazards. Certain BRMs that are secreted by the platelets themselves during storage may be partly responsible. Methodology/Principal Findings: First, we identified non-stochastic arrangements of platelet-secreted BRMs in platelet components that led to acute transfusion reactions (ATRs). These data provide formal clinical evidence that platelets generate secretion profiles under both sterile activation and pathological conditions. We next aimed to predict the risk of hazardous outcomes by establishing statistical models based on the associations of BRMs within the incriminated platelet components and using decision trees. We investigated a large (n = 65) series of ATRs after platelet component transfusions reported through a very homogeneous system at one university hospital. Herein, we used a combination of clinical observations, ex vivo and in vitro investigations, and mathematical modeling systems. We calculated the statistical association of a large variety (n = 17) of cytokines, chemokines, and physiologically likely factors with acute inflammatory potential in patients presenting with severe hazards. We then generated an accident prediction model that proved to be dependent on the level (amount) of a given cytokine-like platelet product within the indicated component, e.g., soluble CD40-ligand (>289.5 pg/10⁹ platelets), or the presence of another secreted factor (IL-13, >0). We further modeled the risk of the patient presenting either a febrile non-hemolytic transfusion reaction or an atypical allergic transfusion reaction, depending on the amount of the chemokine MIP-1α (<20.4 or >20.4 pg/10⁹ platelets, respectively). Conclusions/Significance: This allows the modeling of a policy of risk prevention for severe inflammatory outcomes in PC transfusion. PMID:24830754
Application of hazard models for patients with breast cancer in Cuba
Alfonso, Anet Garcia; de Oca, Néstor Arcia Montes
2011-01-01
There has been rapid development of hazard models and survival analysis in the last decade. This article aims to assess the overall survival time of breast cancer patients in Cuba, as well as to determine plausible factors that may have a significant impact on the survival time. The data are obtained from the National Cancer Register of Cuba. The data set used in this study relates to 6381 patients diagnosed with breast cancer between January 2000 and December 2002. Follow-up data are available until the end of December 2007, by which time 2167 (33.9%) had died and 4214 (66.1%) were still alive. The adequacy of six parametric models is assessed by using their Akaike information criterion values. Five of the six parametric models (Exponential, Weibull, Log-logistic, Lognormal, and Generalized Gamma) are parameterized by using the accelerated failure-time metric, and the Gompertz model is parameterized by using the proportional hazards metric. The main result in terms of survival is found for the different categories of the clinical stage covariate. The survival time among patients diagnosed at an early stage of breast cancer is about 60% higher than that among patients diagnosed at a more advanced stage of the disease. Differences among provinces have not been found. Age is another significant factor, but there is no important difference among patient age groups. PMID:21686138
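Comparing parametric survival families by their Akaike information criterion values, as done here, amounts to a penalized log-likelihood comparison: AIC = 2k - 2 log L, with the lower value preferred. A minimal sketch for two candidate families (exponential and log-normal), assuming fully uncensored times for simplicity, which the real analysis would not:

```python
import numpy as np

def aic_exponential(t):
    """AIC of an exponential survival model fitted by maximum likelihood
    to uncensored survival times t (1 parameter)."""
    lam = 1.0 / t.mean()                       # MLE of the rate
    loglik = len(t) * np.log(lam) - lam * t.sum()
    return 2 * 1 - 2 * loglik

def aic_lognormal(t):
    """AIC of a log-normal survival model on the same data (2 parameters);
    the MLEs are the mean and (population) std of log-times."""
    logs = np.log(t)
    mu, sigma = logs.mean(), logs.std()
    loglik = np.sum(-np.log(t * sigma * np.sqrt(2 * np.pi))
                    - (logs - mu) ** 2 / (2 * sigma ** 2))
    return 2 * 2 - 2 * loglik
```

Handling the censored follow-up times in the registry data would replace each censored observation's density contribution with a survival-function contribution, but the AIC comparison itself works the same way.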
Sinkhole hazard assessment in Minnesota using a decision tree model
NASA Astrophysics Data System (ADS)
Gao, Yongli; Alexander, E. Calvin
2008-05-01
An understanding of what influences sinkhole formation and the ability to accurately predict sinkhole hazards is critical to environmental management efforts in the karst lands of southeastern Minnesota. Based on the distribution of distances to the nearest sinkhole, sinkhole density, bedrock geology and depth to bedrock in southeastern Minnesota and northwestern Iowa, a decision tree model has been developed to construct maps of sinkhole probability in Minnesota. The decision tree model was converted into cartographic models and implemented in ArcGIS to create a preliminary sinkhole probability map in Goodhue, Wabasha, Olmsted, Fillmore, and Mower Counties. This model quantifies bedrock geology, depth to bedrock, sinkhole density, and neighborhood effects in southeastern Minnesota but excludes potential controlling factors such as structural control, topographic settings, human activities and land use. The sinkhole probability map needs to be verified and updated as more sinkholes are mapped and more information about sinkhole formation is obtained.
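A decision tree of this kind assigns each map cell a probability class through a cascade of splits on the predictor layers. The sketch below mirrors the split variables named in the abstract (bedrock geology, depth to bedrock, distance to the nearest sinkhole), but the thresholds and class labels are invented for illustration; they are not the calibrated Minnesota values.

```python
def sinkhole_probability_class(dist_nearest_m, depth_to_bedrock_m, carbonate):
    """Toy decision tree assigning a sinkhole-probability class to a cell.
    Splits follow the predictor layers used in the study, but every
    threshold here is hypothetical."""
    if not carbonate:              # non-carbonate bedrock: karst unlikely
        return "low"
    if depth_to_bedrock_m > 30:    # thick cover mutes surface expression
        return "low"
    if dist_nearest_m < 500:       # neighborhood effect: sinkholes cluster
        return "high"
    return "moderate"
```

Applying such a rule cell by cell over raster layers is exactly the cartographic-model translation the abstract describes for ArcGIS.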
Mark-specific hazard ratio model with missing multivariate marks.
Juraska, Michal; Gilbert, Peter B
2016-10-01
An objective of randomized placebo-controlled preventive HIV vaccine efficacy (VE) trials is to assess the relationship between vaccine effects to prevent HIV acquisition and continuous genetic distances of the exposing HIVs to multiple HIV strains represented in the vaccine. The set of genetic distances, only observed in failures, is collectively termed the 'mark.' The objective has motivated a recent study of a multivariate mark-specific hazard ratio model in the competing risks failure time analysis framework. Marks of interest, however, are commonly subject to substantial missingness, largely due to rapid post-acquisition viral evolution. In this article, we investigate the mark-specific hazard ratio model with missing multivariate marks and develop two inferential procedures based on (i) inverse probability weighting (IPW) of the complete cases, and (ii) augmentation of the IPW estimating functions by leveraging auxiliary data predictive of the mark. Asymptotic properties and finite-sample performance of the inferential procedures are presented. This research also provides general inferential methods for semiparametric density ratio/biased sampling models with missing data. We apply the developed procedures to data from the HVTN 502 'Step' HIV VE trial.
Flood hazard maps from SAR data and global hydrodynamic models
NASA Astrophysics Data System (ADS)
Giustarini, Laura; Chini, Marci; Hostache, Renaud; Matgen, Patrick; Pappenberger, Florian; Bally, Phillippe
2015-04-01
With flood consequences likely to amplify because of growing population and ongoing accumulation of assets in flood-prone areas, global flood hazard and risk maps are greatly needed for improving flood preparedness at large scale. At the same time, with the rapidly growing archives of SAR images of floods, there is a high potential of making use of these images for global and regional flood management. In this framework, an original method is presented to integrate global flood inundation modeling and microwave remote sensing. It takes advantage of the combination of the time and space continuity of a global inundation model with the high spatial resolution of satellite observations. The availability of model simulations over a long time period offers the opportunity to estimate flood non-exceedance probabilities in a robust way. The probabilities can later be attributed to historical satellite observations. SAR-derived flood extent maps with their associated non-exceedance probabilities are then combined to generate flood hazard maps with a spatial resolution equal to that of the satellite images, which is most of the time higher than that of a global inundation model. The method can be applied to any area of interest in the world, provided that a sufficient number of relevant remote sensing images are available. We applied the method on the Severn River (UK) and on the Zambezi River (Mozambique), where large archives of Envisat flood images can be exploited. The global ECMWF flood inundation model is considered for computing the statistics of extreme events. A comparison with flood hazard maps estimated with in situ measured discharge is carried out. An additional analysis has been performed on the Severn River, using high resolution SAR data from the COSMO-SkyMed SAR constellation, acquired for a single flood event (one flood map per day between 27/11/2012 and 4/12/2012). The results showed that it is vital to observe the peak of the flood. However, a single
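The attribution of non-exceedance probabilities to individual satellite observations rests on an empirical distribution computed from the long model simulation. A minimal sketch of that step (the variable here is a generic flood magnitude such as inundated area; the actual method works with the model's simulated water levels and extents):

```python
import numpy as np

def non_exceedance_prob(simulated, observed):
    """Empirical non-exceedance probability of an observed flood magnitude
    (e.g. inundated area) within a long model-simulated series: the fraction
    of simulated values that do not exceed the observation."""
    simulated = np.asarray(simulated, dtype=float)
    return (simulated <= observed).mean()
```

Each SAR-derived flood extent map would receive the probability of its event's magnitude, and combining the probability-tagged maps yields the hazard map at the satellite resolution.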
Simple model relating recombination rates and non-proportional light yield in scintillators
Moses, William W.; Bizarri, Gregory; Singh, Jai; Vasil'ev, Andrey N.; Williams, Richard T.
2008-09-24
We present a phenomenological approach to derive an approximate expression for the local light yield along a track as a function of the rate constants of different kinetic orders of radiative and quenching processes for excitons and electron-hole pairs excited by an incident γ-ray in a scintillating crystal. For excitons, the radiative and quenching processes considered are linear and binary, and for electron-hole pairs a ternary (Auger-type) quenching process is also taken into account. The local light yield (Y_L) in photons per MeV is plotted as a function of the deposited energy, -dE/dx (keV/cm), at any point x along the track length. This model formulation achieves a certain simplicity by using two coupled rate equations. We discuss the approximations that are involved. There are a sufficient number of parameters in this model to fit local light yield profiles needed for qualitative comparison with experiment.
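The rate-equation idea can be illustrated for the exciton channel alone: with a linear radiative rate and a binary (two-exciton) quenching rate, the fraction of excitations that decay radiatively falls as the excitation density rises, which is the essence of non-proportional light yield. This is a simplified single-population sketch under assumed rate constants, not the paper's full two-carrier formulation.

```python
def radiative_yield(n0, k_rad=1.0, k2_quench=0.0, dt=1e-3, t_max=30.0):
    """Fraction of n0 initial excitons that decay radiatively for
        dn/dt = -k_rad*n - k2_quench*n**2
    (linear radiative channel plus binary quenching), integrated with a
    simple RK4 scheme and a trapezoidal photon tally."""
    def dndt(n):
        return -k_rad * n - k2_quench * n * n

    n, t, photons = float(n0), 0.0, 0.0
    while t < t_max:
        k1 = dndt(n)
        k2 = dndt(n + 0.5 * dt * k1)
        k3 = dndt(n + 0.5 * dt * k2)
        k4 = dndt(n + dt * k3)
        n_next = n + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        photons += k_rad * 0.5 * (n + n_next) * dt  # radiative decays this step
        n, t = n_next, t + dt
    return photons / n0
```

With k_rad = k2_quench = n0 = 1 the analytic answer is ln(2) ≈ 0.693, and raising n0 (denser track regions) lowers the yield fraction, mimicking the non-proportionality trend.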
2014-09-26
…aspect of Fu's research is the concept of using web grammars to describe an object. The grammar is an abstract representation of an object's components, the top level being the object and the subordinate levels describing its compositional nature. This grammar technique can prove to be useful in
Kendall, W.L.; Hines, J.E.; Nichols, J.D.
2003-01-01
Matrix population models are important tools for research and management of populations. Estimating the parameters of these models is an important step in applying them to real populations. Multistate capture-recapture methods have provided a useful means for estimating survival and parameters of transition between locations or life history states but have mostly relied on the assumption that the state occupied by each detected animal is known with certainty. Nevertheless, in some cases animals can be misclassified. Using multiple capture sessions within each period of interest, we developed a method that adjusts estimates of transition probabilities for bias due to misclassification. We applied this method to 10 years of sighting data for a population of Florida manatees (Trichechus manatus latirostris) in order to estimate the annual probability of transition from nonbreeding to breeding status. Some sighted females were unequivocally classified as breeders because they were clearly accompanied by a first-year calf. The remainder were classified, sometimes erroneously, as nonbreeders because an attendant first-year calf was not observed or was classified as more than one year old. We estimated a conditional breeding probability of 0.31 ± 0.04 (estimate ± 1 SE) when we ignored misclassification bias, and 0.61 ± 0.09 when we accounted for misclassification.
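The direction and size of the bias can be illustrated with a deliberately simplified one-sided misclassification model: true breeders are recognised only with some probability (the calf is actually seen), while nonbreeders are never mistaken for breeders, so the naive estimate understates the true transition probability. The recognition probability of 0.51 below is purely illustrative; the paper's actual method uses multiple capture sessions per period, not a known recognition rate.

```python
def adjusted_breeding_probability(n_breeders_seen, n_total, p_recognise):
    """Correct a naive breeding-transition estimate for one-sided
    misclassification. If true breeders are recognised with probability
    p_recognise and nonbreeders are never misclassified as breeders, then
        E[observed breeder fraction] = psi * p_recognise
    so the corrected estimate is psi = observed / p_recognise."""
    if not 0.0 < p_recognise <= 1.0:
        raise ValueError("p_recognise must be in (0, 1]")
    naive = n_breeders_seen / n_total
    return naive / p_recognise
```

With the paper's naive estimate of 0.31, a roughly 51% chance of recognising a breeder would reproduce the corrected estimate of about 0.61, showing how strongly imperfect calf detection biases the naive figure downward.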
Final Report: First Principles Modeling of Mechanisms Underlying Scintillator Non-Proportionality
Aberg, Daniel; Sadigh, Babak; Zhou, Fei
2015-01-01
This final report presents work carried out on the project “First Principles Modeling of Mechanisms Underlying Scintillator Non-Proportionality” at Lawrence Livermore National Laboratory during 2013-2015. The scope of the work was to further the physical understanding of the microscopic mechanisms behind scintillator nonproportionality, which effectively limits the achievable detector resolution, and thereby to provide crucial quantitative data on these processes as input to large-scale simulation codes. In particular, the project was divided into three tasks: (i) quantum mechanical rates of non-radiative quenching, (ii) the thermodynamics of point defects and dopants, and (iii) formation and migration of self-trapped polarons. The progress and results of each of these subtasks are detailed.
Natural hazard resilient cities: the case of a SSMS model
NASA Astrophysics Data System (ADS)
Santos-Reyes, Jaime
2010-05-01
Modern society is characterised by complexity; i.e., technical systems are highly complex and highly interdependent. The nature of the interdependence amongst these systems has become an issue of increasing importance in recent years. Moreover, these systems face a number of threats, ranging from the technical and human to the natural. For example, natural hazards (earthquakes, floods, heavy snow, etc.) can cause significant problems and disruption to normal life. On the other hand, modern society depends on highly interdependent infrastructures such as transport (rail, road, air, etc.), telecommunications, and power and water supply. Furthermore, in many cases there is no single owner, operator, or regulator of such systems. Any disruption in any of the interconnected systems may cause a domino effect, which may occur at the local, regional or national level or, in some cases, extend across international borders. Given the above, it may be argued that society is less resilient to such events, and therefore there is a need to have a system in place able to maintain risk within an acceptable range, whatever that might be. This paper presents the modelling process for the interdependences amongst "critical infrastructures" (i.e., transport, telecommunications, power and water supply, etc.) for a typical city. The approach has been the application of the developed Systemic Safety Management System (SSMS) model. The main conclusion is that the SSMS model has the potential to be used to model interdependencies amongst the so-called "critical infrastructures". It is hoped that the approach presented in this paper may help to gain a better understanding of the interdependence amongst these systems and may contribute to a more resilient society when disrupted by natural hazards.
Hazard based models for freeway traffic incident duration.
Tavassoli Hojati, Ahmad; Ferreira, Luis; Washington, Simon; Charles, Phil
2013-03-01
Assessing and prioritising cost-effective strategies to mitigate the impacts of traffic incidents and accidents on non-recurrent congestion on major roads represents a significant challenge for road network managers. This research examines the influence of numerous factors associated with incidents of various types on their duration. It presents a comprehensive traffic incident data mining and analysis based on an incident duration model developed from twelve months of incident data obtained from the Australian freeway network. Parametric accelerated failure time (AFT) survival models of incident duration were developed, including log-logistic, lognormal, and Weibull models, considering both fixed and random parameters, as well as a Weibull model with gamma heterogeneity. The Weibull AFT models with random parameters were appropriate for modelling incident durations arising from crashes and hazards, while a Weibull model with gamma heterogeneity was most suitable for modelling the durations of stationary-vehicle incidents. Significant variables affecting incident duration include characteristics of the incidents (severity, type, towing requirements, etc.) and the location, time of day, and traffic characteristics of the incident. Moreover, the findings reveal no significant effects of infrastructure and weather on incident duration. A significant and unique contribution of this paper is the finding that the durations of each incident type are distinct and respond to different factors. The results of this study are useful for traffic incident management agencies implementing strategies to reduce incident duration, leading to reduced congestion, secondary incidents, and the associated human and economic losses.
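A fixed-parameter Weibull AFT model of the kind listed above can be sketched as a censored maximum-likelihood fit: the covariate multiplies the duration scale, and right-censored incidents contribute the survival function instead of the density. The covariate interpretation (towing required) and all numbers below are illustrative simulated data, not the paper's dataset or estimates.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, t, x, event):
    """Negative log-likelihood of a Weibull AFT model with right censoring.
    scale_i = exp(b0 + b1 * x_i); k is the common shape parameter."""
    b0, b1, log_k = params
    k = np.exp(log_k)
    scale = np.exp(b0 + b1 * x)
    z = (t / scale) ** k
    log_f = np.log(k) - k * np.log(scale) + (k - 1.0) * np.log(t) - z
    # events contribute log f(t); censored observations contribute log S(t) = -z
    return -np.sum(np.where(event, log_f, -z))

def fit_weibull_aft(t, x, event):
    t, x = np.asarray(t, float), np.asarray(x, float)
    event = np.asarray(event, bool)
    x0 = np.array([np.log(t.mean()), 0.0, 0.0])  # crude but safe start
    res = minimize(neg_loglik, x0, args=(t, x, event),
                   method="Nelder-Mead", options={"maxiter": 5000})
    b0, b1, log_k = res.x
    return b0, b1, np.exp(log_k)

# Simulated incident durations (minutes): the binary covariate (say, towing
# required) multiplies duration by exp(0.7); long incidents are censored.
rng = np.random.default_rng(1)
n = 2000
x = rng.integers(0, 2, n).astype(float)
t_true = np.exp(3.0 + 0.7 * x) * rng.weibull(1.5, n)
censor = rng.uniform(0.0, 200.0, n)
t_obs = np.minimum(t_true, censor)
event = t_true <= censor
b0, b1, shape = fit_weibull_aft(t_obs, x, event)
```

The fitted coefficients should approximately recover the simulated intercept (3.0), acceleration factor (0.7), and shape (1.5).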
Modeling and mitigating natural hazards: Stationarity is immortal!
NASA Astrophysics Data System (ADS)
Montanari, Alberto; Koutsoyiannis, Demetris
2014-12-01
Environmental change is a cause of serious concern, as it is occurring at an unprecedented pace and might increase natural hazards. Moreover, it is deemed to imply reduced representativeness of past experience and data on extreme hydroclimatic events. The latter concern has been epitomized by the statement that "stationarity is dead." Setting up policies for mitigating natural hazards, including those triggered by floods and droughts, is an urgent priority in many countries, and it implies practical activities of management, engineering design, and construction. These activities necessarily need to be properly informed, and therefore the research question on the value of past data is extremely important. We herein argue that there are mechanisms in hydrological systems that are time invariant, and that these may need to be interpreted through data inference. In particular, hydrological predictions are based on assumptions which should include stationarity. In fact, any hydrological model, including deterministic and nonstationary approaches, is affected by uncertainty and therefore should include a random component that is stationary. Given that an unnecessary resort to nonstationarity may imply a reduction of predictive capabilities, a pragmatic approach based on the exploitation of past experience and data is a necessary prerequisite for setting up mitigation policies for environmental risk.
VHub - Cyberinfrastructure for volcano eruption and hazards modeling and simulation
NASA Astrophysics Data System (ADS)
Valentine, G. A.; Jones, M. D.; Bursik, M. I.; Calder, E. S.; Gallo, S. M.; Connor, C.; Carn, S. A.; Rose, W. I.; Moore-Russo, D. A.; Renschler, C. S.; Pitman, B.; Sheridan, M. F.
2009-12-01
Volcanic risk is increasing as populations grow in active volcanic regions, and as national economies become increasingly intertwined. In addition to their significance to risk, volcanic eruption processes form a class of multiphase fluid dynamics with rich physics on many length and time scales. Risk significance, physics complexity, and the coupling of models to complex dynamic spatial datasets all demand the development of advanced computational techniques and interdisciplinary approaches to understand and forecast eruption dynamics. Innovative cyberinfrastructure is needed to enable global collaboration and novel scientific creativity, while simultaneously enabling computational thinking in real-world risk mitigation decisions - an environment where quality control, documentation, and traceability are key factors. Supported by NSF, we are developing a virtual organization, referred to as VHub, to address this need. Overarching goals of the VHub project are: Dissemination. Make advanced modeling and simulation capabilities and key data sets readily available to researchers, students, and practitioners around the world. Collaboration. Provide a mechanism for participants not only to be users but also co-developers of modeling capabilities, and contributors of experimental and observational data sets for use in modeling and simulation, in a collaborative environment that reaches far beyond local work groups. Comparison. Facilitate comparison between different models in order to provide the practitioners with guidance for choosing the "right" model, depending upon the intended use, and provide a platform for multi-model analysis of specific problems and incorporation into probabilistic assessments. Application. Greatly accelerate access and application of a wide range of modeling tools and related data sets to agencies around the world that are charged with hazard planning, mitigation, and response. Education. Provide resources that will promote the training of the
Lava flow hazard at Nyiragongo volcano, D.R.C.. 1. Model calibration and hazard mapping
NASA Astrophysics Data System (ADS)
Favalli, Massimiliano; Chirico, Giuseppe D.; Papale, Paolo; Pareschi, Maria Teresa; Boschi, Enzo
2009-05-01
The 2002 eruption of Nyiragongo volcano constitutes the most outstanding case ever of lava flows invading a large town. It also represents one of the very rare cases of direct casualties from lava flows, which reached high velocities of up to tens of kilometers per hour. As in the 1977 eruption, the only other eccentric eruption of the volcano in more than 100 years, lava flows were emitted from several vents along a N-S system of fractures extending for more than 10 km, from which they propagated mostly towards Lake Kivu and Goma, a town of about 500,000 inhabitants. We assessed the lava flow hazard on the entire volcano and in the towns of Goma (D.R.C.) and Gisenyi (Rwanda) through numerical simulations of probable lava flow paths. Lava flow paths are computed based on the steepest-descent principle, modified by stochastically perturbing the topography to take into account the capability of lava flows to override topographic obstacles, fill topographic depressions, and spread over the topography. Code calibration and the definition of the expected lava flow length and vent opening probability distributions were based on the 1977 and 2002 eruptions. The final lava flow hazard map shows that the eastern sector of Goma devastated in 2002 represents the area of highest hazard on the flanks of the volcano. The second highest hazard sector in Goma is the area of propagation of the western lava flow in 2002. The town of Gisenyi is subject to moderate to high hazard due to its proximity to the alignment of fractures active in 1977 and 2002. In a companion paper (Chirico et al., Bull Volcanol, in this issue, 2008) we use numerical simulations to investigate the possibility of reducing lava flow hazard through the construction of protective barriers, and formulate a proposal for the future development of the town of Goma.
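The stochastically perturbed steepest-descent idea can be sketched on a gridded DEM: at each cell the flow steps to the lowest of its eight neighbours after random noise is added to their elevations, so repeated runs let paths occasionally override small obstacles and yield a per-cell invasion probability. This is a simplified sketch of the approach described in the abstract, not the authors' calibrated code; the noise amplitude and step limit are assumptions.

```python
import random

def lava_path(dem, start, n_steps=100, noise=0.5, rng=None):
    """One stochastic steepest-descent path on a DEM (list of rows).

    At each cell, move to the lowest of the 8 neighbours after adding a
    uniform perturbation in [-noise, +noise] to each neighbour's elevation;
    stop when no (perturbed) neighbour is lower (the flow ponds)."""
    rng = rng or random.Random(0)
    rows, cols = len(dem), len(dem[0])
    r, c = start
    path = [(r, c)]
    for _ in range(n_steps):
        best, best_h = None, dem[r][c]  # only accept downhill moves
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    h = dem[nr][nc] + rng.uniform(-noise, noise)
                    if h < best_h:
                        best, best_h = (nr, nc), h
        if best is None:
            break  # local depression reached
        r, c = best
        path.append((r, c))
    return path
```

Running `lava_path` many times with different random seeds and counting how often each cell is visited gives a crude invasion-probability map in the spirit of the hazard maps described above.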
NASA Astrophysics Data System (ADS)
Zhu, Wenlong; Ma, Shoufeng; Tian, Junfang; Li, Geng
2016-11-01
Travelers' route adjustment behaviors in a congested road traffic network can be viewed as a dynamic game among travelers. Proportional-Switch Adjustment Process (PSAP) models have been extensively investigated to characterize travelers' route choice behaviors, since PSAP has a concise structure and an intuitive behavior rule. Unfortunately, most existing models have limitations, e.g., the flow over-adjustment problem in the discrete PSAP model and the absolute-cost-difference route adjustment problem. This paper proposes a relative-Proportion-based Route Adjustment Process (rePRAP) that maintains the advantages of PSAP while overcoming these limitations. The rePRAP describes the situation in which travelers on a higher-cost route switch to lower-cost alternatives at a rate that depends solely on the relative cost differences between the higher-cost route and its alternatives. It is verified to be consistent with the principle of the rational behavior adjustment process. The equivalence among user equilibrium (UE), the stationary path flow pattern, and the stationary link flow pattern is established, which can be used to judge whether a given network traffic flow has reached UE by detecting whether the link flow pattern is stationary. The stability theorem is proved by the Lyapunov function approach. A simple example is tested to demonstrate the effectiveness of the rePRAP model.
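The relative-proportion switching rule can be sketched on a two-route network with linear congestion costs: flow on the costlier route shifts toward the cheaper one in proportion to the relative (not absolute) cost difference, and iterating should settle at user equilibrium, where costs equalise. The exact functional form below is an assumed reading of the rule from the abstract, not the paper's formulation.

```python
def reprap_step(flows, costs, step=0.05):
    """One discrete rePRAP-style update (assumed form): flow on each
    higher-cost route i shifts toward each cheaper route j in proportion
    to the relative cost difference (c_i - c_j) / c_i."""
    new = list(flows)
    for i, ci in enumerate(costs):
        for j, cj in enumerate(costs):
            if ci > cj:
                shift = step * flows[i] * (ci - cj) / ci
                new[i] -= shift
                new[j] += shift
    return new

def solve_two_route(total=10.0, iters=2000):
    """Two parallel routes with linear congestion costs c1 = 1 + 2*f1 and
    c2 = 3 + f2; iterate the adjustment process toward user equilibrium."""
    f1 = total / 2.0
    costs = None
    for _ in range(iters):
        f2 = total - f1  # total demand is conserved by the switch rule
        costs = [1.0 + 2.0 * f1, 3.0 + 1.0 * f2]
        f1, f2 = reprap_step([f1, f2], costs)
    return f1, f2, costs
```

For these costs and a demand of 10, equal costs require 1 + 2*f1 = 3 + (10 - f1), i.e. the UE split f1 = 4, f2 = 6 with a common cost of 9, which the iteration approaches.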
Brown, Nathanael J. K.; Gearhart, Jared Lee; Jones, Dean A.; Nozick, Linda Karen; Prince, Michael
2013-09-01
Currently, much of protection planning is conducted separately for each infrastructure and hazard. Limited funding requires a balance of expenditures between terrorism and natural hazards based on potential impacts. This report documents the results of a Laboratory Directed Research & Development (LDRD) project that created a modeling framework for investment planning in interdependent infrastructures focused on multiple hazards, including terrorism. To develop this framework, three modeling elements were integrated: natural hazards, terrorism, and interdependent infrastructures. For natural hazards, a methodology was created for specifying events consistent with regional hazards. For terrorism, we modeled the terrorists' actions based on assumptions regarding their knowledge, goals, and target identification strategy. For infrastructures, we focused on predicting post-event performance due to specific terrorist attacks and natural hazard events, tempered by appropriate infrastructure investments. We demonstrate the utility of this framework with various examples, including protection of electric power, roadway, and hospital networks.
Preliminary deformation model for National Seismic Hazard map of Indonesia
Meilano, Irwan; Gunawan, Endra; Sarsito, Dina; Prijatna, Kosasih; Abidin, Hasanuddin Z.; Susilo,; Efendi, Joni
2015-04-24
The preliminary deformation model for Indonesia's National Seismic Hazard (NSH) map is constructed from block rotation and a strain accumulation function in an elastic half-space. Deformation due to rigid body motion is estimated by rotating six tectonic blocks in Indonesia. The interseismic deformation due to subduction is estimated by assuming coupling on the subduction interface, while deformation at active faults is calculated by assuming that each fault segment slips beneath a locking depth, or in combination with creep in a shallower part. This research shows that rigid body motion dominates the deformation pattern, with magnitudes of more than 15 mm/year, except in narrow areas near subduction zones and active faults, where significant deformation reaches 25 mm/year.
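The rigid-body term of such a block model reduces to rotation about an Euler pole: a point's horizontal speed is the rotation rate times Earth's radius times the sine of its angular distance from the pole. The sketch below uses a hypothetical pole and rotation rate on a spherical Earth; it illustrates the geometry only, not the six-block model of the paper.

```python
import math

EARTH_RADIUS_KM = 6371.0  # spherical Earth assumed

def block_speed_mm_per_yr(site_lat, site_lon, pole_lat, pole_lon,
                          omega_deg_per_myr):
    """Horizontal speed of a point on a rigid rotating block:
    |v| = omega * R * sin(angular distance from the Euler pole)."""
    la1, lo1, la2, lo2 = map(math.radians,
                             (site_lat, site_lon, pole_lat, pole_lon))
    # angular distance via the spherical law of cosines
    cos_d = (math.sin(la1) * math.sin(la2)
             + math.cos(la1) * math.cos(la2) * math.cos(lo1 - lo2))
    d = math.acos(max(-1.0, min(1.0, cos_d)))
    omega_rad_per_yr = math.radians(omega_deg_per_myr) / 1e6
    return omega_rad_per_yr * (EARTH_RADIUS_KM * 1e6) * math.sin(d)  # mm/yr
```

For instance, a block rotating at 0.5°/Myr about a pole at the north pole moves an equatorial site at roughly 56 mm/yr, the same order as the >15 mm/yr rigid-motion signal quoted above.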
Schiller, Steven R.; Warren, Mashuri L.; Auslander, David M.
1980-11-01
In this paper, common control strategies used to regulate the flow of liquid through flat-plate solar collectors are discussed and evaluated using a dynamic collector model. Performance of all strategies is compared using different set points, flow rates, insolation levels and patterns, and ambient temperature conditions. The unique characteristic of the dynamic collector model is that it includes the effect of collector capacitance. Short term temperature response and the energy-storage capability of collector capacitance are shown to play significant roles in comparing on/off and proportional controllers. Inclusion of these effects has produced considerably more realistic simulations than any generated by steady-state models. Finally, simulations indicate relative advantages and disadvantages of both types of controllers, conditions under which each performs better, and the importance of pump cycling and controller set points on total energy collection.
Hydraulic modeling for lahar hazards at cascades volcanoes
Costa, J.E.
1997-01-01
The National Weather Service flood routing model DAMBRK is able to closely replicate field-documented stages of historic and prehistoric lahars from Mt. Rainier, Washington, and Mt. Hood, Oregon. Modeled times-of-travel of flow waves are generally consistent with documented lahar travel times from other volcanoes around the world. The model adequately replicates a range of lahars and debris flows, from the 230 million m3 Electron lahar from Mt. Rainier down to a 10 m3 debris flow generated in a large outdoor experimental flume. The model is used to simulate a hypothetical lahar with a volume of 50 million m3 down the East Fork Hood River from Mt. Hood, Oregon. Although a flow such as this is thought to be possible in the Hood River valley, no field evidence exists on which to base a hazards assessment. DAMBRK seems likely to be usable in many volcanic settings to estimate discharge, velocity, and inundation areas of lahars when input hydrographs and energy-loss coefficients can be reasonably estimated.
Research collaboration, hazard modeling and dissemination in volcanology with Vhub
NASA Astrophysics Data System (ADS)
Palma Lizana, J. L.; Valentine, G. A.
2011-12-01
Vhub (online at vhub.org) is a cyberinfrastructure for collaboration in volcanology research, education, and outreach. One of the core objectives of this project is to accelerate the transfer of research tools to organizations and stakeholders charged with volcano hazard and risk mitigation (such as observatories). Vhub offers a clearinghouse for computational models of volcanic processes and data analysis, documentation of those models, and capabilities for online collaborative groups focused on issues such as code development, configuration management, benchmarking, and validation. A subset of simulations is already available for online execution, eliminating the need to download and compile locally. In addition, Vhub is a platform for sharing presentations and other educational material in a variety of media formats, which are useful in teaching university-level volcanology. VHub also has wikis, blogs and group functions around specific topics to encourage collaboration and discussion. In this presentation we provide examples of the vhub capabilities, including: (1) tephra dispersion and block-and-ash flow models; (2) shared educational materials; (3) online collaborative environment for different types of research, including field-based studies and plume dispersal modeling; (4) workshops. Future goals include implementation of middleware to allow access to data and databases that are stored and maintained at various institutions around the world. All of these capabilities can be exercised with a user-defined level of privacy, ranging from completely private (only shared and visible to specified people) to completely public. The volcanological community is encouraged to use the resources of vhub and also to contribute models, datasets, and other items that authors would like to disseminate. The project is funded by the US National Science Foundation and includes a core development team at University at Buffalo, Michigan Technological University, and University
Landslide-Generated Tsunami Model for Quick Hazard Assessment
NASA Astrophysics Data System (ADS)
Franz, M.; Rudaz, B.; Locat, J.; Jaboyedoff, M.; Podladchikov, Y.
2015-12-01
Alpine regions are likely to be at risk from landslide-induced tsunamis because of the proximity between lakes and potential instabilities and the concentration of population in valleys and on lake shores. In particular, dam lakes are often surrounded by steep slopes and frequently affect the stability of their banks. In order to assess this phenomenon comprehensively, together with the induced risks, we have developed a 2.5D numerical model which aims to simulate the propagation of the landslide, the generation and propagation of the wave, and eventually the spread on the shores or the associated downstream flow. The process is carried out in three steps. Firstly, the geometry of the sliding mass is constructed using the Sloping Local Base Level (SLBL) concept. Secondly, the propagation of this volume is performed using a model based on viscous flow equations. Finally, the wave generation and its propagation are simulated using the shallow water equations stabilized by the Lax-Friedrichs scheme. The transition between wet and dry bed is handled by the combination of the two latter sets of equations. The proper behavior of our model is demonstrated by (1) numerical tests from Toro (2001), and (2) comparison with a real event where the horizontal run-up distance is known (Nicolet landslide, Quebec, Canada). The model is of particular interest due to its ability to quickly produce the 2.5D geometric model of the landslide and the tsunami simulation and, consequently, the hazard assessment.
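The wave-propagation stage can be illustrated with a minimal 1-D shallow water solver using the stabilising (and diffusive) Lax-Friedrichs scheme on a flat bed, here applied to a dam-break-like initial step. The boundary treatment and parameters are assumptions for illustration; the paper's 2.5D model, wet/dry transition, and landslide coupling are not reproduced.

```python
import numpy as np

def shallow_water_lf(h0, hu0, dx, dt, n_steps, g=9.81):
    """1-D shallow water equations (flat bed) advanced with the
    Lax-Friedrichs scheme; crude solid-wall boundaries."""
    h, hu = np.array(h0, float), np.array(hu0, float)
    for _ in range(n_steps):
        u = np.where(h > 1e-8, hu / np.maximum(h, 1e-8), 0.0)
        flux_h = hu                            # mass flux
        flux_hu = hu * u + 0.5 * g * h * h     # momentum flux + pressure

        def lf_update(q, f):
            qn = q.copy()
            qn[1:-1] = (0.5 * (q[2:] + q[:-2])
                        - dt / (2.0 * dx) * (f[2:] - f[:-2]))
            qn[0], qn[-1] = qn[1], qn[-2]      # copy-out boundary cells
            return qn

        h, hu = lf_update(h, flux_h), lf_update(hu, flux_hu)
        hu[0] = hu[-1] = 0.0                   # no flow through the walls
    return h, hu
```

Starting from a 2 m / 1 m step of still water, the scheme smears the step into a wave system while conserving water volume until the waves reach the boundaries.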
Methodology Using MELCOR Code to Model Proposed Hazard Scenario
Gavin Hawkley
2010-07-01
This study demonstrates a methodology for using the MELCOR code to model a proposed hazard scenario within a building containing radioactive powder, and the subsequent evaluation of the leak path factor (LPF), the fraction of respirable material that escapes the facility into the outside environment, implicit in the scenario. The LPF evaluation analyzes the basis and applicability of an assumed standard multiplication of 0.5 × 0.5 (in which 0.5 represents the fraction of material assumed to leave one area and enter another) for calculating an LPF value. The outside release depends upon the ventilation/filtration system, both filtered and unfiltered, and on other pathways from the building, such as doorways, both open and closed. This study shows how the multiple LPFs within the building can be evaluated in a combinatory process in which a total LPF is calculated, thus addressing the assumed multiplication and allowing for the designation and assessment of a respirable source term (ST) for later consequence analysis, in which the propagation of material released into the atmosphere can be modeled and the dose received by a downwind receptor can be estimated, with the distance adjusted to maintain such exposures as low as reasonably achievable (ALARA). This study also briefly addresses particle characteristics that affect atmospheric particle dispersion, and compares this dispersion with the LPF methodology.
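The combinatory idea can be sketched directly: for barriers in series, the fraction escaping is the product of the per-stage fractions (the 0.5 × 0.5 assumption is the two-stage case), and the total LPF then feeds the standard five-factor source-term formula. This is an illustrative sketch in the spirit of DOE-HDBK-3010-style analysis, not the MELCOR evaluation itself; parallel pathways (e.g. filtered versus unfiltered routes) would instead be summed with flow weights.

```python
def total_leak_path_factor(stage_lpfs):
    """Total LPF for transport stages in series: the respirable fraction
    escaping the building is the product of the fractions passing each
    barrier. An empty list means no barrier (LPF = 1)."""
    total = 1.0
    for lpf in stage_lpfs:
        if not 0.0 <= lpf <= 1.0:
            raise ValueError("each LPF must be a fraction between 0 and 1")
        total *= lpf
    return total

def source_term(mar, dr, arf, rf, lpf):
    """Five-factor respirable source term ST = MAR * DR * ARF * RF * LPF
    (material at risk, damage ratio, airborne release fraction,
    respirable fraction, leak path factor)."""
    return mar * dr * arf * rf * lpf
```

With two 0.5 stages the total LPF is 0.25, which then scales the respirable source term used in the downstream dose calculation.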
A mental models approach to exploring perceptions of hazardous processes
Bostrom, A.H.H.
1990-01-01
Based on mental models theory, a decision-analytic methodology is developed to elicit and represent perceptions of hazardous processes. An application to indoor radon illustrates the methodology. Open-ended interviews were used to elicit non-experts' perceptions of indoor radon, with explicit prompts for knowledge about health effects, exposure processes, and mitigation. Subjects then sorted photographs into radon-related and unrelated piles, explaining their rationale aloud as they sorted. Subjects demonstrated a small body of correct but often unspecific knowledge about exposure and effects processes. Most did not mention radon-decay processes, and seemed to rely on general knowledge about gases, radioactivity, or pollution to make inferences about radon. Some held misconceptions about contamination and health effects resulting from exposure to radon. In two experiments, subjects reading brochures designed according to the author's guidelines outperformed subjects reading a brochure distributed by the EPA on a diagnostic test, and did at least as well on an independently designed quiz. In both experiments, subjects who read any one of the brochures had more complete and correct knowledge about indoor radon than subjects who did not, whose knowledge resembled the radon-interview subjects'.
Evaluating the hazard from Siding Spring dust: Models and predictions
NASA Astrophysics Data System (ADS)
Christou, A.
2014-12-01
Long-period comet C/2013 A1 (Siding Spring) will pass at a distance of ~140 thousand km (9e-4 AU) - about a third of a lunar distance - from the centre of Mars, closer to this planet than any known comet has come to the Earth since records began. Closest approach is expected to occur at 18:30 UT on the 19th October. This provides an opportunity for a "free" flyby of a different type of comet than those investigated by spacecraft so far, including comet 67P/Churyumov-Gerasimenko currently under scrutiny by the Rosetta spacecraft. At the same time, the passage of the comet through Martian space will create the opportunity to study the reaction of the planet's upper atmosphere to a known natural perturbation. The flip-side of the coin is the risk to Mars-orbiting assets, both existing (NASA's Mars Odyssey & Mars Reconnaissance Orbiter and ESA's Mars Express) and in transit (NASA's MAVEN and ISRO's Mangalyaan) by high-speed cometary dust potentially impacting spacecraft surfaces. Much work has already gone into assessing this hazard and devising mitigating measures in the precious little warning time given to characterise this object until Mars encounter. In this presentation, we will provide an overview of how the meteoroid stream and comet coma dust impact models evolved since the comet's discovery and discuss lessons learned should similar circumstances arise in the future.
Hidden Markov models for estimating animal mortality from anthropogenic hazards
Carcasses searches are a common method for studying the risk of anthropogenic hazards to wildlife, including non-target poisoning and collisions with anthropogenic structures. Typically, numbers of carcasses found must be corrected for scavenging rates and imperfect detection. ...
Veas, Alejandro; Gilar, Raquel; Miñano, Pablo; Castejón, Juan-Luis
2016-01-01
There are very few studies in Spain that treat underachievement rigorously, and those that do are typically related to gifted students. The present study examined the proportion of underachieving students using the Rasch measurement model. A sample of 643 first-year high school students (mean age = 12.09; SD = 0.47) from 8 schools in the province of Alicante (Spain) completed the Battery of Differential and General Skills (Badyg), and these students' Grade Point Averages (GPAs) were obtained from teachers. Dichotomous and Partial Credit Rasch models were fitted. After adjusting the measurement instruments, the individual underachievement index identified a total of 181 underachieving students, or 28.14% of the total sample across the ability levels. This study confirms that the Rasch measurement model can accurately assess the construct validity of both the intelligence test and the academic grades for the identification of underachieving students. Furthermore, the present study constitutes a pioneering framework for estimating the prevalence of underachievement in Spain. PMID:26973586
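The dichotomous Rasch model places persons and items on one logit scale, with the probability of a correct response depending only on the difference between person ability and item difficulty; an underachievement index can then compare a student's ability measure with an achievement measure on the same scale. The flagging threshold below is purely illustrative; the paper's exact criterion is not given in the abstract.

```python
import math

def rasch_prob(ability, difficulty):
    """Dichotomous Rasch model: P(correct) = 1 / (1 + exp(-(b - d))),
    with person ability b and item difficulty d in logits."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def is_underachiever(ability_logit, achievement_logit, threshold=1.0):
    """Flag a student whose achievement measure falls more than `threshold`
    logits below the ability measure (illustrative index, assumed form)."""
    return (ability_logit - achievement_logit) > threshold
```

For example, a person 2 logits above an item's difficulty answers it correctly about 88% of the time, and a student whose GPA measure sits 1.5 logits below their ability measure would be flagged by this illustrative index.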
Modelling Inland Flood Events for Hazard Maps in Taiwan
NASA Astrophysics Data System (ADS)
Ghosh, S.; Nzerem, K.; Sassi, M.; Hilberts, A.; Assteerawatt, A.; Tillmanns, S.; Mathur, P.; Mitas, C.; Rafique, F.
2015-12-01
Taiwan experiences significant inland flooding, driven by torrential rainfall from plum-rain storms and typhoons during summer and fall. Over the last 13 to 16 years, such floods have damaged about 3,000 buildings annually, with losses of US$0.41 billion (Water Resources Agency). This long, narrow island nation with mostly hilly/mountainous topography lies in the tropical-subtropical zone, with an annual average typhoon-hit frequency of 3-4 (Central Weather Bureau) and annual average precipitation of 2502 mm (WRA), 2.5 times the world average. Spatial and temporal distributions of countrywide precipitation are uneven, with very high local extreme rainfall intensities. Annual average precipitation is 3000-5000 mm in the mountainous regions, 78% of which falls in May-October, and the 1-hour to 3-day maximum rainfalls are about 85 to 93% of the world records (WRA). Rivers in Taiwan are short, with small upstream areas and high watershed runoff coefficients. These rivers have the steepest slopes, the shortest response times with rapid flows, and the largest peak flows as well as specific flood peak discharges (WRA) in the world. RMS has recently developed a countrywide inland flood model for Taiwan, producing hazard return-period maps at 1-arcsec grid resolution. These can be the basis for evaluating and managing flood risk, its economic impacts, and insured flood losses. The model is initiated with sub-daily historical meteorological forcings and calibrated to daily discharge observations at about 50 river gauges over the period 2003-2013. Simulations of hydrologic processes, via rainfall-runoff and routing models, are subsequently performed based on a 10,000-year set of stochastic forcing. The rainfall-runoff model is a physically based, continuous, semi-distributed model of catchment hydrology. The 1-D wave propagation hydraulic model considers catchment runoff in routing and describes large-scale transport processes along the river. It also accounts for reservoir storage
Conceptual geoinformation model of natural hazards risk assessment
NASA Astrophysics Data System (ADS)
Kulygin, Valerii
2016-04-01
Natural hazards are a major threat to safe interactions between nature and society. The assessment of natural hazard impacts and their consequences is important in spatial planning and resource management. Today there is a challenge to advance our understanding of how socioeconomic and climate changes will affect the frequency and magnitude of hydro-meteorological hazards and associated risks. However, the impacts of different types of natural hazards on various marine and coastal economic activities are not of the same type. In this study, a conceptual geomodel of risk assessment is presented to highlight the differentiation by type of economic activity in extreme-event risk assessment. The marine and coastal ecosystems are considered as the objects of management, on the one hand, and as the place of origin of natural hazards, on the other. One of the key elements in describing such systems is the spatial characterization of their components. Assessment of ecosystem state is based on ecosystem indicators (indexes), which are used to identify changes over time. A scenario approach is utilized to account for spatio-temporal dynamics and uncertainty factors. Two types of scenarios are considered: scenarios of the use of ecosystem services by economic activities, and scenarios of extreme events and related hazards. The reported study was funded by RFBR, according to research project No. 16-35-60043 mol_a_dk.
Standards and Guidelines for Numerical Models for Tsunami Hazard Mitigation
NASA Astrophysics Data System (ADS)
Titov, V.; Gonzalez, F.; Kanoglu, U.; Yalciner, A.; Synolakis, C. E.
2006-12-01
An increasing number of nations around the world need to develop tsunami mitigation plans, which invariably involve inundation maps for warning guidance and evacuation planning. There is the risk that inundation maps may be produced with older or untested methodology, as there are currently no standards for modeling tools. In the aftermath of the 2004 megatsunami, some models were used to model inundation for Cascadia events with results much larger than sediment records and existing state-of-the-art studies suggest, leading to confusion among emergency managers. Incorrectly assessing tsunami impact is hazardous, as recent events in 2006 in Tonga, Kythira (Greece), and Central Java have suggested (Synolakis and Bernard, 2006). To calculate tsunami currents, forces and runup on coastal structures, and inundation of coastlines, one must numerically calculate the evolution of the tsunami wave from the deep ocean to its target site. No matter what the numerical model, validation (the process of ensuring that the model solves the parent equations of motion accurately) and verification (the process of ensuring that the model represents geophysical reality appropriately) are both essential. Validation ensures that the model performs well in a wide range of circumstances and is accomplished through comparison with analytical solutions. Verification ensures that the computational code performs well over a range of geophysical problems. A few analytic solutions have themselves been validated with laboratory data. Even fewer existing numerical models have been both validated with the analytical solutions and verified with both laboratory and field measurements, thus establishing a gold standard for numerical codes for inundation mapping. While there is in principle no absolute certainty that a numerical code that has performed well in all the benchmark tests will also produce correct inundation predictions with any given source motions, validated codes
The influence of mapped hazards on risk beliefs: a proximity-based modeling approach.
Severtson, Dolores J; Burt, James E
2012-02-01
Interview findings suggest perceived proximity to mapped hazards influences risk beliefs when people view environmental hazard maps. For dot maps, four attributes of mapped hazards influenced beliefs: hazard value, proximity, prevalence, and dot patterns. In order to quantify the collective influence of these attributes for viewers' perceived or actual map locations, we present a model to estimate proximity-based hazard or risk (PBH) and share study results that indicate how modeled PBH and map attributes influenced risk beliefs. The randomized survey study among 447 university students assessed risk beliefs for 24 dot maps that systematically varied by the four attributes. Maps depicted water test results for a fictitious hazardous substance in private residential wells and included a designated "you live here" location. Of the nine variables that assessed risk beliefs, the numerical susceptibility variable was most consistently and strongly related to map attributes and PBH. Hazard value, location in or out of a clustered dot pattern, and distance had the largest effects on susceptibility. Sometimes, hazard value interacted with other attributes, for example, distance had stronger effects on susceptibility for larger than smaller hazard values. For all combined maps, PBH explained about the same amount of variance in susceptibility as did attributes. Modeled PBH may have utility for studying the influence of proximity to mapped hazards on risk beliefs, protective behavior, and other dependent variables. Further work is needed to examine these influences for more realistic maps and representative study samples.
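The paper's actual PBH formula is not reproduced above; a plausible sketch is an inverse-distance-weighted aggregation of the mapped hazard values, so that closer and higher-value dots contribute more to the estimate (the weighting function and parameters here are assumptions, not Severtson and Burt's model):

```python
import math

# Hypothetical proximity-based hazard (PBH) sketch: aggregate mapped dot
# hazard values weighted by inverse distance to the viewer's location.
def pbh(viewer, dots, eps=1.0):
    """viewer: (x, y); dots: list of (x, y, hazard_value); eps avoids
    division by zero at the viewer's own location."""
    total = 0.0
    for x, y, value in dots:
        d = math.hypot(x - viewer[0], y - viewer[1])
        total += value / (d + eps)   # nearer, higher-value dots dominate
    return total
```

Under this sketch, a high-value dot next to the "you live here" marker yields a larger PBH than the same dot far away, matching the distance effects reported in the study.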
Modelling the costs of natural hazards in games
NASA Astrophysics Data System (ADS)
Bostenaru-Dan, M.
2012-04-01
Games that simulate city development are sought today, including a development at the University of Torino called SimTorino, which simulates the development of the city over the next 20 years. The connection to another game genre besides video games, board games, will be investigated, since there are games on the construction and reconstruction of a cathedral, its tower, and a bridge in an urban environment of the Middle Ages, based on Ken Follett's two novels "Pillars of the Earth" and "World Without End", and also more recent games such as "Urban Sprawl" or the Romanian game "Habitat", dealing with the man-made hazard of demolition. A review of these games will be provided based on first-hand playing experience. In games like "World Without End" or "Pillars of the Earth", just as in the recently popular Zynga games on social networks, construction management is done by "building" an item out of stylised materials, such as "stone" and "sand", or more specific ones such as "nail". Such an approach could also be used for retrofitting buildings against earthquakes, as a kind of "upgrade" rather than only the extension currently found in games, and this is what our research is about. "World Without End" includes a natural disaster not much analysed today but judged by the author to be the worst in human history: the Black Death. The Black Death has effects and costs as well, modelled not only through action cards but also on the built environment, through buildings left empty. On the other hand, games such as "Habitat" rely on role playing, which has recently been recognised as a way to bring game theory to decision making through the so-called contribution of drama, a way to solve conflicts through balancing instead of weighting, and thus related to the Analytic Hierarchy Process. The presentation also aims to give hints on how to design a game for the problem of earthquake retrofit, translating the aims of the actors in such a process into role playing.
Games are also employed in teaching of urban
NASA Astrophysics Data System (ADS)
Costa, Antonio
2016-04-01
Volcanic hazards may have destructive effects on the economy, transport, and natural environments at both local and regional scales. Hazardous phenomena include pyroclastic density currents, tephra fall, gas emissions, lava flows, debris flows and avalanches, and lahars. Volcanic hazard assessment is based on available information to characterize potential volcanic sources in the region of interest and to determine whether specific volcanic phenomena might reach a given site. It is focussed on estimating the distances that volcanic phenomena could travel from potential sources and their intensity at the considered site. Epistemic and aleatory uncertainties strongly affect the resulting hazard assessment. Within the context of critical infrastructures, volcanic eruptions are rare natural events that can create severe hazards. In addition to being rare events, evidence of many past volcanic eruptions is poorly preserved in the geologic record. The models used for describing the impact of volcanic phenomena span a range of complexities, from simplified physics-based conceptual models to highly coupled thermo-fluid-dynamical approaches. Modelling approaches represent a hierarchy of complexity, which reflects increasing requirements for well-characterized data in order to produce a broader range of output information. In selecting models for the hazard analysis of a specific phenomenon, the questions that the models must answer need to be carefully considered. Independently of the model, the final hazard assessment strongly depends on input derived from detailed volcanological investigations, such as mapping and stratigraphic correlations. For each phenomenon, an overview of currently available approaches for the evaluation of future hazards will be presented, with the aim of providing a foundation for future work in developing an international consensus on volcanic hazard assessment methods.
Expert elicitation for a national-level volcano hazard model
NASA Astrophysics Data System (ADS)
Bebbington, Mark; Stirling, Mark; Cronin, Shane; Wang, Ting; Jolly, Gill
2016-04-01
The quantification of volcanic hazard at the national level is a vital prerequisite to placing volcanic risk on a platform that permits meaningful comparison with other hazards such as earthquakes. New Zealand has up to a dozen dangerous volcanoes, with the usual mixed degrees of knowledge concerning their temporal and spatial eruptive history. Information on the 'size' of the eruptions, whether in terms of VEI, volume, or duration, is sketchy at best. These limitations and the need for a uniform approach lend themselves to a subjective hazard analysis via expert elicitation. Approximately 20 New Zealand volcanologists provided estimates for the size of the next eruption from each volcano and, conditional on this, its location, timing, and duration. Opinions were likewise elicited from a control group of statisticians, seismologists, and (geo)chemists, all of whom had at least heard the term 'volcano'. The opinions were combined via Cooke's classical method. We will report on the preliminary results from the exercise.
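The aggregation step of Cooke's classical method is a performance-weighted linear opinion pool. A minimal sketch of that final step follows; in the full method the weights are derived from calibration and information scores on seed questions, whereas here they are simply given (the numbers are illustrative, not the study's elicited values):

```python
# Performance-weighted linear opinion pool: the combination step of
# Cooke's classical method. Weights are supplied directly here; the full
# method computes them from calibration/information scores.
def pooled_distribution(expert_probs, weights):
    """Combine each expert's probability vector over the same outcomes."""
    total_w = sum(weights)
    n_outcomes = len(expert_probs[0])
    return [
        sum(w * probs[i] for probs, w in zip(expert_probs, weights)) / total_w
        for i in range(n_outcomes)
    ]

# Two experts assess P(eruption size: small, large); expert 2 carries
# three times the weight of expert 1.
pooled = pooled_distribution([[0.2, 0.8], [0.6, 0.4]], [1, 3])
```

The pooled vector remains a valid probability distribution whenever each expert's vector is one.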
Mozo, I; Lesage, G; Yin, J; Bessiere, Y; Barna, L; Sperandio, M
2012-10-15
The aerobic biological process is one of the best technologies available for removing hazardous organic substances from industrial wastewaters. But in the case of volatile organic compounds (benzene, toluene, ethylbenzene, p-xylene, naphthalene), volatilization can contribute significantly to their removal from the liquid phase. One major issue is to predict the competition between volatilization and biodegradation in a biological process, depending on the target molecule. The aim of this study was to develop an integrated dynamic model to evaluate the influence of operating conditions, kinetic parameters, and physical properties of the molecule on the main removal pathways (biodegradation and volatilization) for Volatile Organic Compounds (VOC). After a comparison with experimental data, sensitivity studies were carried out in order to optimize the aerated biological process. Acclimatized biomass growth is limited by volatilization, which reduces the bioavailability of the substrate. Moreover, the amount of biodegraded substrate is directly proportional to the amount of active biomass stabilized in the process. Model outputs predict that biodegradation is enhanced at high SRT for molecules with a low Henry coefficient (H) and for populations with a high growth rate. Air flow rate should be optimized to meet the oxygen demand and to minimize VOC stripping. Finally, the feeding strategy was found to be the most influential operating parameter that should be adjusted in order to enhance VOC biodegradation and to limit volatilization in sequencing batch reactors (SBR).
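The competition the abstract describes can be sketched as a liquid-phase mass balance with first-order gas stripping (scaled by the Henry coefficient H and air flow) competing against Monod biodegradation. All parameters below are hypothetical, not the calibrated model:

```python
# Sketch of volatilization-vs-biodegradation competition for a VOC in an
# aerated reactor (hypothetical parameters, explicit Euler integration).
def voc_fate(c0=10.0, x=500.0, h=0.2, q_air=2.0, v=1.0,
             mu_max=0.05, ks=2.0, y=0.5, dt=0.01, t_end=24.0):
    """Integrate dC/dt = -stripping - biodegradation; return the final
    concentration and the cumulative mass removed by each pathway."""
    c, stripped, degraded = c0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        r_strip = h * q_air / v * c                    # gas stripping
        r_bio = mu_max * c / (ks + c) * x / y / 1000   # Monod biodegradation
        c = max(c - (r_strip + r_bio) * dt, 0.0)
        stripped += r_strip * dt
        degraded += r_bio * dt
    return c, stripped, degraded
```

Lowering `h` in this sketch shifts the removal split toward biodegradation, consistent with the abstract's conclusion that biodegradation is favoured for low-H molecules.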
Wang, Junsong; Niebur, Ernst; Hu, Jinyu; Li, Xiaoli
2016-01-01
Closed-loop control is a promising deep brain stimulation (DBS) strategy that could be used to suppress high-amplitude epileptic activity. However, there are currently no analytical approaches to determine the stimulation parameters for effective and safe treatment protocols. Proportional-integral (PI) control is the most extensively used closed-loop control scheme in the field of control engineering because of its simple implementation and perfect performance. In this study, we took Jansen’s neural mass model (NMM) as a test bed to develop a PI-type closed-loop controller for suppressing epileptic activity. A graphical stability analysis method was employed to determine the stabilizing region of the PI controller in the control parameter space, which provided a theoretical guideline for the choice of the PI control parameters. Furthermore, we established the relationship between the parameters of the PI controller and the parameters of the NMM in the form of a stabilizing region, which provided insights into the mechanisms that may suppress epileptic activity in the NMM. The simulation results demonstrated the validity and effectiveness of the proposed closed-loop PI control scheme. PMID:27273563
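The stabilizing-region idea can be illustrated on a toy linear plant rather than the full nonlinear neural mass model: for dx/dt = a·x + u with a > 0 (unstable), PI feedback u = -kp·x - ki·∫x dt gives the closed-loop characteristic polynomial s² + (kp - a)s + ki, which is stable exactly when kp > a and ki > 0. A minimal sketch (plant and gains hypothetical, not the NMM's):

```python
# Toy PI-stabilization sketch (not Jansen's NMM): plant dx/dt = a*x + u,
# PI feedback u = -kp*x - ki*integral(x). Stabilizing region: kp > a, ki > 0.
def simulate_pi(a=1.0, kp=3.0, ki=2.0, x0=1.0, dt=0.001, t_end=10.0):
    """Euler-integrate the closed loop; return |x| at the final time."""
    x, integ = x0, 0.0
    for _ in range(int(t_end / dt)):
        u = -kp * x - ki * integ   # PI control law
        x += (a * x + u) * dt
        integ += x * dt
    return abs(x)
```

Gains inside the region drive the state to zero; gains with kp < a leave the loop unstable, mirroring how the graphical stability analysis partitions the (kp, ki) plane.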
Castronovo, A Margherita; Negro, Francesco; Farina, Dario
2015-01-01
Motor neurons in the spinal cord receive synaptic input that comprises common and independent components. The part of the synaptic input that is common to all motor neurons is the one regulating the production of force. Therefore, its quantification is important to assess the strategy used by the Central Nervous System (CNS) to control and regulate movements, especially in physiological conditions such as fatigue. In this study we present and validate a method to estimate the ratio between the strengths of common and independent inputs to motor neurons, and we apply this method to investigate its changes during fatigue. By means of coherence analysis we estimated the level of correlation between motor unit spike trains at the beginning and at the end of fatiguing contractions of the Tibialis Anterior muscle at three different force targets. Combining theoretical modeling and experimental data, we estimated the strength of the common synaptic input relative to the independent one. We observed a consistent increase in the proportion of shared input to motor neurons during fatigue. This may be interpreted as a strategy used by the CNS to counteract the occurrence of fatigue and the concurrent decrease of generated force.
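The paper's coherence-based estimator is not reproduced above; a simpler correlation-based sketch illustrates the underlying principle that a larger proportion of common input makes spike trains more correlated (all rates and probabilities here are hypothetical):

```python
import random

# Sketch (not the paper's method): two spike trains driven by a shared
# ("common") input with probability p_common, otherwise by independent
# noise. Correlation of the binary trains rises with p_common.
def correlated_trains(p_common, n=20000, rate=0.2, seed=1):
    rng = random.Random(seed)
    a, b = [], []
    for _ in range(n):
        shared = rng.random() < rate
        for train in (a, b):
            if rng.random() < p_common:
                train.append(1 if shared else 0)                # common drive
            else:
                train.append(1 if rng.random() < rate else 0)   # independent
    return a, b

def correlation(a, b):
    """Pearson correlation of two equal-length binary sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    va = sum((x - ma) ** 2 for x in a) / n
    vb = sum((y - mb) ** 2 for y in b) / n
    return cov / (va * vb) ** 0.5
```

Coherence analysis extends this idea to the frequency domain, but the monotone relation between common-input proportion and inter-train correlation is the same.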
The 2014 update to the National Seismic Hazard Model in California
Powers, Peter; Field, Edward H.
2015-01-01
The 2014 update to the U. S. Geological Survey National Seismic Hazard Model in California introduces a new earthquake rate model and new ground motion models (GMMs) that give rise to numerous changes to seismic hazard throughout the state. The updated earthquake rate model is the third version of the Uniform California Earthquake Rupture Forecast (UCERF3), wherein the rates of all ruptures are determined via a self-consistent inverse methodology. This approach accommodates multifault ruptures and reduces the overprediction of moderate earthquake rates exhibited by the previous model (UCERF2). UCERF3 introduces new faults, changes to slip or moment rates on existing faults, and adaptively smoothed gridded seismicity source models, all of which contribute to significant changes in hazard. New GMMs increase ground motion near large strike-slip faults and reduce hazard over dip-slip faults. The addition of very large strike-slip ruptures and decreased reverse fault rupture rates in UCERF3 further enhances these effects.
Computer models used to support cleanup decision-making at hazardous and radioactive waste sites
Moskowitz, P.D.; Pardi, R.; DePhillips, M.P.; Meinhold, A.F.
1992-07-01
Massive efforts are underway to clean up hazardous and radioactive waste sites located throughout the US. To help determine cleanup priorities, computer models are being used to characterize the source, transport, fate, and effects of hazardous chemicals and radioactive materials found at these sites. Although the US Environmental Protection Agency (EPA), the US Department of Energy (DOE), and the US Nuclear Regulatory Commission (NRC) have provided preliminary guidance to promote the use of computer models for remediation purposes, no agency has produced directed guidance on the models that must be used in these efforts. To identify which models are actually being used to support decision-making at hazardous and radioactive waste sites, a project jointly funded by EPA, DOE, and NRC was initiated. The purpose of this project was to: (1) identify models being used for hazardous and radioactive waste site assessment purposes; and (2) describe and classify these models. This report presents the results of this study.
NASA Astrophysics Data System (ADS)
Wang, Jun-Song; Wang, Mei-Li; Li, Xiao-Li; Ernst, Niebur
2015-03-01
Epilepsy is believed to be caused by a lack of balance between excitation and inhibition in the brain. A promising strategy for the control of the disease is closed-loop brain stimulation. How to determine the stimulation control parameters for effective and safe treatment protocols remains, however, an unsolved question. To model the complex dynamics of the biological brain, we use a neural population model (NPM). We propose that a proportional-derivative (PD) type closed-loop control can successfully suppress epileptiform activities. First, we determine the stability of the NPM from its root loci, which reveals that the dynamical mechanism underlying epilepsy in the NPM is the loss of homeostatic control caused by the lack of balance between excitation and inhibition. Then, we design a PD-type closed-loop controller to stabilize the unstable NPM so that the homeostatic equilibria are maintained; we show that epileptiform activities are successfully suppressed. A graphical approach is employed to determine the stabilizing region of the PD controller in the parameter space, providing a theoretical guideline for the selection of the PD control parameters. Furthermore, we establish the relationship between the control parameters and the model parameters in the form of stabilizing regions to help understand the mechanism of suppressing epileptiform activities in the NPM. Simulations show that the PD-type closed-loop control strategy can effectively suppress epileptiform activities in the NPM. Project supported by the National Natural Science Foundation of China (Grant Nos. 61473208, 61025019, and 91132722), ONR MURI N000141010278, and NIH grant R01EY016281.
NASA Astrophysics Data System (ADS)
Tierz, Pablo; Odbert, Henry; Phillips, Jeremy; Woodhouse, Mark; Sandri, Laura; Selva, Jacopo; Marzocchi, Warner
2016-04-01
Quantification of volcanic hazards is a challenging task for modern volcanology. Assessing the large uncertainties involved in the hazard analysis requires the combination of volcanological data, physical models, and statistical models. This is a complex procedure even when taking into account only one type of volcanic hazard. However, volcanic systems are known to be multi-hazard environments where several hazardous phenomena (tephra fallout, Pyroclastic Density Currents -PDCs-, lahars, etc.) may occur either simultaneously or sequentially. Bayesian Belief Networks (BBNs) are a flexible and powerful way of modelling uncertainty. They are statistical models that can merge information coming from data, physical models, other statistical models, or expert knowledge into a unified probabilistic assessment. Therefore, they can be applied to model the interaction between different volcanic hazards in an efficient manner. In this work, we design and preliminarily parametrize a BBN with the aim of forecasting the occurrence and volume of rain-triggered lahars when considering: (1) input of pyroclastic material, in the form of tephra fallout and PDCs, over the catchments around the volcano; (2) remobilization of this material by antecedent lahar events. Input of fresh pyroclastic material can be modelled through a combination of physical models (e.g. advection-diffusion models for tephra fallout such as HAZMAP and shallow-layer continuum models for PDCs such as Titan2D) and uncertainty quantification techniques, while the remobilization efficiency can be constrained from datasets of lahar observations at different volcanoes. The applications of this kind of probabilistic multi-hazard approach can range from real-time forecasting of lahar activity to calibration of physical or statistical models (e.g. emulators) for long-term volcanic hazard assessment.
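The core BBN operation is marginalizing a child node over its parents' conditional probability table. A minimal hand-rolled sketch with two parent nodes follows; every probability below is hypothetical and is not the paper's parametrization:

```python
# Minimal Bayesian-network sketch (hypothetical probabilities): lahar
# occurrence conditioned on fresh pyroclastic input and heavy rainfall.
P_TEPHRA = 0.3                       # P(fresh pyroclastic input)
P_RAIN = 0.4                         # P(heavy rainfall)
P_LAHAR = {                          # P(lahar | tephra, rain)
    (True, True): 0.7, (True, False): 0.1,
    (False, True): 0.15, (False, False): 0.01,
}

def marginal_lahar():
    """Marginalize the lahar node over both parent states."""
    total = 0.0
    for tephra in (True, False):
        for rain in (True, False):
            p_parents = ((P_TEPHRA if tephra else 1 - P_TEPHRA)
                         * (P_RAIN if rain else 1 - P_RAIN))
            total += p_parents * P_LAHAR[(tephra, rain)]
    return total
```

In the paper's setting, the conditional table for pyroclastic input would itself be populated from physical-model outputs (e.g. HAZMAP, Titan2D) rather than fixed numbers.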
Debris flow hazard modelling on medium scale: Valtellina di Tirano, Italy
NASA Astrophysics Data System (ADS)
Blahut, J.; Horton, P.; Sterlacchini, S.; Jaboyedoff, M.
2010-11-01
Debris flow hazard modelling at medium (regional) scale has been the subject of various studies in recent years. In this study, hazard zonation was carried out, incorporating information about debris flow initiation probability (spatial and temporal) and the delimitation of the potential runout areas. Debris flow hazard zonation was carried out in the area of the Consortium of Mountain Municipalities of Valtellina di Tirano (Central Alps, Italy). The complexity of the phenomenon, the scale of the study, the variability of local conditioning factors, and the lack of data limited the use of process-based models for the runout zone delimitation. Firstly, a map of hazard initiation probabilities was prepared for the study area, based on the available susceptibility zoning information and the analysis of two sets of aerial photographs for the temporal probability estimation. Afterwards, the hazard initiation map was used as one of the inputs for an empirical GIS-based model (Flow-R), developed at the University of Lausanne (Switzerland). An estimation of the debris flow magnitude was neglected, as the main aim of the analysis was to prepare a debris flow hazard map at medium scale. A digital elevation model, with a 10 m resolution, was used together with land use, geology, and debris flow hazard initiation maps as inputs of the Flow-R model to restrict potential areas within each hazard initiation probability class to locations where debris flows are most likely to initiate. Afterwards, runout areas were calculated using multiple flow direction and energy-based algorithms. Maximum probable runout zones were calibrated using documented past events and aerial photographs. Finally, two debris flow hazard maps were prepared. The first simply delimits five hazard zones, while the second incorporates the information about debris flow spreading direction probabilities, showing areas more likely to be affected by future debris flows. Limitations of the modelling arise mainly from
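Flow-R's actual algorithms are not reproduced above; a minimal sketch of one multiple-flow-direction step, splitting the outflow of a DEM cell among its lower neighbours in proportion to the downslope gradient (a common simplification of such algorithms), is:

```python
# Sketch of a multiple-flow-direction (MFD) step on a grid DEM
# (a hypothetical simplification, not Flow-R's implementation).
def mfd_weights(dem, r, c):
    """Return {(row, col): fraction} of flow routed to each lower neighbour
    of cell (r, c); diagonal neighbours use the longer flow distance."""
    z = dem[r][c]
    slopes = {}
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            rr, cc = r + dr, c + dc
            if 0 <= rr < len(dem) and 0 <= cc < len(dem[0]):
                dist = (dr * dr + dc * dc) ** 0.5
                s = (z - dem[rr][cc]) / dist   # downslope gradient
                if s > 0:
                    slopes[(rr, cc)] = s
    total = sum(slopes.values())
    return {cell: s / total for cell, s in slopes.items()} if total else {}
```

Iterating this step from the initiation cells, combined with an energy criterion to stop propagation, yields the spreading probabilities used in the second hazard map.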
A time-dependent probabilistic seismic-hazard model for California
Cramer, C.H.; Petersen, M.D.; Cao, T.; Toppozada, Tousson R.; Reichle, M.
2000-01-01
For the purpose of sensitivity testing and illuminating nonconsensus components of time-dependent models, the California Department of Conservation, Division of Mines and Geology (CDMG) has assembled a time-dependent version of its statewide probabilistic seismic hazard (PSH) model for California. The model incorporates available consensus information from within the earth-science community, except for a few faults or fault segments where consensus information is not available. For these latter faults, published information has been incorporated into the model. As in the 1996 CDMG/U.S. Geological Survey (USGS) model, the time-dependent models incorporate three multisegment ruptures: a 1906, an 1857, and a southern San Andreas earthquake. Sensitivity tests are presented to show the effect on hazard and expected damage estimates of (1) intrinsic (aleatory) sigma, (2) multisegment (cascade) vs. independent segment (no cascade) ruptures, and (3) time-dependence vs. time-independence. Results indicate that (1) differences in hazard and expected damage estimates between time-dependent and independent models increase with decreasing intrinsic sigma, (2) differences in hazard and expected damage estimates between full cascading and not cascading are insensitive to intrinsic sigma, (3) differences in hazard increase with increasing return period (decreasing probability of occurrence), and (4) differences in moment-rate budgets increase with decreasing intrinsic sigma and with the degree of cascading, but are within the expected uncertainty in PSH time-dependent modeling and do not always significantly affect hazard and expected damage estimates.
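Time-dependent PSH replaces the time-independent Poisson assumption with a renewal model whose conditional rupture probability grows with the time elapsed since the last event. A minimal sketch, assuming a lognormal recurrence distribution with hypothetical parameters (the CDMG model's actual distributions and fault parameters are not reproduced here):

```python
import math

# Conditional rupture probability under a lognormal renewal model, a common
# choice in time-dependent PSH (mean_rec here is the median recurrence
# interval in years; all numbers are illustrative).
def lognormal_cdf(t, mean_rec, sigma):
    if t <= 0:
        return 0.0
    return 0.5 * (1 + math.erf(math.log(t / mean_rec) / (sigma * math.sqrt(2))))

def conditional_prob(elapsed, window, mean_rec=150.0, sigma=0.5):
    """P(rupture within `window` years | quiescent for `elapsed` years)."""
    f_t = lognormal_cdf(elapsed, mean_rec, sigma)
    f_tw = lognormal_cdf(elapsed + window, mean_rec, sigma)
    return (f_tw - f_t) / (1.0 - f_t)
```

With a small intrinsic (aleatory) sigma the conditional probability departs strongly from the Poisson value as elapsed time grows, which is why the abstract finds the time-dependent vs. time-independent differences increase as sigma decreases.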
Proportional Reasoning as Essential Numeracy
ERIC Educational Resources Information Center
Dole, Shelley; Hilton, Annette; Hilton, Geoff
2015-01-01
This paper reports an aspect of a large research and development project that aimed to promote middle years school teachers' understanding and awareness of the pervasiveness of proportional reasoning as integral to numeracy. Teacher survey data of proportional reasoning across the curriculum were mapped on to a rich model of numeracy. Results…
Yan, Fang; Xu, Kaili
2017-01-01
Because a biomass gasification station includes various hazard factors, hazard assessment is necessary and significant. In this article, the cloud model (CM) is employed to improve set pair analysis (SPA), and a novel hazard assessment method for a biomass gasification station is proposed based on cloud model-set pair analysis (CM-SPA). In this method, cloud weight is proposed as the weight of an index. In contrast to the index weights of other methods, cloud weight is expressed by cloud descriptors; hence, the randomness and fuzziness of cloud weight make it effective at reflecting the linguistic variables of experts. Then, the cloud connection degree (CCD) is proposed to replace the connection degree (CD), and the calculation algorithm of CCD is worked out. By utilizing the CCD, the hazard assessment results are expressed by normal clouds, which are characterized by cloud descriptors; meanwhile, the hazard grade is confirmed by analyzing the cloud descriptors. After that, two biomass gasification stations undergo hazard assessment via CM-SPA and AHP-based SPA, respectively. The comparison of assessment results illustrates that CM-SPA is suitable and effective for the hazard assessment of a biomass gasification station and makes the assessment results more reasonable and scientific.
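The cloud descriptors mentioned above are the three parameters (Ex, En, He) of a normal cloud: expectation, entropy, and hyper-entropy. The standard forward normal cloud generator turns these descriptors into "cloud drops"; the descriptor values below are illustrative, not the paper's elicited weights:

```python
import math
import random

# Forward normal cloud generator (standard algorithm from cloud model
# theory). Ex = expectation, En = entropy, He = hyper-entropy.
def normal_cloud(ex, en, he, n, seed=0):
    """Return n cloud drops as (x, membership) pairs."""
    rng = random.Random(seed)
    drops = []
    for _ in range(n):
        en_prime = rng.gauss(en, he)         # second-order randomness
        x = rng.gauss(ex, abs(en_prime))     # drop position
        mu = math.exp(-(x - ex) ** 2 / (2 * en_prime ** 2)) if en_prime else 1.0
        drops.append((x, mu))
    return drops
```

The hyper-entropy He makes the membership curve "fuzzy": with He = 0 every drop lies on a single Gaussian membership curve, while larger He thickens the cloud, which is how the method captures the vagueness of expert judgements.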
A Remote Sensing Based Approach For Modeling and Assessing Glacier Hazards
NASA Astrophysics Data System (ADS)
Huggel, C.; Kääb, A.; Salzmann, N.; Haeberli, W.; Paul, F.
Glacier-related hazards such as ice avalanches and glacier lake outbursts can pose a significant threat to population and installations in high mountain regions. They are well documented in the Swiss Alps, and the high data density is used to build up systematic knowledge of glacier hazard locations and potentials. Experience from long research activities thereby forms an important basis for ongoing hazard monitoring and assessment. However, in the context of environmental changes in general, and the highly dynamic physical environment of glaciers in particular, historical experience may increasingly lose its significance with respect to the impact zones of hazardous processes. On the other hand, in large and remote high mountains such as the Himalayas, exact information on the location and potential of glacier hazards is often missing. Therefore, it is crucial to develop hazard monitoring and assessment concepts including area-wide applications. Remote sensing techniques offer a powerful tool to narrow current information gaps. The present contribution proposes an approach structured in (1) detection, (2) evaluation, and (3) modeling of glacier hazards. Remote sensing data is used as the main input to (1). Algorithms taking advantage of multispectral, high-resolution data are applied for detecting glaciers and glacier lakes. Digital terrain modeling, and classification and fusion of panchromatic and multispectral satellite imagery, are performed in (2) to evaluate the hazard potential of possible hazard sources detected in (1). The locations found in (1) and (2) are used as input to (3). The models developed in (3) simulate the processes of lake outbursts and ice avalanches based on hydrological flow modeling and empirical values for average trajectory slopes. A probability-related function allows the models to indicate areas with lower and higher risk of being affected by catastrophic events. Application of the models to recent ice avalanches and lake outbursts show
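The "empirical values for average trajectory slopes" mentioned above refer to the classic H/L approach: a flow descending a vertical drop H can reach a horizontal distance L = H / tan(α), where α is an empirical average trajectory slope. A minimal sketch (the slope values used below are illustrative, not the calibrated values of the study):

```python
import math

# Empirical average-slope (H/L) runout sketch for glacier hazard modeling.
# alpha is an empirically calibrated average trajectory slope; smaller
# alpha means a more mobile flow and a longer runout.
def max_runout(drop_height_m, avg_slope_deg):
    """Maximum horizontal runout distance from the starting zone."""
    return drop_height_m / math.tan(math.radians(avg_slope_deg))
```

Combined with hydrological flow routing over a DEM, this single threshold already delimits a first-order impact zone for ice avalanches and lake outburst floods.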
Stirling, M.; Petersen, M.
2006-01-01
We compare the historical record of earthquake hazard experienced at 78 towns and cities (sites) distributed across New Zealand and the continental United States with the hazard estimated from the national probabilistic seismic-hazard (PSH) models for the two countries. The two PSH models are constructed with similar methodologies and data. Our comparisons show a tendency for the PSH models to slightly exceed the historical hazard in New Zealand and westernmost continental United States interplate regions, but show lower hazard than that of the historical record in the continental United States intraplate region. Factors such as non-Poissonian behavior, parameterization of active fault data in the PSH calculations, and uncertainties in estimation of ground-motion levels from historical felt intensity data for the interplate regions may have led to the higher-than-historical levels of hazard at the interplate sites. In contrast, the less-than-historical hazard for the remaining continental United States (intraplate) sites may be largely due to site conditions not having been considered at the intraplate sites, and uncertainties in correlating ground-motion levels to historical felt intensities. The study also highlights the importance of evaluating PSH models at more than one region, because the conclusions reached on the basis of a solely interplate or intraplate study would be very different.
[Hazard evaluation modeling of particulate matters emitted by coal-fired boilers and case analysis].
Shi, Yan-Ting; Du, Qian; Gao, Jian-Min; Bian, Xin; Wang, Zhi-Pu; Dong, He-Ming; Han, Qiang; Cao, Yang
2014-02-01
In order to evaluate the hazard of PM2.5 emitted by various boilers, particulate matters with sizes below 2.5 μm were segmented based on their formation mechanisms and their hazard level to human beings and the environment. Taking into account mass concentration, number concentration, the enrichment factor of Hg, and the Hg content of different coal ashes, a comprehensive model for evaluating the hazard of PM2.5 emitted by coal-fired boilers was established. Finally, using field experimental data from previous literature, a case analysis of the evaluation model was conducted, and the concept of a hazard reduction coefficient was proposed, which can be used to evaluate the performance of dust removers.
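A comprehensive hazard index of this kind can be illustrated with a toy sketch. The metric names, normalization, and weights below are hypothetical placeholders, not the paper's actual model.

```python
def hazard_index(normalized_metrics: dict, weights: dict) -> float:
    """Toy linear hazard index for boiler-emitted PM2.5.

    `normalized_metrics` holds each sub-metric divided by a reference
    boiler's value; `weights` must sum to 1. Both the metric set and the
    weights are hypothetical, for illustration only.
    """
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[k] * normalized_metrics[k] for k in weights)

def hazard_reduction_coefficient(index_before: float, index_after: float) -> float:
    """Fraction of the hazard index removed by a dust remover (1 = all removed)."""
    return 1.0 - index_after / index_before

metrics = {"mass_conc": 1.2, "number_conc": 0.9, "hg_enrichment": 1.5, "hg_content": 1.1}
weights = {"mass_conc": 0.4, "number_conc": 0.3, "hg_enrichment": 0.2, "hg_content": 0.1}
print(hazard_index(metrics, weights))
```

Comparing the index upstream and downstream of a dust remover gives the hazard reduction coefficient the abstract proposes.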
Probabilistic seismic hazard study based on active fault and finite element geodynamic models
NASA Astrophysics Data System (ADS)
Kastelic, Vanja; Carafa, Michele M. C.; Visini, Francesco
2016-04-01
We present a probabilistic seismic hazard analysis (PSHA) that is based exclusively on active faults and geodynamic finite element input models; seismic catalogues were used only in a posterior comparison. We applied the developed model in the External Dinarides, a slowly deforming thrust-and-fold belt at the contact between Adria and Eurasia. Our method consists of establishing two earthquake rupture forecast models: (i) a geological active fault input (GEO) model and (ii) a finite element (FEM) model. The GEO model is based on an active fault database that provides information on fault location and geometric and kinematic parameters, together with slip-rate estimates. By default, this model assumes all deformation is released along the active faults. The FEM model is based on a numerical geodynamic model developed for the study region; here, deformation is released not only along the active faults but also in the volumetric continuum elements. From both models we calculated the corresponding activity rates, earthquake rates, and expected peak ground accelerations. We investigated both source model and earthquake model uncertainties by varying the main active fault and earthquake rate calculation parameters and constructing corresponding branches of the seismic hazard logic tree. Hazard maps and UHS curves have been produced for horizontal ground motion on bedrock conditions (VS30 ≥ 800 m/s), thereby not considering local site amplification effects. The hazard was computed over a 0.2°-spaced grid considering 648 branches of the logic tree at the mean 10% probability of exceedance in 50 years hazard level, while the 5th and 95th percentiles were also computed to investigate the model limits. We conducted a sensitivity analysis to determine which input parameters influence the final hazard results, and to what extent. The results of this comparison show that the deformation model and
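The 10%-in-50-years hazard level quoted above maps onto a mean return period through the Poisson assumption standard in PSHA; a minimal sketch:

```python
import math

def return_period(prob_exceedance: float, exposure_years: float) -> float:
    """Mean return period implied by a Poisson exceedance probability.

    Under the Poisson assumption, P = 1 - exp(-t / T), so T = -t / ln(1 - P).
    """
    return -exposure_years / math.log(1.0 - prob_exceedance)

# The 10%-in-50-years hazard level corresponds to the familiar
# ~475-year mean return period used in building codes.
print(round(return_period(0.10, 50.0)))  # -> 475
```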
NASA Astrophysics Data System (ADS)
Loughlin, Susan
2013-04-01
GVM is a growing international collaboration that aims to create a sustainable, accessible information platform on volcanic hazard and risk. GVM is a network that aims to co-ordinate and integrate the efforts of the international volcanology community. Major international initiatives and partners such as the Smithsonian Institution's Global Volcanism Program, the State University of New York at Buffalo's VHub, the Earth Observatory of Singapore's WOVOdat and many others underpin GVM. Activities currently include: design and development of databases of volcano data, volcanic hazards, vulnerability and exposure, with internationally agreed metadata standards; establishment of methodologies for analysis of the data (e.g. hazard and exposure indices) to inform risk assessment; and development of complementary hazard models and relevant hazard- and risk-assessment tools. GVM acts by establishing task forces to produce explicit deliverables in finite periods of time. GVM has a task force to deliver a global assessment of volcanic risk for the UN ISDR, a task force for indices, and a task force for volcano deformation from satellite observations. GVM is organising a Volcano Best Practices workshop in 2013. A recent product of GVM is a global database of large-magnitude explosive eruptions. Work is ongoing to develop databases on debris avalanches, lava dome hazards and ash hazard. GVM aims to develop the capability to anticipate future volcanism and its consequences.
Snakes as hazards: modelling risk by chasing chimpanzees.
McGrew, William C
2015-04-01
Snakes are presumed to be hazards to primates, including humans, by the snake detection hypothesis (Isbell in J Hum Evol 51:1-35, 2006; Isbell, The fruit, the tree, and the serpent. Why we see so well, 2009). Quantitative, systematic data to test this idea are lacking for the behavioural ecology of living great apes and human foragers. An alternative proxy is snakes encountered by primatologists seeking, tracking, and observing wild chimpanzees. We present 4 years of such data from Mt. Assirik, Senegal. We encountered 14 species of snakes a total of 142 times. Almost two-thirds of encounters were with venomous snakes. Encounters occurred most often in forest and least often in grassland, and more often in the dry season. The hypothesis seems to be supported, if frequency of encounter reflects selective risk of morbidity or mortality.
Teamwork tools and activities within the hazard component of the Global Earthquake Model
NASA Astrophysics Data System (ADS)
Pagani, M.; Weatherill, G.; Monelli, D.; Danciu, L.
2013-05-01
The Global Earthquake Model (GEM) is a public-private partnership aimed at supporting and fostering a global community of scientists and engineers working in the fields of seismic hazard and risk assessment. In the hazard sector, in particular, GEM recognizes the importance of local ownership and leadership in the creation of seismic hazard models. For this reason, over the last few years, GEM has been promoting different activities in the context of seismic hazard analysis, ranging, for example, from regional projects targeted at the creation of updated seismic hazard studies to the development of a new open-source seismic hazard and risk calculation software called OpenQuake-engine (http://globalquakemodel.org). In this communication we'll provide a tour of the various activities completed, such as the new ISC-GEM Global Instrumental Catalogue, and of ongoing initiatives such as the development of a suite of tools for building PSHA input models. Discussion, comments and criticism by the colleagues in the audience will be highly appreciated.
Ground motion models used in the 2014 U.S. National Seismic Hazard Maps
Rezaeian, Sanaz; Petersen, Mark D.; Moschetti, Morgan P.
2015-01-01
The National Seismic Hazard Maps (NSHMs) are an important component of seismic design regulations in the United States. This paper compares hazard using the new suite of ground motion models (GMMs) relative to hazard using the suite of GMMs applied in the previous version of the maps. The new source characterization models are used for both cases. A previous paper (Rezaeian et al. 2014) discussed the five NGA-West2 GMMs used for shallow crustal earthquakes in the Western United States (WUS), which are also summarized here. Our focus in this paper is on GMMs for earthquakes in stable continental regions in the Central and Eastern United States (CEUS), as well as subduction interface and deep intraslab earthquakes. We consider building code hazard levels for peak ground acceleration (PGA), 0.2-s, and 1.0-s spectral accelerations (SAs) on uniform firm-rock site conditions. The GMM modifications in the updated version of the maps created changes in hazard within 5% to 20% in WUS; decreases within 5% to 20% in CEUS; changes within 5% to 15% for subduction interface earthquakes; and changes involving decreases of up to 50% and increases of up to 30% for deep intraslab earthquakes for most U.S. sites. These modifications were combined with changes resulting from modifications in the source characterization models to obtain the new hazard maps.
Petersen, Mark D.; Frankel, Arthur D.; Harmsen, Stephen C.; Mueller, Charles S.; Boyd, Oliver S.; Luco, Nicolas; Wheeler, Russell L.; Rukstales, Kenneth S.; Haller, Kathleen M.
2012-01-01
In this paper, we describe the scientific basis for the source and ground-motion models applied in the 2008 National Seismic Hazard Maps, the development of new products that are used for building design and risk analyses, relationships between the hazard maps and design maps used in building codes, and potential future improvements to the hazard maps.
Zheng, Xueying; Qin, Guoyou; Tu, Dongsheng
2017-02-19
Motivated by the analysis of quality of life data from a clinical trial on early breast cancer, we propose in this paper a generalized partially linear mean-covariance regression model for longitudinal proportional data, which are bounded in a closed interval. Cholesky decomposition of the covariance matrix for within-subject responses and generalized estimating equations are used to estimate the unknown parameters and the nonlinear function in the model. Simulation studies are performed to evaluate the performance of the proposed estimation procedures. Our new model is also applied to analyze the data from the cancer clinical trial that motivated this research. In comparison with available models in the literature, the proposed model does not require specific parametric assumptions on the density function of the longitudinal responses or the probability function of the boundary values, and it can capture dynamic effects of time and other variables of interest on both the mean and the covariance of the correlated proportional responses. Copyright © 2017 John Wiley & Sons, Ltd.
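The Cholesky-based covariance parameterization can be illustrated with the standard modified Cholesky decomposition, in which a unit lower-triangular matrix of autoregressive coefficients diagonalizes the within-subject covariance. This is a generic NumPy sketch, not the authors' estimation code, and the AR(1)-style covariance is an invented example.

```python
import numpy as np

def modified_cholesky(cov: np.ndarray):
    """Modified Cholesky decomposition: T @ cov @ T.T = D (diagonal).

    T is unit lower triangular; its sub-diagonal entries are the negatives
    of the regression coefficients of each response on its predecessors,
    and D holds the innovation variances. This is the standard device for
    an unconstrained parameterization of a within-subject covariance.
    """
    m = cov.shape[0]
    T = np.eye(m)
    for j in range(1, m):
        # Regress response j on responses 0..j-1.
        phi = np.linalg.solve(cov[:j, :j], cov[:j, j])
        T[j, :j] = -phi
    D = np.diag(np.diag(T @ cov @ T.T))
    return T, D

# AR(1)-like covariance as a worked example (illustrative values).
rho, m = 0.5, 4
cov = np.array([[rho ** abs(i - j) for j in range(m)] for i in range(m)])
T, D = modified_cholesky(cov)
assert np.allclose(T @ cov @ T.T, D, atol=1e-10)
```

Because T and D are unconstrained (apart from D's positivity), both can be modeled as functions of covariates, which is what lets the approach capture covariance dynamics.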
LNG fires: a review of experimental results, models and hazard prediction challenges.
Raj, Phani K
2007-02-20
A number of experimental investigations of LNG fires (of sizes 35 m diameter and smaller) were undertaken worldwide during the 1970s and 1980s to study their physical and radiative characteristics. This paper reviews the published data from several of these tests, including the largest test to date, the 35 m Montoir tests. Also reviewed in this paper is the state of the art in modeling LNG pool and vapor fires, including thermal radiation hazard modeling. The review is limited to the integral and semi-empirical models (solid flame and point source); CFD models are not reviewed. Several aspects of modeling LNG fires are reviewed, including the physical characteristics, such as the (visible) fire size and shape, tilt and drag in windy conditions, smoke production, and radiant thermal output, and the consideration of experimental data in the models. Comparisons of model results with experimental data are indicated and current deficiencies in modeling are discussed. The requirements in the US and European regulations related to LNG fire hazard assessment are reviewed, in brief, in the light of model inaccuracies, criteria for hazards to people and structures, and the effects of mitigating circumstances. The paper identifies: (i) critical parameters for which there exist no data, (ii) uncertainties and unknowns in modeling and (iii) deficiencies and gaps in current regulatory recipes for predicting hazards.
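Of the semi-empirical models reviewed, the point-source model is the simplest; a minimal sketch follows. The parameter values are round illustrative numbers (not recommended design values), and atmospheric transmissivity is taken as 1 for brevity.

```python
import math

def point_source_flux(burn_rate_kg_m2s: float, pool_area_m2: float,
                      heat_of_combustion_j_kg: float, radiant_fraction: float,
                      distance_m: float) -> float:
    """Incident thermal flux (W/m^2) from the point-source model.

    q(R) = chi_r * m'' * A * dHc / (4 * pi * R^2), i.e. a fraction chi_r
    of the total heat release spread over a sphere of radius R.
    Transmissivity is assumed to be 1 here.
    """
    total_heat_release_w = burn_rate_kg_m2s * pool_area_m2 * heat_of_combustion_j_kg
    return radiant_fraction * total_heat_release_w / (4.0 * math.pi * distance_m ** 2)

# Illustrative inputs: 0.1 kg/m^2/s burn rate, 1000 m^2 pool,
# 50 MJ/kg heat of combustion, 20% radiated, receptor at 200 m.
print(point_source_flux(0.1, 1000.0, 50.0e6, 0.2, 200.0))
```

The point-source model is known to be inaccurate in the near field, which is one reason the solid-flame model is preferred for hazard distances close to the pool.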
The influence of hazard models on GIS-based regional risk assessments and mitigation policies
Bernknopf, R.L.; Rabinovici, S.J.M.; Wood, N.J.; Dinitz, L.B.
2006-01-01
Geographic information systems (GIS) are important tools for understanding and communicating the spatial distribution of risks associated with natural hazards in regional economies. We present a GIS-based decision support system (DSS) for assessing community vulnerability to natural hazards and evaluating potential mitigation policy outcomes. The Land Use Portfolio Modeler (LUPM) integrates earth science and socioeconomic information to predict the economic impacts of loss-reduction strategies. However, the potential use of such systems in decision making may be limited when multiple but conflicting interpretations of the hazard are available. To explore this problem, we conduct a policy comparison using the LUPM to test the sensitivity of three available assessments of earthquake-induced lateral-spread ground failure susceptibility in a coastal California community. We find that the uncertainty regarding the interpretation of the science inputs can influence the development and implementation of natural hazard management policies. Copyright ?? 2006 Inderscience Enterprises Ltd.
Applying the Land Use Portfolio Model with Hazus to analyse risk from natural hazard events
Dinitz, Laura B.; Taketa, Richard A.
2013-01-01
This paper describes and demonstrates the integration of two geospatial decision-support systems for natural-hazard risk assessment and management. Hazus is a risk-assessment tool developed by the Federal Emergency Management Agency to identify risks and estimate the severity of risk from natural hazards. The Land Use Portfolio Model (LUPM) is a risk-management tool developed by the U.S. Geological Survey to evaluate plans or actions intended to reduce risk from natural hazards. We analysed three mitigation policies for one earthquake scenario in the San Francisco Bay area to demonstrate the added value of using Hazus and the LUPM together. The demonstration showed that Hazus loss estimates can be input to the LUPM to obtain estimates of losses avoided through mitigation, rates of return on mitigation investment, and measures of uncertainty. Together, they offer a more comprehensive approach to help with decisions for reducing risk from natural hazards.
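The losses-avoided and rate-of-return outputs described above can be illustrated in a deliberately stripped-down form. The figures are invented, and the real LUPM additionally propagates uncertainty across a portfolio of locations.

```python
def mitigation_outcomes(loss_baseline: float, loss_mitigated: float,
                        mitigation_cost: float):
    """Losses avoided and a simple rate of return on mitigation investment.

    All figures are expected losses in the same monetary units. This is a
    simplified reading of the LUPM outputs, not the model itself.
    """
    losses_avoided = loss_baseline - loss_mitigated
    rate_of_return = losses_avoided / mitigation_cost - 1.0
    return losses_avoided, rate_of_return

# Invented numbers (millions of dollars): mitigation costing 25 that cuts
# expected scenario losses from 120 to 80.
avoided, ror = mitigation_outcomes(120.0, 80.0, 25.0)
print(avoided, ror)
```

Feeding Hazus scenario loss estimates in as `loss_baseline` and `loss_mitigated` is, in spirit, the integration the paper demonstrates.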
Development and Analysis of a Hurricane Hazard Model for Disaster Risk Assessment in Central America
NASA Astrophysics Data System (ADS)
Pita, G. L.; Gunasekera, R.; Ishizawa, O. A.
2014-12-01
Hurricane and tropical storm activity in Central America has, over the past decades, consistently caused thousands of casualties, significant population displacement, and substantial property and infrastructure losses. As a component for estimating future potential losses, we present a new regional probabilistic hurricane hazard model for Central America, for which very few openly available hurricane hazard models currently exist. The resulting hazard model will be used in conjunction with exposure and vulnerability components as part of a World Bank project to create country disaster risk profiles that will help improve risk estimation and provide decision makers with better tools to quantify disaster risk. This paper describes the hazard model methodology, which involves the development of a wind field model that simulates gust speeds at terrain height at a fine resolution. The HURDAT dataset has been used in this study to create synthetic events that assess average hurricane landfall angles and their variability at each location. The hazard model then estimates the average track angle at multiple geographical locations in order to provide a realistic range of possible hurricane paths to be used for risk analyses in all the Central American countries. This probabilistic hurricane hazard model is also useful for relating synthetic wind estimates to loss and damage data to develop and calibrate existing empirical building vulnerability curves. To assess accuracy and applicability, modeled results are evaluated against historical events, their tracks and wind fields. Deeper analyses of the results are also presented, with special reference to Guatemala. The findings, interpretations, and conclusions expressed in this paper are entirely those of the authors. They do not necessarily represent the views of the International Bank for Reconstruction and Development/World Bank and its affiliated organizations, or those of the
Modelling tropical cyclone hazards under climate change scenario using geospatial techniques
NASA Astrophysics Data System (ADS)
Hoque, M. A.; Phinn, S.; Roelfsema, C.; Childs, I.
2016-11-01
Tropical cyclones are a common and devastating natural disaster in many coastal areas of the world. As the intensity and frequency of cyclones will increase under the most likely future climate change scenarios, appropriate approaches at local scales (1-5 km) are essential for producing sufficiently detailed hazard models. These models are used to develop mitigation plans and strategies for reducing the impacts of cyclones. This study developed and tested a hazard modelling approach for cyclone impacts in Sarankhola upazila, a 151 km2 local government area in coastal Bangladesh. The study integrated remote sensing, spatial analysis and field data to model cyclone-generated hazards under a climate change scenario at local scales covering < 1000 km2. A storm surge model integrating historical cyclone data and a Digital Elevation Model (DEM) was used to generate cyclone hazard maps for different return periods. Frequency analysis was carried out on historical cyclone data (1960-2015) to calculate the storm surge heights of the 5-, 10-, 20-, 50- and 100-year cyclone return periods. A local sea-level-rise scenario of 0.34 m for the year 2050 was simulated for the 20- and 50-year return periods. Our results showed that cyclone-affected areas increased with increasing return period: around 63% of the study area was located in the moderate to very high hazard zones for the 50-year return period, rising to 70% for the 100-year return period. The climate change scenarios increased the cyclone impact area by 6-10% for every return period. Our findings indicate this approach has the potential to model cyclone hazards for developing mitigation plans and strategies to reduce the future impacts of cyclones.
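The return-period frequency analysis can be sketched with a Gumbel fit to annual maxima, a common choice for extreme surge heights. The surge sample below is synthetic; the study's actual fit uses the 1960-2015 cyclone records and its own choice of distribution.

```python
import math
import statistics

def gumbel_return_level(annual_maxima, return_period_years: float) -> float:
    """T-year return level from a method-of-moments Gumbel fit.

    Fit: beta = sqrt(6) * s / pi, mu = mean - 0.5772 * beta;
    the T-year level is mu - beta * ln(-ln(1 - 1/T)).
    """
    mean = statistics.fmean(annual_maxima)
    beta = math.sqrt(6) * statistics.stdev(annual_maxima) / math.pi
    mu = mean - 0.5772 * beta
    p_non_exceed = 1.0 - 1.0 / return_period_years
    return mu - beta * math.log(-math.log(p_non_exceed))

surge_m = [2.1, 1.8, 2.6, 3.0, 2.2, 1.9, 2.8, 2.4, 3.3, 2.0]  # synthetic sample
for T in (5, 10, 20, 50, 100):
    print(T, round(gumbel_return_level(surge_m, T), 2))
```

Each return level, added to a sea-level-rise offset (0.34 m in the study's 2050 scenario) and intersected with the DEM, delineates the hazard zone for that return period.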
Three multimedia models used at hazardous and radioactive waste sites
Moskowitz, P.D.; Pardi, R.; Fthenakis, V.M.; Holtzman, S.; Sun, L.C.; Rambaugh, J.O.; Potter, S.
1996-02-01
Multimedia models are used commonly in the initial phases of the remediation process, where technical interest is focused on determining the relative importance of various exposure pathways. This report provides an approach for evaluating and critically reviewing the capabilities of multimedia models. This study focused on three specific models: MEPAS Version 3.0, MMSOILS Version 2.2, and PRESTO-EPA-CPG Version 2.0. These models evaluate the transport and fate of contaminants from source to receptor through more than a single pathway. The presence of radioactive and mixed wastes at a site poses special problems. Hence, in this report, restrictions associated with the selection and application of multimedia models for sites contaminated with radioactive and mixed wastes are highlighted. This report begins with a brief introduction to the concept of multimedia modeling, followed by an overview of the three models. The remaining chapters present more technical discussions of the issues associated with each compartment and their direct application to the specific models. In these analyses, the following components are discussed: source term; air transport; ground water transport; overland flow, runoff, and surface water transport; food chain modeling; exposure assessment; dosimetry/risk assessment; uncertainty; and default parameters. The report concludes with a description of evolving updates to the models; these descriptions were provided by the model developers.
The Framework of a Coastal Hazards Model - A Tool for Predicting the Impact of Severe Storms
Barnard, Patrick L.; O'Reilly, Bill; van Ormondt, Maarten; Elias, Edwin; Ruggiero, Peter; Erikson, Li H.; Hapke, Cheryl; Collins, Brian D.; Guza, Robert T.; Adams, Peter N.; Thomas, Julie
2009-01-01
The U.S. Geological Survey (USGS) Multi-Hazards Demonstration Project in Southern California (Jones and others, 2007) is a five-year project (FY2007-FY2011) integrating multiple USGS research activities with the needs of external partners, such as emergency managers and land-use planners, to produce products and information that can be used to create more disaster-resilient communities. The hazards being evaluated include earthquakes, landslides, floods, tsunamis, wildfires, and coastal hazards. For the Coastal Hazards Task of the Multi-Hazards Demonstration Project in Southern California, the USGS is leading the development of a modeling system for forecasting the impact of winter storms threatening the entire Southern California shoreline from Pt. Conception to the Mexican border. The modeling system, run in real-time or with prescribed scenarios, will incorporate atmospheric information (that is, wind and pressure fields) with a suite of state-of-the-art physical process models (that is, tide, surge, and wave) to enable detailed prediction of currents, wave height, wave runup, and total water levels. Additional research-grade predictions of coastal flooding, inundation, erosion, and cliff failure will also be performed. Initial model testing, performance evaluation, and product development will be focused on a severe winter-storm scenario developed in collaboration with the Winter Storm Working Group of the USGS Multi-Hazards Demonstration Project in Southern California. Additional offline model runs and products will include coastal-hazard hindcasts of selected historical winter storms, as well as additional severe winter-storm simulations based on statistical analyses of historical wave and water-level data. The coastal-hazards model design will also be appropriate for simulating the impact of storms under various sea level rise and climate-change scenarios. The operational capabilities of this modeling system are designed to provide emergency planners with
Modeling Exposure to Persistent Chemicals in Hazard and Risk Assessment
Cowan-Ellsberry, Christina E.; McLachlan, Michael S.; Arnot, Jon A.; MacLeod, Matthew; McKone, Thomas E.; Wania, Frank
2008-11-01
Fate and exposure modeling has not thus far been explicitly used in the risk profile documents prepared to evaluate significant adverse effect of candidate chemicals for either the Stockholm Convention or the Convention on Long-Range Transboundary Air Pollution. However, we believe models have considerable potential to improve the risk profiles. Fate and exposure models are already used routinely in other similar regulatory applications to inform decisions, and they have been instrumental in building our current understanding of the fate of POP and PBT chemicals in the environment. The goal of this paper is to motivate the use of fate and exposure models in preparing risk profiles in the POP assessment procedure by providing strategies for incorporating and using models. The ways that fate and exposure models can be used to improve and inform the development of risk profiles include: (1) Benchmarking the ratio of exposure and emissions of candidate chemicals to the same ratio for known POPs, thereby opening the possibility of combining this ratio with the relative emissions and relative toxicity to arrive at a measure of relative risk. (2) Directly estimating the exposure of the environment, biota and humans to provide information to complement measurements, or where measurements are not available or are limited. (3) To identify the key processes and chemical and/or environmental parameters that determine the exposure; thereby allowing the effective prioritization of research or measurements to improve the risk profile. (4) Predicting future time trends including how quickly exposure levels in remote areas would respond to reductions in emissions. Currently there is no standardized consensus model for use in the risk profile context. Therefore, to choose the appropriate model the risk profile developer must evaluate how appropriate an existing model is for a specific setting and whether the assumptions and input data are relevant in the context of the application
Petersen, Mark D.; Mueller, Charles S.; Moschetti, Morgan P.; Hoover, Susan M.; Rubinstein, Justin L.; Llenos, Andrea L.; Michael, Andrew J.; Ellsworth, William L.; McGarr, Arthur F.; Holland, Austin A.; Anderson, John G.
2015-01-01
The U.S. Geological Survey National Seismic Hazard Model for the conterminous United States was updated in 2014 to account for new methods, input models, and data necessary for assessing the seismic ground shaking hazard from natural (tectonic) earthquakes. The U.S. Geological Survey National Seismic Hazard Model project uses probabilistic seismic hazard analysis to quantify the rate of exceedance for earthquake ground shaking (ground motion). For the 2014 National Seismic Hazard Model assessment, the seismic hazard from potentially induced earthquakes was intentionally not considered because we had not determined how to properly treat these earthquakes for the seismic hazard analysis. The phrases “potentially induced” and “induced” are used interchangeably in this report, however it is acknowledged that this classification is based on circumstantial evidence and scientific judgment. For the 2014 National Seismic Hazard Model update, the potentially induced earthquakes were removed from the NSHM’s earthquake catalog, and the documentation states that we would consider alternative models for including induced seismicity in a future version of the National Seismic Hazard Model. As part of the process of incorporating induced seismicity into the seismic hazard model, we evaluate the sensitivity of the seismic hazard from induced seismicity to five parts of the hazard model: (1) the earthquake catalog, (2) earthquake rates, (3) earthquake locations, (4) earthquake Mmax (maximum magnitude), and (5) earthquake ground motions. We describe alternative input models for each of the five parts that represent differences in scientific opinions on induced seismicity characteristics. In this report, however, we do not weight these input models to come up with a preferred final model. Instead, we present a sensitivity study showing uniform seismic hazard maps obtained by applying the alternative input models for induced seismicity. The final model will be released after
A Critical Technical Review of Six Hazard Assessment Models
1975-12-01
has discharged. Although the model determines the complete time-history of venting, the important outputs of the model are (1) the total venting time... remains above the water level; and second, that the height of the hole [3] Short, B. E., H. L. Kent, Jr., and B. F. Treat, Engineering Thermodynamics, pp... common in engineering analysis that it was assumed that its listing as an assumption was a misstatement, and the statement was interpreted as meaning
Medical Modeling of Particle Size Effects for CB Inhalation Hazards
2015-09-01
typical city. As has been described, many of the parameters in the model are hard-coded due to limitations in data transfer with SCIPUFF. When fully... describes the resulting medical impact. Many current models assume that only the 1 to 5 micron "respirable" particles capable of reaching the pulmonary... well. Subject terms: inhalation mechanics, FXCODA, DARRT, bioagent, aerosol, particle size, particle deposition, biological agents, ricin, tularemia.
NASA Astrophysics Data System (ADS)
Lu, X.; Gridin, S.; Williams, R. T.; Mayhugh, M. R.; Gektin, A.; Syntfeld-Kazuch, A.; Swiderski, L.; Moszynski, M.
2017-01-01
Relatively recent experiments on the scintillation response of CsI:Tl have found that there are three main decay times of about 730 ns, 3 μs, and 16 μs, i.e., one more principal decay component than had been previously reported; that the pulse shape depends on gamma-ray energy; and that the proportionality curves of each decay component are different, with the energy-dependent light yield of the 16-μs component appearing to be anticorrelated with that of the 0.73-μs component at room temperature. These observations can be explained by the described model of carrier transport and recombination in a particle track. This model takes into account processes of hot and thermalized carrier diffusion, electric-field transport, trapping, nonlinear quenching, and radiative recombination. With one parameter set, the model reproduces multiple observables of CsI:Tl scintillation response, including the pulse shape with rise and three decay components, its energy dependence, the approximate proportionality, and the main trends in proportionality of different decay components. The model offers insights on the spatial and temporal distributions of carriers and their reactions in the track.
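A three-component pulse shape of the kind measured here can be written as a sum of exponential decays. The decay times are those reported in the abstract; the fractional light yields are placeholders, since the measured fractions vary with gamma-ray energy (which is the abstract's central observation).

```python
import math

# (decay time in microseconds, fractional light yield) for each component.
# Decay times are from the abstract; the fractions are placeholders.
COMPONENTS = [(0.73, 0.5), (3.0, 0.3), (16.0, 0.2)]

def pulse(t_us: float) -> float:
    """Normalized scintillation intensity I(t) = sum_i (f_i / tau_i) * exp(-t / tau_i).

    Each component contributes light yield f_i in total, so the integral
    of I(t) over [0, inf) equals sum_i f_i = 1 (the rise time is ignored).
    """
    return sum(f / tau * math.exp(-t_us / tau) for tau, f in COMPONENTS)

print(pulse(0.0))  # initial intensity, dominated by the fast 0.73-us component
```

Energy-dependent, anticorrelated fractions for the 0.73-μs and 16-μs components would then reproduce the observed dependence of pulse shape on gamma-ray energy.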
NASA Technical Reports Server (NTRS)
Roberts, Dar A.; Church, Richard; Ustin, Susan L.; Brass, James A. (Technical Monitor)
2001-01-01
Large urban wildfires throughout southern California have caused billions of dollars of damage and significant loss of life over the last few decades. Rapid urban growth along the wildland interface, high fuel loads and a potential increase in the frequency of large fires due to climatic change suggest that the problem will worsen in the future. Improved fire spread prediction and reduced uncertainty in assessing fire hazard would be significant, both economically and socially. Current problems in the modeling of fire spread include the role of plant community differences, spatial heterogeneity in fuels and spatio-temporal changes in fuels. In this research, we evaluated the potential of Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Airborne Synthetic Aperture Radar (AIRSAR) data for providing improved maps of wildfire fuel properties. Analysis concentrated in two areas of Southern California, the Santa Monica Mountains and Santa Barbara Front Range. Wildfire fuel information can be divided into four basic categories: fuel type, fuel load (live green and woody biomass), fuel moisture and fuel condition (live vs senesced fuels). To map fuel type, AVIRIS data were used to map vegetation species using Multiple Endmember Spectral Mixture Analysis (MESMA) and Binary Decision Trees. Green live biomass and canopy moisture were mapped using AVIRIS through analysis of the 980 nm liquid water absorption feature and compared to alternate measures of moisture and field measurements. Woody biomass was mapped using L and P band cross polarimetric data acquired in 1998 and 1999. Fuel condition was mapped using spectral mixture analysis to map green vegetation (green leaves), nonphotosynthetic vegetation (NPV; stems, wood and litter), shade and soil. Summaries describing the potential of hyperspectral and SAR data for fuel mapping are provided by Roberts et al. and Dennison et al. To utilize remotely sensed data to assess fire hazard, fuel-type maps were translated
Coincidence Proportional Counter
Manley, J H
1950-11-21
A coincidence proportional counter having a plurality of collecting electrodes, so disposed as to measure the range or energy spectrum of an ionizing-particle-emitting source such as an alpha source, is disclosed.
Seismic source characterization for the 2014 update of the U.S. National Seismic Hazard Model
Moschetti, Morgan P.; Powers, Peter; Petersen, Mark D.; Boyd, Oliver; Chen, Rui; Field, Edward H.; Frankel, Arthur; Haller, Kathleen; Harmsen, Stephen; Mueller, Charles S.; Wheeler, Russell; Zeng, Yuehua
2015-01-01
We present the updated seismic source characterization (SSC) for the 2014 update of the National Seismic Hazard Model (NSHM) for the conterminous United States. Construction of the seismic source models employs the methodology that was developed for the 1996 NSHM but includes new and updated data, data types, source models, and source parameters that reflect the current state of knowledge of earthquake occurrence and state of practice for seismic hazard analyses. We review the SSC parameterization and describe the methods used to estimate earthquake rates, magnitudes, locations, and geometries for all seismic source models, with an emphasis on new source model components. We highlight the effects that two new model components—incorporation of slip rates from combined geodetic-geologic inversions and the incorporation of adaptively smoothed seismicity models—have on probabilistic ground motions, because these sources span multiple regions of the conterminous United States and provide important additional epistemic uncertainty for the 2014 NSHM.
Forecasting Marine Corps Enlisted Attrition Through Parametric Modeling
2009-03-01
[Report documentation page fragments] Subject terms: Forecasting, Attrition, Marine Corps NEAS losses, Gompertz Model, Survival Analysis. Table-of-contents fragments: 1. Parametric Proportional Hazards Models; 2. Gompertz Models; a. Gompertz Hazard Function; b. Gompertz Cumulative ...
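The Gompertz model named in the subject terms has closed-form hazard and survivor functions; a minimal sketch is below. The parameter values are illustrative stand-ins, not estimates from the thesis.

```python
import math

def gompertz_hazard(t, a, b):
    """Gompertz hazard h(t) = a * exp(b*t): grows over time if b > 0, decays if b < 0."""
    return a * math.exp(b * t)

def gompertz_survival(t, a, b):
    """Survivor function S(t) = exp(-(a/b) * (exp(b*t) - 1)), from integrating h(t)."""
    return math.exp(-(a / b) * (math.exp(b * t) - 1.0))

# Illustrative attrition-style parameters: hazard that decreases with months of service.
a, b = 0.05, -0.1
print([round(gompertz_survival(t, a, b), 3) for t in (0, 12, 24)])
```

With b < 0 the hazard flattens out, so the survivor curve levels off rather than decaying to zero, a shape often used for attrition data.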
Measurements and Models for Hazardous Chemical and Mixed Wastes
Laurel A. Watts; Cynthia D. Holcomb; Stephanie L. Outcalt; Beverly Louie; Michael E. Mullins; Tony N. Rogers
2002-08-21
Mixed solvent aqueous waste of various chemical compositions constitutes a significant fraction of the total waste produced by industry in the United States. Not only does the chemical process industry create large quantities of aqueous waste, but the majority of the waste inventory at the DOE sites previously used for nuclear weapons production is mixed solvent aqueous waste. In addition, large quantities of waste are expected to be generated in the clean-up of those sites. In order to effectively treat, safely handle, and properly dispose of these wastes, accurate and comprehensive knowledge of basic thermophysical properties is essential. The goal of this work is to develop a phase equilibrium model for mixed solvent aqueous solutions containing salts. An equation of state was sought for these mixtures that (a) would require a minimum of adjustable parameters and (b) could be obtained from available data or data that were easily measured. A model was developed to predict vapor composition and pressure given the liquid composition and temperature. It is based on the Peng-Robinson equation of state, adapted to include non-volatile and salt components. The model itself is capable of predicting the vapor-liquid equilibria of a wide variety of systems composed of water, organic solvents, salts, nonvolatile solutes, and acids or bases. The representative system of water + acetone + 2-propanol + NaNO3 was selected to test and verify the model. Vapor-liquid equilibrium and phase density measurements were performed for this system and its constituent binaries.
Modeling geomagnetic induction hazards using a 3-D electrical conductivity model of Australia
NASA Astrophysics Data System (ADS)
Wang, Liejun; Lewis, Andrew M.; Ogawa, Yasuo; Jones, William V.; Costelloe, Marina T.
2016-12-01
The surface electric field induced by external geomagnetic source fields is modeled for a continental-scale 3-D electrical conductivity model of Australia at periods of a few minutes to a few hours. The amplitude and orientation of the induced electric field at periods of 360 s and 1800 s are presented and compared to those derived from a simplified ocean-continent (OC) electrical conductivity model. It is found that the induced electric field in the Australian region is distorted by the heterogeneous continental electrical conductivity structures and surrounding oceans. On the northern coastlines, the induced electric field is decreased relative to the simple OC model due to a reduced conductivity contrast between the seas and the enhanced conductivity structures inland. In central Australia, the induced electric field is less distorted with respect to the OC model as the location is remote from the oceans, but inland crustal high-conductivity anomalies are the major source of distortion of the induced electric field. In the west of the continent, the lower conductivity of the Western Australia Craton increases the conductivity contrast between the deeper oceans and land and significantly enhances the induced electric field. Generally, the induced electric field in southern Australia, south of latitude -20°, is higher compared to northern Australia. This paper provides a regional indicator of geomagnetic induction hazards across Australia.
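The link between ground conductivity and the induced surface electric field can be illustrated with the classical uniform half-space impedance, |Z| = sqrt(ω μ0 ρ). This is a 1-D idealisation of the 3-D conductivity modelling in the paper, with illustrative resistivity and field values:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (H/m)

def surface_e_field(period_s, resistivity_ohm_m, b_nt):
    """Surface E amplitude (V/km) over a uniform half-space for a horizontal B of b_nt (nT).

    Uses E = |Z| * H with H = B / mu0 and |Z| = sqrt(omega * mu0 * resistivity).
    """
    omega = 2.0 * math.pi / period_s
    z = math.sqrt(omega * MU0 * resistivity_ohm_m)  # impedance magnitude (ohm)
    h = (b_nt * 1e-9) / MU0                          # magnetic field intensity (A/m)
    return z * h * 1e3                               # convert V/m to V/km

# Resistive craton (1000 ohm-m) vs conductive cover (10 ohm-m) at a 360 s period.
print(surface_e_field(360, 1000, 100), surface_e_field(360, 10, 100))
```

The sqrt(ρ) dependence is why the resistive Western Australia Craton enhances the induced electric field relative to conductive regions.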
NASA Astrophysics Data System (ADS)
Komjathy, A.; Yang, Y. M.; Meng, X.; Verkhoglyadova, O. P.; Mannucci, A. J.; Langley, R. B.
2015-12-01
Natural hazards, including earthquakes, volcanic eruptions, and tsunamis, have been significant threats to humans throughout recorded history. The Global Positioning System satellites have become primary sensors to measure signatures associated with such natural hazards. These signatures typically include GPS-derived seismic deformation measurements, co-seismic vertical displacements, and real-time GPS-derived ocean buoy positioning estimates. Another way to use GPS observables is to compute the ionospheric total electron content (TEC) to measure and monitor post-seismic ionospheric disturbances caused by earthquakes, volcanic eruptions, and tsunamis. Research at the University of New Brunswick (UNB) laid the foundations to model the three-dimensional ionosphere at NASA's Jet Propulsion Laboratory by ingesting ground- and space-based GPS measurements into the state-of-the-art Global Assimilative Ionosphere Modeling (GAIM) software. As an outcome of the UNB and NASA research, new and innovative GPS applications have been invented, including the use of ionospheric measurements to detect tiny fluctuations in the GPS signals between the spacecraft and GPS receivers caused by natural hazards occurring on or near the Earth's surface. We will show examples of the early detection of natural-hazard-generated ionospheric signatures using ground-based and space-borne GPS receivers. We will also discuss recent results from the U.S. Real-time Earthquake Analysis for Disaster Mitigation Network (READI) exercises utilizing our algorithms. By studying the propagation properties of ionospheric perturbations generated by natural hazards along with applying sophisticated first-principles physics-based modeling, we are on track to develop new technologies that can potentially save human lives and minimize property damage. It is also expected that ionospheric monitoring of TEC perturbations might become an integral part of existing natural hazards warning systems.
Prediction of earthquake hazard by hidden Markov model (around Bilecik, NW Turkey)
NASA Astrophysics Data System (ADS)
Can, Ceren Eda; Ergun, Gul; Gokceoglu, Candan
2014-09-01
Earthquakes are one of the most important natural hazards to be evaluated carefully in engineering projects, due to their severely damaging effects on human life and human-made structures. The hazard of an earthquake is defined by several approaches, and consequently earthquake parameters such as the peak ground acceleration occurring in the focused area can be determined. In an earthquake-prone area, the identification of seismicity patterns is an important task for assessing seismic activity and evaluating the risk of damage and loss accompanying an earthquake occurrence. As a powerful and flexible framework to characterize temporal seismicity changes and reveal unexpected patterns, the Poisson hidden Markov model provides a better understanding of the nature of earthquakes. In this paper, a Poisson hidden Markov model is used to predict the earthquake hazard around Bilecik (NW Turkey), chosen for its important geographic location. Bilecik is in close proximity to the North Anatolian Fault Zone and is situated between Ankara and Istanbul, the two biggest cities of Turkey. Consequently, major highways, railroads and many engineering structures are being constructed in this area. The annual frequencies of earthquakes occurring within a radius of 100 km centered on Bilecik, from January 1900 to December 2012, with magnitudes (M) of at least 4.0, are modeled using a Poisson-HMM. The hazards for the next 35 years, from 2013 to 2047, are obtained from the model by forecasting the annual frequencies of M ≥ 4 earthquakes.
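The Poisson-HMM machinery can be sketched with the forward algorithm for a two-state model: filtered state probabilities feed a one-step-ahead forecast of the expected annual count. The transition matrix, state rates, and counts below are illustrative, not the fitted Bilecik values:

```python
import math

def poisson_logpmf(k, lam):
    """Log of the Poisson pmf, P(K = k) with mean lam."""
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def hmm_filter(counts, trans, rates, init):
    """Forward algorithm: log-likelihood and filtered state probabilities for a Poisson HMM."""
    alpha = [p * math.exp(poisson_logpmf(counts[0], r)) for p, r in zip(init, rates)]
    s = sum(alpha)
    loglik = math.log(s)
    alpha = [a / s for a in alpha]
    for k in counts[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in range(len(rates)))
                 * math.exp(poisson_logpmf(k, rates[j]))
                 for j in range(len(rates))]
        s = sum(alpha)
        loglik += math.log(s)
        alpha = [a / s for a in alpha]
    return loglik, alpha

# Two illustrative regimes: a quiet state (2 M>=4 events/yr) and an active one (8/yr).
rates = [2.0, 8.0]
trans = [[0.9, 0.1], [0.2, 0.8]]       # sticky regime switching
counts = [1, 3, 2, 9, 7, 8, 2, 1]      # made-up annual event counts
ll, state = hmm_filter(counts, trans, rates, [0.5, 0.5])
# One-step-ahead forecast of the expected annual number of events:
forecast = sum(state[i] * trans[i][j] * rates[j] for i in range(2) for j in range(2))
print(round(ll, 2), [round(p, 3) for p in state], round(forecast, 2))
```

In practice the rates and transition matrix are estimated (e.g. by EM) before the filtered probabilities are used to forecast future annual frequencies.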
NASA Astrophysics Data System (ADS)
Gochis, E. E.; Lechner, H. N.; Brill, K. A.; Lerner, G.; Ramos, E.
2014-12-01
Graduate students at Michigan Technological University developed the "Landslides!" activity to engage middle & high school students participating in summer engineering programs in a hands-on exploration of geologic engineering and STEM (Science, Technology, Engineering and Math) principles. The inquiry-based lesson plan is aligned to Next Generation Science Standards and is appropriate for 6th-12th grade classrooms. During the activity students focus on the factors contributing to landslide development and the engineering practices used to mitigate slope stability hazards. Students begin by comparing different soil types and by developing predictions of how sediment type may contribute to differences in slope stability. Working in groups, students then build tabletop hill-slope models from the various materials in order to engage in evidence-based reasoning and test their predictions by adding groundwater until each group's modeled slope fails. Lastly students elaborate on their understanding of landslides by designing 'engineering solutions' to mitigate the hazards observed in each model. Post-evaluations from students demonstrate that they enjoyed the hands-on nature of the activity and the application of engineering principles to mitigate a modeled natural hazard.
DEVELOPMENT AND ANALYSIS OF AIR QUALITY MODELING SIMULATIONS FOR HAZARDOUS AIR POLLUTANTS
The concentrations of five hazardous air pollutants were simulated using the Community Multi Scale Air Quality (CMAQ) modeling system. Annual simulations were performed over the continental United States for the entire year of 2001 to support human exposure estimates. Results a...
Modeling downwind hazards after an accidental release of chlorine trifluoride
Lombardi, D.A.; Cheng, Meng-Dawn
1996-05-01
A module simulating ClF{sub 3} chemical reactions with water vapor and thermodynamic processes in the atmosphere after an accidental release has been developed. This module was linked to the HGSYSTEM. Initial model runs simulate the rapid formation of HF and ClO{sub 2} after an atmospheric release of ClF{sub 3}. At distances beyond the first several meters from the release point, HF and ClO{sub 2} concentrations pose a greater threat to human health than do ClF{sub 3} concentrations. For most of the simulations, ClF{sub 3} concentrations rapidly fall below the IDLH. For releases occurring in ambient conditions with low relative humidity and/or low ambient temperature, ClF{sub 3} concentrations exceed the IDLH up to almost 500 m. The performance of this model needs to be determined for the potential release scenarios that will be considered. These release scenarios are currently being developed.
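Downwind concentration screening of the kind HGSYSTEM performs is often approximated, for a neutrally buoyant ground-level release, by a Gaussian plume. The dispersion-coefficient power laws below are illustrative stand-ins, not HGSYSTEM's parameterisation:

```python
import math

def plume_centerline(q_g_s, u_m_s, x_m):
    """Ground-level centerline concentration (g/m^3) for a continuous ground release.

    C = Q / (pi * u * sigma_y * sigma_z); the sigma power laws below are
    illustrative stand-ins for a neutral stability class.
    """
    sigma_y = 0.08 * x_m ** 0.9    # lateral spread (m), assumed power law
    sigma_z = 0.06 * x_m ** 0.85   # vertical spread (m), assumed power law
    return q_g_s / (math.pi * u_m_s * sigma_y * sigma_z)

# For a ground release, centerline concentration falls off monotonically downwind.
print([round(plume_centerline(50.0, 3.0, x), 5) for x in (100, 500, 2000)])
```

Comparing such concentrations against a threshold like the IDLH gives the kind of hazard-distance estimate quoted in the abstract; a reacting ClF3 release additionally requires the chemistry module described above.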
Eolian Modeling System: Predicting Windblown Dust Hazards in Battlefield Environments
2011-05-03
[Reference and text fragments] ... Landforms, 32, 1913-1927, 2007. Cook, J.P., and J.D. Pelletier, Relief threshold for eolian transport across alluvial fans, Journal of Geophysical Research, 112, F02026, doi:10.1029/2006JF000610, 2007. Pelletier, J.D., A Cantor set model of eolian dust accumulation on desert alluvial fan terraces ... playas and dust deposition on alluvial fans. Finally, the project made important progress in our understanding of eolian bedforms, including what ...
Measurement and Model for Hazardous Chemical and Mixed Waste
Michael E. Mullins; Tony N. Rogers; Stephanie L. Outcalt; Beverly Louie; Laurel A. Watts; Cynthia D. Holcomb
2002-07-30
Mixed solvent aqueous waste of various chemical compositions constitutes a significant fraction of the total waste produced by industry in the United States. Not only does the chemical process industry create large quantities of aqueous waste, but the majority of the waste inventory at the Department of Energy (DOE) sites previously used for nuclear weapons production is mixed solvent aqueous waste. In addition, large quantities of waste are expected to be generated in the clean-up of those sites. In order to effectively treat, safely handle, and properly dispose of these wastes, accurate and comprehensive knowledge of basic thermophysical properties is essential. The goal of this work is to develop a phase equilibrium model for mixed solvent aqueous solutions containing salts. An equation of state was sought for these mixtures that (a) would require a minimum of adjustable parameters and (b) could be obtained from available data or data that were easily measured. A model was developed to predict vapor composition and pressure given the liquid composition and temperature. It is based on the Peng-Robinson equation of state, adapted to include non-volatile and salt components. The model itself is capable of predicting the vapor-liquid equilibria of a wide variety of systems composed of water, organic solvents, salts, nonvolatile solutes, and acids or bases. The representative system of water + acetone + 2-propanol + NaNO3 was selected to test and verify the model. Vapor-liquid equilibrium and phase density measurements were performed for this system and its constituent binaries.
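The Peng-Robinson backbone of such a model reduces, for a pure component, to a cubic equation in the compressibility factor Z. A minimal sketch, using acetone-like critical constants and illustrative conditions (the paper's salt and non-volatile extensions are not included):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def pr_z_factors(T, P, Tc, Pc, omega):
    """Positive real roots of the Peng-Robinson cubic.

    Z^3 - (1-B)Z^2 + (A-3B^2-2B)Z - (AB-B^2-B^3) = 0,
    with A = aP/(RT)^2 and B = bP/(RT).
    """
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc)))**2
    a = 0.45724 * R**2 * Tc**2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    A = a * P / (R * T)**2
    B = b * P / (R * T)
    roots = np.roots([1.0, -(1.0 - B), A - 3*B**2 - 2*B, -(A*B - B**2 - B**3)])
    return sorted(z.real for z in roots if abs(z.imag) < 1e-9 and z.real > 0)

# Acetone-like constants: Tc = 508 K, Pc = 4.70 MPa, omega = 0.307, at 330 K and 1 bar.
zs = pr_z_factors(330.0, 1e5, 508.0, 4.70e6, 0.307)
print(zs)  # smallest root is liquid-like, largest is vapour-like
```

Near saturation the cubic has three real roots; the smallest and largest give the liquid and vapour molar volumes used in VLE calculations.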
Large area application of a corn hazard model. [Soviet Union
NASA Technical Reports Server (NTRS)
Ashburn, P.; Taylor, T. W. (Principal Investigator)
1981-01-01
An application test of the crop calendar portion of a corn (maize) stress indicator model developed by the early warning, crop condition assessment component of AgRISTARS was performed over the corn for grain producing regions of the U.S.S.R. during the 1980 crop year using real data. Performance of the crop calendar submodel was favorable; efficiency gains in meteorological data analysis time were on a magnitude of 85 to 90 percent.
NASA Astrophysics Data System (ADS)
Nagaoka, Tomoaki; Kunieda, Etsuo; Watanabe, Soichi
2008-12-01
The development of high-resolution anatomical voxel models of children is difficult given, inter alia, the ethical limitations on subjecting children to medical imaging. We instead used an existing voxel model of a Japanese adult and three-dimensional deformation to develop three voxel models that match the average body proportions of Japanese children at 3, 5 and 7 years old. The adult model was deformed to match the proportions of a child by using the measured dimensions of various body parts of children at 3, 5 and 7 years old and a free-form deformation technique. The three developed models represent average-size Japanese children of the respective ages. They consist of cubic voxels (2 mm on each side) and are segmented into 51 tissues and organs. We calculated the whole-body-averaged specific absorption rates (WBA-SARs) and tissue-averaged SARs for the child models for exposures to plane waves from 30 MHz to 3 GHz; these results were then compared with those for scaled down adult models. We also determined the incident electric-field strength required to produce the exposure equivalent to the ICNIRP basic restriction for general public exposure, i.e., a WBA-SAR of 0.08 W kg-1.
Global river flood hazard maps: hydraulic modelling methods and appropriate uses
NASA Astrophysics Data System (ADS)
Townend, Samuel; Smith, Helen; Molloy, James
2014-05-01
Flood hazard is not well understood or documented in many parts of the world. Consequently, the (re-)insurance sector now needs to better understand where the potential for considerable river flooding aligns with significant exposure. For example, international manufacturing companies are often attracted to countries with emerging economies, meaning that events such as the 2011 Thailand floods have resulted in many multinational businesses with assets in these regions incurring large, unexpected losses. This contribution addresses and critically evaluates the hydraulic methods employed to develop a consistent global-scale set of river flood hazard maps, used to fill the knowledge gap outlined above. The basis of the modelling approach is an innovative, bespoke 1D/2D hydraulic model (RFlow) which has been used to model a global river network of over 5.3 million kilometres. Estimated flood peaks at each of these model nodes are determined using an empirically based rainfall-runoff approach linking design rainfall to design river flood magnitudes. The hydraulic model is used to determine extents and depths of floodplain inundation following river bank overflow. From this, deterministic flood hazard maps are calculated for several design return periods between 20 and 1,500 years. Firstly, we will discuss the rationale behind the appropriate hydraulic modelling methods and inputs chosen to produce a consistent global-scale river flood hazard map. This will highlight how a model designed to work with global datasets can be more favourable for hydraulic modelling at the global scale and why innovative techniques customised for broad-scale use are preferable to modifying existing hydraulic models. Similarly, the advantages and disadvantages of both 1D and 2D modelling will be explored and balanced against the time, computer and human resources available, particularly when using a Digital Surface Model at 30 m resolution. Finally, we will suggest some
Building a risk-targeted regional seismic hazard model for South-East Asia
NASA Astrophysics Data System (ADS)
Woessner, J.; Nyst, M.; Seyhan, E.
2015-12-01
The last decade has tragically shown the social and economic vulnerability of countries in South-East Asia to earthquake hazard and risk. While many disaster mitigation programs and initiatives to improve societal earthquake resilience are under way with the focus on saving lives and livelihoods, the risk management sector is challenged to develop appropriate models to cope with the economic consequences and impact on the insurance business. We present the source model and ground motion model components suitable for a South-East Asia earthquake risk model covering Indonesia, Malaysia, the Philippines and the Indochina countries. The source model builds upon refined modelling approaches to characterize 1) seismic activity on crustal faults from geologic and geodetic data, 2) seismicity along the interface of subduction zones and within the slabs, and 3) earthquakes not occurring on mapped fault structures. We elaborate on building a self-consistent rate model for the hazardous crustal fault systems (e.g. the Sumatra fault zone, the Philippine fault zone) as well as the subduction zones, and showcase some characteristics and sensitivities, due to existing uncertainties, in the rate and hazard space using a well-selected suite of ground motion prediction equations. Finally, we analyze the source model by quantifying the contribution by source type (e.g., subduction zone, crustal fault) to typical risk metrics (e.g., return-period losses, average annual loss) and reviewing their relative impact on various lines of business.
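The risk metrics mentioned (return-period losses, average annual loss) follow directly from an exceedance-probability curve: the average annual loss is the integral of loss against the annual exceedance rate. A sketch with a made-up curve:

```python
def average_annual_loss(ep_curve):
    """AAL from an exceedance-probability curve, by the trapezoid rule.

    ep_curve: (annual_exceedance_rate, loss) pairs; each rate increment is
    weighted by the mean loss of the bracketing points.
    """
    pts = sorted(ep_curve, key=lambda p: p[0], reverse=True)  # high rate -> low rate
    aal = 0.0
    for (r1, l1), (r2, l2) in zip(pts, pts[1:]):
        aal += (r1 - r2) * (l1 + l2) / 2.0
    return aal

# Illustrative EP curve: (annual rate of exceedance, loss in $M).
curve = [(0.1, 1.0), (0.01, 20.0), (0.002, 100.0), (0.0004, 400.0)]
print(round(average_annual_loss(curve), 3))
```

The return-period loss is read directly off the same curve: e.g. the 100-year loss is the loss at an annual exceedance rate of 0.01.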
NASA Astrophysics Data System (ADS)
Xiong, Liyang; Shi, Wenjia; Tang, Chao
2016-08-01
Adaptation is a ubiquitous feature in biological sensory and signaling networks. It has been suggested that adaptive systems may follow certain simple design principles across diverse organisms, cells and pathways. One class of networks that can achieve adaptation utilizes an incoherent feedforward control, in which two parallel signaling branches exert opposite but proportional effects on the output at steady state. In this paper, we generalize this adaptation mechanism by establishing a steady-state proportionality relationship among a subset of nodes in a network. Adaptation can be achieved by using any two nodes in the sub-network to respectively regulate the output node positively and negatively. We focus on enzyme networks and first identify basic regulation motifs consisting of two and three nodes that can be used to build small networks with proportional relationships. Larger proportional networks can then be constructed modularly similar to LEGOs. Our method provides a general framework to construct and analyze a class of proportional and/or adaptation networks with arbitrary size, flexibility and versatile functional features.
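A minimal incoherent feedforward sketch of the adaptation mechanism described above: the input u drives the output directly (positive branch) and through a slow node y that is proportional to u at steady state (negative branch). After a step in u, the output responds transiently and then returns to its pre-step level. The equations and parameters are an illustrative linear caricature, not the enzyme-network motifs of the paper:

```python
def simulate_iffl(steps=4000, dt=0.01):
    """Euler integration of dy/dt = a*(u - y), dz/dt = k*(u - y - z).

    y slowly tracks u (the proportional node); z is driven by u - y, so at
    steady state y = u and z returns to zero: perfect adaptation.
    """
    a, k = 0.5, 5.0
    y = z = 0.0
    peak = 0.0
    for i in range(steps):
        u = 1.0 if i * dt >= 1.0 else 0.0   # step input at t = 1
        y += dt * a * (u - y)
        z += dt * k * (u - y - z)
        peak = max(peak, abs(z))
    return peak, z

peak, z_final = simulate_iffl()
print(round(peak, 3), round(z_final, 6))
```

The transient peak is large while the steady-state output is restored to zero, which is the defining signature of adaptation via proportional antagonistic branches.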
How do new fault data and models affect seismic hazard results? Examples from southeast Spain
NASA Astrophysics Data System (ADS)
Gaspar-Escribano, Jorge M.; Belén Benito, M.; Staller, Alejandra; Ruiz Barajas, Sandra; Quirós, Ligia E.
2016-04-01
In this work, we study the impact of different approaches to incorporating faults in a seismic hazard assessment analysis. Firstly, we consider two different methods to distribute the seismicity of the study area into faults and area sources, based on magnitude partitioning and on moment rate distribution. We use two recurrence models to characterize fault activity: the characteristic earthquake model and the modified Gutenberg-Richter exponential frequency-magnitude distribution. An application of the work is developed in the region of Murcia (southeastern Spain), due to the availability of fault data and because it is one of the areas of Spain with the highest seismic hazard. The parameters used to model fault sources are derived from paleoseismological and field studies obtained from the literature and online repositories. Additionally, for some significant faults only, geodetically derived slip rates are used to compute recurrence periods. The results of all the seismic hazard computations carried out using different models and data are represented in maps of expected peak ground accelerations for a return period of 475 years. Maps of coefficients of variation are presented to constrain the variability of the end results with respect to different input models and values. Additionally, the different hazard maps obtained in this study are compared with the seismic hazard maps obtained in previous work for the entire Spanish territory and more specifically for the region of Murcia. This work is developed in the context of the MERISUR project (ref. CGL2013-40492-R), with funding from the Spanish Ministry of Economy and Competitiveness.
NASA Astrophysics Data System (ADS)
Chan, C. H.; Wang, Y.; Thant, M.; Maung Maung, P.; Sieh, K.
2015-12-01
We have constructed an earthquake and fault database, conducted a series of ground-shaking scenarios, and proposed seismic hazard maps for all of Myanmar and hazard curves for selected cities. Our earthquake database integrates the ISC, ISC-GEM and global ANSS Comprehensive Catalogues, and includes harmonized magnitude scales without duplicate events. Our active fault database includes active fault data from previous studies. Using the parameters from these updated databases (i.e., the Gutenberg-Richter relationship, slip rate, maximum magnitude and the elapsed time since the last events), we have determined the earthquake recurrence models of seismogenic sources. To evaluate ground-shaking behaviour in different tectonic regimes, we conducted a series of tests by matching the modelled ground motions to the felt intensities of earthquakes. From the case of the 1975 Bagan earthquake, we determined that the ground motion prediction equations (GMPEs) of Atkinson and Boore (2003) best fit the behaviour of subduction events. Similarly, the 2011 Tarlay and 2012 Thabeikkyin events suggested that the GMPEs of Akkar and Cagnan (2010) best fit crustal earthquakes. We thus incorporated the best-fitting GMPEs and site conditions based on Vs30 (the average shear velocity down to 30 m depth), from analysis of topographic slope and microtremor array measurements, to assess seismic hazard. The hazard is highest in regions close to the Sagaing Fault and along the western coast of Myanmar, since seismic sources there produce earthquakes at short intervals and/or their last events occurred a long time ago. The hazard curves for the cities of Bago, Mandalay, Sagaing, Taungoo and Yangon show higher hazards for sites close to an active fault or with a low Vs30, e.g., downtown Sagaing and the Shwemawdaw Pagoda in Bago.
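A hazard curve of the kind described combines source recurrence rates with a GMPE's exceedance probability. A stripped-down sketch with a truncated Gutenberg-Richter source and an illustrative lognormal GMPE (not the Atkinson-Boore or Akkar-Cagnan forms used in the study):

```python
import math

def rate_of_exceedance(pga_g, a_val=4.0, b_val=1.0, mmin=5.0, mmax=8.0, dist_km=30.0):
    """Annual rate of exceeding pga_g from one source, summed over a discretised
    Gutenberg-Richter magnitude range. All parameter values are illustrative."""
    total = 0.0
    dm = 0.1
    m = mmin
    while m < mmax:
        # Incremental annual rate for magnitudes in [m, m + dm), truncated G-R.
        rate = 10**(a_val - b_val * m) - 10**(a_val - b_val * (m + dm))
        # Illustrative GMPE: ln(PGA) is Normal(mean_ln, sigma) at this m and distance.
        mean_ln = -1.0 + 1.2 * m - 1.5 * math.log(dist_km) - math.log(9.81)
        sigma = 0.6
        p_exceed = 0.5 * math.erfc((math.log(pga_g) - mean_ln) / (sigma * math.sqrt(2)))
        total += rate * p_exceed
        m += dm
    return total

# Hazard curve: annual exceedance rate drops as the ground-motion level rises.
print([round(rate_of_exceedance(x), 4) for x in (0.05, 0.1, 0.2, 0.4)])
```

A full PSHA additionally integrates over distance and sums many sources; the hazard curves for Bago, Mandalay, and the other cities are the multi-source analogue of this single-source calculation.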
Hofman, Abe D; Visser, Ingmar; Jansen, Brenda R J; van der Maas, Han L J
2015-01-01
We propose and test three statistical models for the analysis of children's responses to the balance scale task, a seminal task to study proportional reasoning. We use a latent class modelling approach to formulate a rule-based latent class model (RB LCM) following from a rule-based perspective on proportional reasoning and a new statistical model, the Weighted Sum Model, following from an information-integration approach. Moreover, a hybrid LCM using item covariates is proposed, combining aspects of both a rule-based and information-integration perspective. These models are applied to two different datasets, a standard paper-and-pencil test dataset (N = 779), and a dataset collected within an online learning environment that included direct feedback, time-pressure, and a reward system (N = 808). For the paper-and-pencil dataset the RB LCM resulted in the best fit, whereas for the online dataset the hybrid LCM provided the best fit. The standard paper-and-pencil dataset yielded more evidence for distinct solution rules than the online data set in which quantitative item characteristics are more prominent in determining responses. These results shed new light on the discussion on sequential rule-based and information-integration perspectives of cognitive development.
Advances in National Capabilities for Consequence Assessment Modeling of Airborne Hazards
Nasstrom, J; Sugiyama, G; Foster, K; Larsen, S; Kosovic, B; Eme, B; Walker, H; Goldstein, P; Lundquist, J; Pobanz, B; Fulton, J
2007-11-26
This paper describes ongoing advancement of airborne hazard modeling capabilities in support of multiple agencies through the National Atmospheric Release Advisory Center (NARAC) and the Interagency Atmospheric Modeling and Atmospheric Assessment Center (IMAAC). A suite of software tools developed by Lawrence Livermore National Laboratory (LLNL) and collaborating organizations includes simple stand-alone, local-scale plume modeling tools for end user's computers, Web- and Internet-based software to access advanced 3-D flow and atmospheric dispersion modeling tools and expert analysis from the national center at LLNL, and state-of-the-science high-resolution urban models and event reconstruction capabilities.
New Elements To Consider When Modeling the Hazards Associated with Botulinum Neurotoxin in Food.
Ihekwaba, Adaoha E C; Mura, Ivan; Malakar, Pradeep K; Walshaw, John; Peck, Michael W; Barker, G C
2015-09-08
Botulinum neurotoxins (BoNTs) produced by the anaerobic bacterium Clostridium botulinum are the most potent biological substances known to mankind. BoNTs are the agents responsible for botulism, a rare condition affecting the neuromuscular junction and causing a spectrum of diseases ranging from mild cranial nerve palsies to acute respiratory failure and death. BoNTs are a potential biowarfare threat and a public health hazard, since outbreaks of foodborne botulism are caused by the ingestion of preformed BoNTs in food. Currently, mathematical models relating to the hazards associated with C. botulinum, which are largely empirical, make major contributions to botulinum risk assessment. Evaluated using statistical techniques, these models simulate the response of the bacterium to environmental conditions. Though empirical models have been successfully incorporated into risk assessments to support food safety decision making, this process includes significant uncertainties so that relevant decision making is frequently conservative and inflexible. Progression involves encoding into the models cellular processes at a molecular level, especially the details of the genetic and molecular machinery. This addition drives the connection between biological mechanisms and botulism risk assessment and hazard management strategies. This review brings together elements currently described in the literature that will be useful in building quantitative models of C. botulinum neurotoxin production. Subsequently, it outlines how the established form of modeling could be extended to include these new elements. Ultimately, this can offer further contributions to risk assessments to support food safety decision making.
Lee, Saro; Park, Inhye
2013-09-30
Subsidence of ground caused by underground mines poses hazards to human life and property. This study analyzed ground-subsidence hazard using factors that can affect ground subsidence and a decision tree approach in a geographic information system (GIS). The study area was Taebaek, Gangwon-do, Korea, where many abandoned underground coal mines exist. Spatial data, topography, geology, and various ground-engineering data for the subsidence area were collected and compiled in a database for mapping ground-subsidence hazard (GSH). The subsidence area was randomly split 50/50 for training and validation of the models. A data-mining classification technique was applied to the GSH mapping, and decision trees were constructed using the chi-squared automatic interaction detector (CHAID) and the quick, unbiased, and efficient statistical tree (QUEST) algorithms. The frequency ratio model, a probabilistic model, was also applied to the GSH mapping for comparison. The resulting GSH maps were validated using area-under-the-curve (AUC) analysis with the subsidence area data that had not been used for training the model. The highest accuracy was achieved by the decision tree model using the CHAID algorithm (94.01%), compared with the QUEST algorithm (90.37%) and the frequency ratio model (86.70%). These accuracies are higher than previously reported results for decision trees. Decision tree methods can therefore be used efficiently for GSH analysis and might be widely used for prediction of various spatial events.
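The frequency ratio scoring and the AUC validation described above are straightforward to compute. A minimal sketch (the function names and raster-as-array layout are our assumptions, not the authors' code):

```python
import numpy as np

def frequency_ratio(factor_classes, subsidence_mask):
    """Frequency ratio per factor class: (% of subsidence cells in the class)
    divided by (% of all cells in the class); values > 1 suggest higher hazard."""
    ratios = {}
    total_cells = factor_classes.size
    total_events = subsidence_mask.sum()
    for c in np.unique(factor_classes):
        in_class = factor_classes == c
        pct_events = subsidence_mask[in_class].sum() / total_events
        pct_cells = in_class.sum() / total_cells
        ratios[c] = pct_events / pct_cells
    return ratios

def auc(scores, labels):
    """Area under the curve via the rank-sum (Mann-Whitney) statistic."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels.astype(bool)
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

Summing the per-class ratios of each factor at every cell yields a hazard score map, which the held-out 50% of the subsidence data can then validate via `auc`.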
A probabilistic tornado wind hazard model for the continental United States
Hossain, Q; Kimball, J; Mensing, R; Savy, J
1999-04-19
A probabilistic tornado wind hazard model for the continental United States (CONUS) is described. The model incorporates both aleatory (random) and epistemic uncertainties associated with quantifying the tornado wind hazard parameters. The temporal occurrence of tornadoes within the CONUS is assumed to be a Poisson process. A spatial distribution of tornado touchdown locations is developed empirically based on the observed historical events within the CONUS. The hazard model is an areal probability model that takes into consideration the size and orientation of the facility, the length and width of the tornado damage area (idealized as a rectangle and dependent on the tornado intensity scale), wind speed variation within the damage area, tornado intensity classification errors (i.e., errors in assigning a Fujita intensity scale based on surveyed damage), and the tornado path direction. Epistemic uncertainties in describing the distributions of the aleatory variables are accounted for by using more than one distribution model to describe aleatory variations. The epistemic uncertainties are based on inputs from a panel of experts. A computer program, TORNADO, has been developed incorporating this model; features of this program are also presented.
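The Poisson occurrence assumption leads to a simple point-target strike probability; the sketch below is ours and ignores facility size, orientation, and classification error, keeping only the rectangular damage-area idea:

```python
import math

def strike_rate(regional_rate, damage_length_km, damage_width_km, region_area_km2):
    """Annual strike rate for a point target: the regional tornado rate times
    the fraction of the region covered by one tornado's damage rectangle."""
    return regional_rate * damage_length_km * damage_width_km / region_area_km2

def strike_probability(rate_per_year, years):
    """Probability of at least one strike in `years`, assuming Poisson occurrences."""
    return 1.0 - math.exp(-rate_per_year * years)
```

For example, 10 tornadoes per year over a 10,000 km2 region, each with a 10 km x 0.1 km damage rectangle, gives a point-strike rate of about 1e-3 per year.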
Selecting Proportional Reasoning Tasks
ERIC Educational Resources Information Center
de la Cruz, Jessica A.
2013-01-01
With careful consideration given to task selection, students can construct their own solution strategies to solve complex proportional reasoning tasks while the teacher's instructional goals are still met. Several aspects of the tasks should be considered including their numerical structure, context, difficulty level, and the strategies they are…
Cognitive and Metacognitive Aspects of Proportional Reasoning
ERIC Educational Resources Information Center
Modestou, Modestina; Gagatsis, Athanasios
2010-01-01
In this study we attempt to propose a new model of proportional reasoning based on both bibliographical and research data. This is pursued with the help of three written tests involving analogical, proportional, and non-proportional situations, which were administered to pupils from grades 7 to 9. The results suggest the existence of a…
1993-03-01
...consistent with the initial concentration at the source. Input data for the Desert Tortoise trials (DT1, DT2, DT3, DT4); chemical released: anhydrous ammonia. ...phosgene (COCl2), anhydrous ammonia (NH3), chlorine (Cl2), sulfur dioxide (SO2), hydrogen sulfide (H2S), fluorine (F2), and hydrogen fluoride (HF)... D.N., Yohn, J.F., Koopman, R.P. and Brown, T.C., "Conduct of Anhydrous Hydrofluoric Acid Spill Experiments," Proc. Int. Conf. on Vapor Cloud Modeling
Critical load analysis in hazard assessment of metals using a Unit World Model.
Gandhi, Nilima; Bhavsar, Satyendra P; Diamond, Miriam L
2011-09-01
A Unit World approach has been used extensively to rank chemicals for their hazards and to understand differences in chemical behavior. Whereas the fate and effects of an organic chemical in a Unit World Model (UWM) analysis vary systematically according to one variable (fraction of organic carbon), and the chemicals have a singular ranking regardless of environmental characteristics, metals can change their hazard ranking according to freshwater chemistry, notably pH and dissolved organic carbon (DOC). Consequently, developing a UWM approach for metals requires selecting a series of representative freshwater chemistries, based on an understanding of the sensitivity of model results to this chemistry. Here we analyze results from a UWM for metals with the goal of informing the selection of appropriate freshwater chemistries for a UWM. The UWM loosely couples the biotic ligand model (BLM) to a geochemical speciation model (Windermere Humic Adsorption Model [WHAM]) and then to the multi-species fate transport-speciation (Transpec) model. The UWM is applied to estimate the critical load (CL) of cationic metals Cd, Cu, Ni, Pb, and Zn, using three lake chemistries that vary in trophic status, pH, and other parameters. The model results indicated a difference of four orders of magnitude in particle-to-total dissolved partitioning (K(d)) that translated into minimal differences in fate because of the short water residence time used. However, a maximum 300-fold difference was calculated in Cu toxicity among the three chemistries and three aquatic organisms. Critical loads were lowest (greatest hazard) in the oligotrophic water chemistry and highest (least hazard) in the eutrophic water chemistry, despite the highest fraction of free metal ion as a function of total metal occurring in the mesotrophic system, where toxicity was ameliorated by competing cations. Water hardness, DOC, and pH had the greatest influence on CL, because of the influence of these factors on aquatic
NASA Astrophysics Data System (ADS)
Komjathy, Attila; Yang, Yu-Ming; Meng, Xing; Verkhoglyadova, Olga; Mannucci, Anthony J.; Langley, Richard B.
2016-07-01
Natural hazards including earthquakes, volcanic eruptions, and tsunamis have been significant threats to humans throughout recorded history. Receivers for global navigation satellite systems (GNSS), including the Global Positioning System (GPS), have become primary sensors to measure signatures associated with natural hazards. These signatures typically include GPS-derived seismic deformation measurements, coseismic vertical displacements, and real-time GPS-derived ocean buoy positioning estimates. Another way to use GPS observables is to compute the ionospheric total electron content (TEC) to measure, model, and monitor postseismic ionospheric disturbances caused by, e.g., earthquakes, volcanic eruptions, and tsunamis. In this paper, we review research progress at the Jet Propulsion Laboratory and elsewhere using examples of ground-based and spaceborne observation of natural hazards that generated TEC perturbations. We present results for state-of-the-art imaging using ground-based and spaceborne ionospheric measurements and coupled atmosphere-ionosphere modeling of ionospheric TEC perturbations. We also report advancements and chart future directions in modeling and inversion techniques to estimate tsunami wave heights and ground surface displacements using TEC measurements and error estimates. Our initial retrievals strongly suggest that both ground-based and spaceborne GPS remote sensing techniques could play a critical role in the detection and imaging of upper-atmosphere signatures of natural hazards including earthquakes and tsunamis. We found that combining ground-based and spaceborne measurements may be crucial in estimating critical geophysical parameters such as tsunami wave heights and ground surface displacements using TEC observations. The GNSS-based remote sensing of natural-hazard-induced ionospheric disturbances could be applied to and used in operational tsunami and earthquake early warning systems.
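The TEC computation mentioned above exploits the dispersive ionospheric group delay (approximately 40.3*TEC/f^2). A minimal dual-frequency sketch (interfrequency biases and the mapping from slant to vertical TEC are deliberately ignored):

```python
F1, F2 = 1575.42e6, 1227.60e6  # GPS L1 and L2 carrier frequencies, Hz

def slant_tec(p1_m, p2_m):
    """Slant TEC in TECU (1 TECU = 1e16 el/m^2) from dual-frequency
    pseudoranges P1, P2 (m): TEC = f1^2 f2^2 / (40.3 (f1^2 - f2^2)) * (P2 - P1)."""
    factor = (F1**2 * F2**2) / (40.3 * (F1**2 - F2**2))
    return factor * (p2_m - p1_m) / 1e16
```

In practice carrier-phase observables are used to smooth the noisy pseudorange combination, but the geometry-free differencing shown here is the core of the retrieval.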
NASA Astrophysics Data System (ADS)
Chartier, Thomas; Scotti, Oona; Boiselet, Aurelien; Lyon-Caen, Hélène
2016-04-01
Including faults in probabilistic seismic hazard assessment tends to increase the degree of uncertainty in the results due to the intrinsically uncertain nature of the fault data. This is especially the case in the low-to-moderate-seismicity regions of Europe, where slow-slipping faults are difficult to characterize. In order to better understand the key parameters that control the uncertainty in fault-related hazard computations, we propose to build an analytic tool that provides a clear link between the different components of the fault-related hazard computations and their impact on the results. This will allow us to identify the important parameters that need to be better constrained in order to reduce the resulting uncertainty in hazard, and also to provide a more hazard-oriented strategy for collecting relevant fault parameters in the field. The tool will be illustrated through the example of the West Corinth rift fault models. Recent work performed in the gulf has shown the complexity of the normal faulting system that is accommodating the extensional deformation of the rift. A logic-tree approach is proposed to account for this complexity and the multiplicity of scientifically defendable interpretations. At the nodes of the logic tree, different options are considered for each step of the fault-related seismic hazard computation. The first nodes represent the uncertainty in the geometries of the faults and their slip rates, which can derive from different data and methodologies. The subsequent node explores, for a given geometry and slip rate of the faults, different earthquake rupture scenarios that may occur in the complex network of faults. The idea is to allow the possibility of several fault segments breaking together in a single rupture scenario. To build these multiple-fault-segment scenarios, two approaches are considered: one based on simple rules (e.g., a minimum distance between faults) and a second one that relies on physically
Schlüter, Daniela K; Ndeffo-Mbah, Martial L; Takougang, Innocent; Ukety, Tony; Wanji, Samuel; Galvani, Alison P; Diggle, Peter J
2016-12-01
Lymphatic Filariasis and Onchocerciasis (river blindness) constitute pressing public health issues in tropical regions. Global elimination programs, involving mass drug administration (MDA), have been launched by the World Health Organisation. Although the drugs used are generally well tolerated, individuals who are highly co-infected with Loa loa are at risk of experiencing serious adverse events. Highly infected individuals are more likely to be found in communities with high prevalence. An understanding of the relationship between individual infection and population-level prevalence can therefore inform decisions on whether MDA can be safely administered in an endemic community. Based on Loa loa infection intensity data from individuals in Cameroon, the Republic of the Congo and the Democratic Republic of the Congo we develop a statistical model for the distribution of infection levels in communities. We then use this model to make predictive inferences regarding the proportion of individuals whose parasite count exceeds policy-relevant levels. In particular we show how to exploit the positive correlation between community-level prevalence and intensity of infection in order to predict the proportion of highly infected individuals in a community given only prevalence data from the community in question. The resulting prediction intervals are not substantially wider, and in some cases narrower, than the corresponding binomial confidence intervals obtained from data that include measurements of individual infection levels. Therefore the model developed here facilitates the estimation of the proportion of individuals highly infected with Loa loa using only estimated community level prevalence. It can be used to assess the risk of rolling out MDA in a specific community, or to guide policy decisions.
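The key idea, predicting the proportion of highly infected individuals from community prevalence alone, can be illustrated with a toy parametric version of such a model; the distributional form and every parameter value below are illustrative assumptions, not the fitted model from the paper:

```python
import math

def norm_sf(x):
    """Standard normal survival function P(Z > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def prop_highly_infected(prevalence, threshold=30000.0,
                         mu0=2.0, slope=2.5, sigma=1.0):
    """Predicted proportion of people whose parasite count (per mL) exceeds
    `threshold`, assuming log10 counts among infected individuals are Normal
    with a mean that increases with community prevalence -- the positive
    prevalence-intensity correlation that the paper exploits."""
    mu = mu0 + slope * prevalence
    z = (math.log10(threshold) - mu) / sigma
    return prevalence * norm_sf(z)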
Jones, Jeanne M.; Ng, Peter; Wood, Nathan J.
2014-01-01
Recent disasters such as the 2011 Tohoku, Japan, earthquake and tsunami; the 2013 Colorado floods; and the 2014 Oso, Washington, mudslide have raised awareness of catastrophic, sudden-onset hazards that arrive within minutes of the events that trigger them, such as local earthquakes or landslides. Due to the limited amount of time between generation and arrival of sudden-onset hazards, evacuations are typically self-initiated, on foot, and across the landscape (Wood and Schmidtlein, 2012). Although evacuation to naturally occurring high ground may be feasible in some vulnerable communities, evacuation modeling has demonstrated that other communities may require vertical-evacuation structures within a hazard zone, such as berms or buildings, if at-risk individuals are to survive some types of sudden-onset hazards (Wood and Schmidtlein, 2013). Researchers use both static least-cost-distance (LCD) and dynamic agent-based models to assess the pedestrian evacuation potential of vulnerable communities. Although both types of models help to understand the evacuation landscape, LCD models provide a more general overview that is independent of population distributions, which may be difficult to quantify given the dynamic spatial and temporal nature of populations (Wood and Schmidtlein, 2012). Recent LCD efforts related to local tsunami threats have focused on an anisotropic (directionally dependent) path distance modeling approach that incorporates travel directionality, multiple travel speed assumptions, and cost surfaces that reflect variations in slope and land cover (Wood and Schmidtlein, 2012, 2013). The Pedestrian Evacuation Analyst software implements this anisotropic path-distance approach for pedestrian evacuation from sudden-onset hazards, with a particular focus at this time on local tsunami threats. The model estimates evacuation potential based on elevation, direction of movement, land cover, and travel speed and creates a map showing travel times to safety (a
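The anisotropic path-distance idea, travel cost that depends on direction of movement through slope, can be sketched as a Dijkstra search outward from all safety cells. The slope-speed constants below are a Tobler-style illustration, not the Pedestrian Evacuation Analyst's actual parameters:

```python
import heapq, math

def evacuation_times(elev, safe_cells, cell_size=10.0, base_speed=1.1):
    """Least-cost travel time (s) to safety for every cell of an elevation
    grid, via Dijkstra from all safe cells over 8-connected neighbors.
    Edge time = distance / speed, with speed damped on uphill travel
    (Tobler-style exponential penalty; constants illustrative)."""
    rows, cols = len(elev), len(elev[0])
    time = [[float("inf")] * cols for _ in range(rows)]
    pq = [(0.0, r, c) for r, c in safe_cells]
    for _, r, c in pq:
        time[r][c] = 0.0
    heapq.heapify(pq)
    while pq:
        t, r, c = heapq.heappop(pq)
        if t > time[r][c]:
            continue  # stale queue entry
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == dc == 0:
                    continue
                nr, nc = r + dr, c + dc
                if not (0 <= nr < rows and 0 <= nc < cols):
                    continue
                dist = cell_size * math.hypot(dr, dc)
                # slope experienced walking from (nr, nc) toward (r, c)
                slope = (elev[r][c] - elev[nr][nc]) / dist
                speed = base_speed * math.exp(-3.5 * abs(slope + 0.05))
                nt = t + dist / speed
                if nt < time[nr][nc]:
                    time[nr][nc] = nt
                    heapq.heappush(pq, (nt, nr, nc))
    return time
```

A land-cover speed multiplier per cell would slot into the `speed` expression; running Dijkstra from the safety cells rather than from each evacuee is what makes the result independent of population distribution.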
Multiwire proportional chamber development
NASA Technical Reports Server (NTRS)
Doolittle, R. F.; Pollvogt, U.; Eskovitz, A. J.
1973-01-01
The development of large-area multiwire proportional chambers (MWPCs), to be used as high-resolution spatial detectors in cosmic ray experiments, is described. A readout system was developed which uses a directly coupled, lumped-element delay-line whose characteristics are independent of the MWPC design. A complete analysis of the delay-line and the readout electronic system shows that a spatial resolution of about 0.1 mm can be reached with the MWPC operating in the strictly proportional region. This was confirmed by measurements with a small MWPC and Fe-55 X-rays. A simplified analysis was carried out to estimate the theoretical limit of spatial resolution due to delta-rays, spread of the discharge along the anode wire, and inclined trajectories. To calculate the gas gain of MWPCs of different geometrical configurations, a method was developed which is based on knowledge of the first Townsend coefficient of the chamber gas.
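A gas-gain method based on the first Townsend coefficient amounts to integrating that coefficient over the avalanche region of the field around the anode wire. A sketch for the idealized coaxial geometry E(r) = V / (r ln(b/a)) (the parametrization of alpha versus field is gas-dependent and left to the caller):

```python
import math

def gas_gain(voltage, a_wire, b_cathode, alpha, e_crit, n=10000):
    """Gas gain G = exp( integral of alpha(E(r)) dr ) over the avalanche
    region of a coaxial counter, i.e. where E(r) = V/(r ln(b/a)) exceeds
    e_crit. `alpha` maps field strength (V/m) to the first Townsend
    coefficient (1/m); midpoint-rule integration with n steps."""
    ln_ba = math.log(b_cathode / a_wire)
    r_crit = min(b_cathode, voltage / (e_crit * ln_ba))
    if r_crit <= a_wire:
        return 1.0  # field never exceeds the multiplication threshold
    dr = (r_crit - a_wire) / n
    integral = 0.0
    for i in range(n):
        r = a_wire + (i + 0.5) * dr
        integral += alpha(voltage / (r * ln_ba)) * dr
    return math.exp(integral)
```

For an MWPC the field near each anode wire is approximately coaxial, so the same integral applies with an effective cathode radius derived from the wire pitch and gap.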
Data Model for Multi Hazard Risk Assessment Spatial Support Decision System
NASA Astrophysics Data System (ADS)
Andrejchenko, Vera; Bakker, Wim; van Westen, Cees
2014-05-01
The goal of the CHANGES Spatial Decision Support System is to support end-users in making decisions related to risk-reduction measures for areas at risk from multiple hydro-meteorological hazards. The crucial parts in the design of the system are the user requirements, the data model, the data storage and management, and the relationships between the objects in the system. The implementation of the data model is carried out entirely with an open source database management system with a spatial extension. The web application is implemented using open source geospatial technologies, with PostGIS as the database, Python for scripting, and GeoServer and JavaScript libraries for visualization and the client-side user interface. The model can handle information from different study areas (currently, study areas from France, Romania, Italy, and Poland are considered). Furthermore, the data model handles information about administrative units; projects accessible by different types of users; user-defined hazard types (floods, snow avalanches, debris flows, etc.); hazard intensity maps of different return periods; spatial probability maps; elements-at-risk maps (buildings, land parcels, linear features, etc.); and economic and population vulnerability information dependent on the hazard type and the type of element at risk, in the form of vulnerability curves. The system has an inbuilt database of vulnerability curves, but users can also add their own. Included in the model is the management of a combination of different scenarios (e.g. related to climate change, land use change or population change) and alternatives (possible risk-reduction measures), as well as data structures for saving the calculated economic or population loss or exposure per element at risk, aggregation of the loss and exposure using the administrative unit maps, and finally, producing the risk maps. The risk data can be used for cost-benefit analysis (CBA) and spatial multi-criteria evaluation (SMCE).
Earthquake Rate Models for Evolving Induced Seismicity Hazard in the Central and Eastern US
NASA Astrophysics Data System (ADS)
Llenos, A. L.; Ellsworth, W. L.; Michael, A. J.
2015-12-01
Injection-induced earthquake rates can vary rapidly in space and time, which presents significant challenges to traditional probabilistic seismic hazard assessment methodologies that are based on a time-independent model of mainshock occurrence. To help society cope with rapidly evolving seismicity, the USGS is developing one-year hazard models for areas of induced seismicity in the central and eastern US to forecast the shaking due to all earthquakes, including aftershocks, which are generally omitted from hazard assessments (Petersen et al., 2015). However, the spatial and temporal variability of the earthquake rates makes them difficult to forecast even on time scales as short as one year. An initial approach is to use the previous year's seismicity rate to forecast the next year's seismicity rate. However, in places such as northern Oklahoma the rates vary so rapidly over time that a simple linear extrapolation does not accurately forecast the future, even when the variability in the rates is modeled with simulations based on an Epidemic-Type Aftershock Sequence (ETAS) model (Ogata, JASA, 1988) to account for earthquake clustering. Instead of relying on a fixed time period for rate estimation, we explore another way to determine when the earthquake rate should be updated. This approach could also objectively identify new areas where the induced seismicity hazard model should be applied. We will estimate the background seismicity rate by optimizing a single set of ETAS aftershock triggering parameters across the most active induced seismicity zones -- Oklahoma, Guy-Greenbrier, the Raton Basin, and the Azle-Dallas-Fort Worth area -- with individual background rate parameters in each zone. The full seismicity rate, with uncertainties, can then be estimated using ETAS simulations, and changes in rate can be detected by applying change-point analysis in ETAS-transformed time with methods already developed for Poisson processes.
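The ETAS conditional intensity underlying such simulations has a compact form (Ogata, 1988); a direct sketch, with parameter names ours:

```python
import math

def etas_rate(t, events, mu, K, alpha, c, p, m_ref):
    """ETAS conditional intensity at time t:
        lambda(t) = mu + sum_i K * exp(alpha*(m_i - m_ref)) / (t - t_i + c)**p
    where `events` is a list of (t_i, m_i) for past earthquakes and mu is
    the background (e.g. induced) rate being estimated."""
    rate = mu
    for t_i, m_i in events:
        if t_i < t:
            rate += K * math.exp(alpha * (m_i - m_ref)) / ((t - t_i + c) ** p)
    return rate
```

Fitting (mu, K, alpha, c, p) by maximum likelihood and then transforming time by the integrated intensity is what reduces the clustered sequence to a Poisson process, to which standard change-point tests can be applied.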
A probabilistic seismic hazard model based on cellular automata and information theory
NASA Astrophysics Data System (ADS)
Jiménez, A.; Posadas, A. M.; Marfil, J. M.
2005-03-01
We aim to obtain a spatio-temporal model of earthquake occurrence based on Information Theory and cellular automata (CA). CA supply useful models for many investigations in the natural sciences; here, they have been used to establish temporal relations between the seismic events occurring in neighbouring parts of the crust. The catalogue used is divided into time intervals, and the region into cells, which are declared active or inactive by means of a certain energy release criterion (four criteria have been tested). This gives a pattern of active and inactive cells which evolves over time. A stochastic CA is constructed from the patterns to simulate their spatio-temporal evolution. The interaction between the cells is represented by the neighbourhood (2-D and 3-D models have been tried). The best model is chosen by maximizing the mutual information between the past and the future states. Finally, a probabilistic seismic hazard map is drawn up for the different energy releases. The method has been applied to the Iberian Peninsula catalogue from 1970 to 2001. In 2-D, the best neighbourhood was the Moore neighbourhood of radius 1; the 3-D von Neumann neighbourhood also gives hazard maps and takes into account the depth of the events. The Gutenberg-Richter law and a Hurst analysis have been applied to the data as a test of the catalogue. Our results are consistent with previous studies both of seismic hazard and of stress conditions in the zone, and with the seismicity that occurred after 2001.
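Selecting the neighbourhood by maximizing the mutual information between past and future states reduces to estimating I(X;Y) from joint samples, where x encodes a cell's present neighbourhood pattern and y its state at the next time step. A plug-in estimator sketch (the pattern encoding itself is left abstract):

```python
from collections import Counter
import math

def mutual_information(pairs):
    """Plug-in estimate of I(X;Y) in bits from a list of joint samples
    (x, y), using empirical joint and marginal frequencies."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    mi = 0.0
    for (x, y), c in joint.items():
        # p(x,y) / (p(x) p(y)) = c * n / (px[x] * py[y])
        mi += (c / n) * math.log2(c * n / (px[x] * py[y]))
    return mi
```

A neighbourhood whose patterns perfectly determine the next state yields the full state entropy; an uninformative one yields zero bits.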
Seismic hazard assessment of Sub-Saharan Africa using geodetic strain rate models
NASA Astrophysics Data System (ADS)
Poggi, Valerio; Pagani, Marco; Weatherill, Graeme; Garcia, Julio; Durrheim, Raymond J.; Mavonga Tuluka, Georges
2016-04-01
The East African Rift System (EARS) is the major active tectonic feature of the Sub-Saharan Africa (SSA) region. Although the seismicity level of such a divergent plate boundary can be described as moderate, several earthquakes have been reported in historical times causing a non-negligible level of damage, albeit mostly due to the high vulnerability of the local buildings and structures. Formulation and enforcement of national seismic codes is therefore an essential future risk mitigation strategy. Nonetheless, a reliable risk assessment cannot be done without the calibration of an updated seismic hazard model for the region. Unfortunately, the major issue in assessing seismic hazard in Sub-Saharan Africa is the lack of basic information needed to construct source and ground motion models. The historical earthquake record is largely incomplete, while the instrumental catalogue is complete down to sufficient magnitude only for a relatively short time span. In addition, mapping of seismogenically active faults is still an ongoing program. Recent studies have identified major seismogenic lineaments, but there is a substantial lack of kinematic information for intermediate-to-small-scale tectonic features, information that is essential for the proper calibration of earthquake recurrence models. To compensate for this lack of information, we experiment with the use of a strain rate model recently developed by Stamps et al. (2015), in the framework of an earthquake hazard and risk project along the EARS supported by USAID and jointly carried out by GEM and AfricaArray. We use the inferred geodetic strain rates to derive estimates of total scalar moment release, subsequently used to constrain earthquake recurrence relationships for both area (distributed seismicity) and fault source models. The rates obtained indirectly from strain rates and those more classically derived from the available seismic catalogues are then compared and combined into a unique mixed earthquake recurrence model
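The chain from geodetic strain rate to earthquake recurrence can be sketched as a Kostrov-type moment-rate conversion followed by a truncated Gutenberg-Richter distribution whose moment release balances the geodetic budget. Seismogenic thickness, rigidity, b-value, and magnitude bounds below are illustrative:

```python
import math

def moment_rate_from_strain(strain_rate_per_yr, area_m2,
                            seis_thickness_m=15e3, shear_mod=3e10):
    """Kostrov-type scalar moment rate (N*m/yr): Mdot = 2 * mu * H * A * eps_dot."""
    return 2.0 * shear_mod * seis_thickness_m * area_m2 * strain_rate_per_yr

def seismic_moment(mw):
    """Scalar moment (N*m) from moment magnitude: M0 = 10**(1.5*Mw + 9.05)."""
    return 10.0 ** (1.5 * mw + 9.05)

def rate_above(mdot, m_min=5.0, m_max=7.0, b=1.0):
    """Annual rate of events with Mw >= m_min for a doubly truncated
    Gutenberg-Richter magnitude distribution whose mean moment per event,
    times the rate, balances the geodetic moment rate `mdot`."""
    beta = b * math.log(10.0)
    c = 1.5 * math.log(10.0)
    delta = m_max - m_min
    norm = 1.0 - math.exp(-beta * delta)
    mean_m0 = (beta * seismic_moment(m_min) / norm) \
        * (math.exp((c - beta) * delta) - 1.0) / (c - beta)
    return mdot / mean_m0
```

The resulting rate scales linearly with the moment budget, so uncertainty in seismogenic thickness or the seismic/aseismic partition maps directly onto the hazard estimate.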
Using the Averaging-Based Factorization to Assess CyberShake Hazard Models
NASA Astrophysics Data System (ADS)
Wang, F.; Jordan, T. H.; Callaghan, S.; Graves, R. W.; Olsen, K. B.; Maechling, P. J.
2013-12-01
The CyberShake project of the Southern California Earthquake Center (SCEC) combines stochastic models of finite-fault ruptures with 3D ground motion simulations to compute seismic hazards at low frequencies (< 0.5 Hz) in Southern California. The first CyberShake hazard model (Graves et al., 2011) was based on the Graves & Pitarka (2004) rupture model (GP-04) and the Kohler et al. (2004) community velocity model (CVM-S). We have recently extended the CyberShake calculations to include the Graves & Pitarka (2010) rupture model (GP-10), which substantially increases the rupture complexity relative to GP-04, and the Shaw et al. (2011) community velocity model (CVM-H), which features different sedimentary basin structures than CVM-S. Here we apply the averaging-based factorization (ABF) technique of Wang & Jordan (2013) to compare CyberShake models and assess their consistency with the hazards predicted by the Next Generation Attenuation (NGA) models (Power et al., 2008). ABF uses a hierarchical averaging scheme to separate the shaking intensities for large ensembles of earthquakes into relative (dimensionless) excitation fields representing site, path, directivity, and source-complexity effects, and it provides quantitative, map-based comparisons between models with completely different formulations. The CyberShake directivity effects are generally larger than predicted by the Spudich & Chiou (2008) NGA directivity factor, but those calculated from the GP-10 sources are smaller than those of GP-04, owing to the greater incoherence of the wavefields from the more complex rupture models. Substituting GP-10 for GP-04 reduces the CyberShake-NGA directivity-effect discrepancy by a factor of two, from +36% to +18%. The CyberShake basin effects are generally larger than those from the three NGA models that provide basin-effect factors. However, the basin excitations calculated from CVM-H are smaller than from CVM-S, and they show a stronger frequency dependence, primarily because
A new approach for deriving Flood hazard maps from SAR data and global hydrodynamic models
NASA Astrophysics Data System (ADS)
Matgen, P.; Hostache, R.; Chini, M.; Giustarini, L.; Pappenberger, F.; Bally, P.
2014-12-01
With flood consequences likely to amplify because of the growing population and ongoing accumulation of assets in flood-prone areas, global flood hazard and risk maps are needed to improve flood preparedness at large scale. At the same time, with the rapidly growing archives of SAR images of floods, there is high potential in making use of these images for global and regional flood management. In this framework, an original method that integrates global flood inundation modeling and microwave remote sensing is presented. It takes advantage of the combination of the time and space continuity of a global inundation model with the high spatial resolution of satellite observations. The availability of model simulations over a long time period offers opportunities for estimating flood non-exceedance probabilities in a robust way. These probabilities can be attributed to historical satellite observations. Time series of SAR-derived flood extent maps and their associated non-exceedance probabilities can then be combined to generate flood hazard maps with a spatial resolution equal to that of the satellite images, which is usually higher than that of a global inundation model. In principle, this can be done for any area of interest in the world, provided that a sufficient number of relevant remote sensing images are available. As a test case we applied the method to the Severn River (UK) and the Zambezi River (Mozambique), where large archives of Envisat flood images can be exploited. The global ECMWF flood inundation model is considered for computing the statistics of extreme events. A comparison with flood hazard maps estimated with in situ measured discharge is carried out. The first results confirm the potential of the method. However, further developments on two aspects are required to improve the quality of the hazard map and to ensure the acceptability of the product by potential end-user organizations. On the one hand, it is of paramount importance to
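In its simplest form, attributing a non-exceedance probability to each historical satellite acquisition date reduces to an empirical plotting position over the long model-simulated series. A sketch (the Weibull i/(n+1) position is our choice, not necessarily the authors'):

```python
def non_exceedance(series):
    """Empirical non-exceedance probability for each value of a simulated
    flood-magnitude series, using the Weibull plotting position i/(n+1)
    (i = ascending rank, n = series length)."""
    n = len(series)
    order = sorted(range(n), key=lambda i: series[i])
    prob = [0.0] * n
    for rank, i in enumerate(order, start=1):
        prob[i] = rank / (n + 1)
    return prob
```

Each SAR flood map then inherits the probability of its acquisition date's simulated flood magnitude, and per-pixel aggregation over the image archive yields the hazard map at SAR resolution.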
Interpretation of laser/multi-sensor data for short range terrain modeling and hazard detection
NASA Technical Reports Server (NTRS)
Messing, B. S.
1980-01-01
A terrain modeling algorithm that would reconstruct the sensed ground images formed by the triangulation scheme, and classify as unsafe any terrain feature that would pose a hazard to a roving vehicle is described. This modeler greatly reduces quantization errors inherent in a laser/sensing system through the use of a thinning algorithm. Dual filters are employed to separate terrain steps from the general landscape, simplifying the analysis of terrain features. A crosspath analysis is utilized to detect and avoid obstacles that would adversely affect the roll of the vehicle. Computer simulations of the rover on various terrains examine the performance of the modeler.
A seismic source zone model for the seismic hazard assessment of Slovakia
NASA Astrophysics Data System (ADS)
Hók, Jozef; Kysel, Robert; Kováč, Michal; Moczo, Peter; Kristek, Jozef; Kristeková, Miriam; Šujan, Martin
2016-06-01
We present a new seismic source zone model for the seismic hazard assessment of Slovakia based on a new seismotectonic model of the territory of Slovakia and adjacent areas. The seismotectonic model has been developed using a new Slovak earthquake catalogue (SLOVEC 2011), successive division of the large-scale geological structures into tectonic regions, seismogeological domains and seismogenic structures. The main criteria for definitions of regions, domains and structures are the age of the last tectonic consolidation of geological structures, thickness of lithosphere, thickness of crust, geothermal conditions, current tectonic regime and seismic activity. The seismic source zones are presented on a 1:1,000,000 scale map.
NASA Astrophysics Data System (ADS)
Dietterich, H. R.; Lev, E.; Chen, J.; Cashman, K. V.; Honor, C.
2015-12-01
Recent eruptions in Hawai'i, Iceland, and Cape Verde highlight the need for improved lava flow models for forecasting and hazard assessment. Existing models used for lava flow simulation range in assumptions, complexity, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess the capabilities of existing models and test the development of new codes, we conduct a benchmarking study of computational fluid dynamics models for lava flows, including VolcFlow, OpenFOAM, Flow3D, and COMSOL. Using new benchmark scenarios defined in Cordonnier et al. (2015) as a guide, we model Newtonian, Herschel-Bulkley and cooling flows over inclined planes, obstacles, and digital elevation models with a wide range of source conditions. Results are compared to analytical theory, analogue and molten basalt experiments, and measurements from natural lava flows. Our study highlights the strengths and weaknesses of each code, including accuracy and computational costs, and provides insights regarding code selection. We apply the best-fit codes to simulate the lava flows in Harrat Rahat, a predominantly mafic volcanic field in Saudi Arabia. Input parameters are assembled from rheology and volume measurements of past flows using geochemistry, crystallinity, and present-day lidar and photogrammetric digital elevation models. With these data, we use our verified models to reconstruct historic and prehistoric events, in order to assess the hazards posed by lava flows for Harrat Rahat.
Fuzzy multi-objective chance-constrained programming model for hazardous materials transportation
NASA Astrophysics Data System (ADS)
Du, Jiaoman; Yu, Lean; Li, Xiang
2016-04-01
Hazardous materials transportation is an important and pressing issue of public safety. Based on the shortest path model, this paper presents a fuzzy multi-objective programming model that minimizes the transportation risk to life, travel time and fuel consumption. First, we present the risk model, travel time model and fuel consumption model. Furthermore, we formulate a chance-constrained programming model within the framework of credibility theory, in which the lengths of arcs in the transportation network are assumed to be fuzzy variables. A hybrid intelligent algorithm integrating fuzzy simulation and a genetic algorithm is designed to find a satisfactory solution. Finally, numerical examples are given to demonstrate the efficiency of the proposed model and algorithm.
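The multi-objective routing idea above can be illustrated with a much simpler, crisp surrogate: scalarize each arc's (risk, time, fuel) costs with fixed weights and run Dijkstra's algorithm. This is only a sketch of the underlying shortest-path core; the paper's actual model uses fuzzy arc lengths, credibility-based chance constraints, and a genetic algorithm, none of which are reproduced here. The graph structure and weights are invented for illustration.

```python
import heapq

def safest_route(graph, src, dst, w_risk=0.6, w_time=0.3, w_fuel=0.1):
    """Weighted-sum surrogate for multi-objective hazmat routing.
    graph: {node: {neighbor: (risk, time, fuel)}}; returns (path, cost)."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, (risk, time, fuel) in graph.get(u, {}).items():
            cost = w_risk * risk + w_time * time + w_fuel * fuel
            if d + cost < dist.get(v, float("inf")):
                dist[v] = d + cost
                prev[v] = u
                heapq.heappush(pq, (dist[v], v))
    # Reconstruct the path by walking predecessors back from dst
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

# Toy network: the direct-looking route through C carries high risk
g = {'A': {'B': (1, 1, 1), 'C': (10, 1, 1)},
     'B': {'D': (1, 1, 1)},
     'C': {'D': (1, 1, 1)}}
path, cost = safest_route(g, 'A', 'D')
```

A weighted sum is the simplest scalarization; it cannot reach non-convex parts of the Pareto front, which is one reason the paper resorts to a population-based search instead.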
Global Hydrological Hazard Evaluation System (Global BTOP) Using Distributed Hydrological Model
NASA Astrophysics Data System (ADS)
Gusyev, M.; Magome, J.; Hasegawa, A.; Takeuchi, K.
2015-12-01
A global hydrological hazard evaluation system based on the BTOP models (Global BTOP) is introduced and quantifies flood and drought hazards with simulated river discharges globally for historical, near real-time monitoring and climate change impact studies. The BTOP model utilizes a modified topographic index concept and simulates rainfall-runoff processes including snowmelt, overland flow, soil moisture in the root and unsaturated zones, sub-surface flow, and river flow routing. The current global BTOP is constructed from global data on a 10-min grid and is available to conduct river basin analysis on local, regional, and global scales. To reduce the impact of a coarse resolution, topographical features of global BTOP were obtained using a river network upscaling algorithm that preserves fine-resolution characteristics of the 3-arcsec HydroSHEDS and 30-arcsec Hydro1K datasets. In addition, GLCC-IGBP land cover (USGS) and the DSMW (FAO) were used for the root zone depth and soil properties, respectively. The long-term seasonal potential evapotranspiration within the BTOP model was estimated by the Shuttleworth-Wallace model using climate forcing data CRU TS3.1 and GIMMS-NDVI (UMD/GLCF). The global BTOP was run with globally available precipitation such as the APHRODITE dataset and showed good statistical performance compared to global and local river discharge data in the major river basins. From these simulated daily river discharges at each grid cell, the flood peak discharges of selected return periods were obtained using the Gumbel distribution with L-moments, and the hydrological drought hazard was quantified using the standardized runoff index (SRI). For the dynamic (near real-time) applications, the global BTOP model is run with GSMaP-NRT global precipitation and simulated daily river discharges are utilized in a prototype near-real time discharge simulation system (GFAS-Streamflow), which is used to issue flood peak discharge alerts globally. The global BTOP system and GFAS
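The Gumbel/L-moments step mentioned above is a standard flood-frequency calculation that can be sketched compactly. Under the Gumbel distribution, the L-moment estimators are alpha = lambda2 / ln 2 and xi = lambda1 - gamma * alpha (gamma is the Euler-Mascheroni constant), and the T-year quantile is xi - alpha * ln(-ln(1 - 1/T)). The discharge values below are invented; this is a minimal illustration, not the Global BTOP implementation.

```python
import math

def sample_l_moments(x):
    """First two sample L-moments of a data series (unbiased estimators)."""
    xs = sorted(x)
    n = len(xs)
    b0 = sum(xs) / n
    b1 = sum((i / (n - 1)) * xs[i] for i in range(n)) / n
    return b0, 2 * b1 - b0  # lambda1, lambda2

def gumbel_quantile(annual_maxima, return_period):
    """Flood peak for a return period, Gumbel distribution fit by L-moments."""
    lam1, lam2 = sample_l_moments(annual_maxima)
    alpha = lam2 / math.log(2)           # scale parameter
    xi = lam1 - 0.5772156649 * alpha     # location (Euler-Mascheroni constant)
    p = 1.0 - 1.0 / return_period        # annual non-exceedance probability
    return xi - alpha * math.log(-math.log(p))

# Hypothetical annual maximum discharges (m^3/s) at one grid cell
q = [820, 1150, 960, 1340, 700, 1020, 1480, 890, 1210, 1100]
q100 = gumbel_quantile(q, 100)  # 100-year flood peak estimate
```

L-moments are preferred over ordinary moments here because they are far less sensitive to outliers in short simulated discharge records.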
Proportional counter radiation camera
Borkowski, C.J.; Kopp, M.K.
1974-01-15
A gas-filled proportional counter camera that images photon emitting sources is described. A two-dimensional, position-sensitive proportional multiwire counter is provided as the detector. The counter consists of a high-voltage anode screen sandwiched between orthogonally disposed planar arrays of multiple parallel strung, resistively coupled cathode wires. Two terminals from each of the cathode arrays are connected to separate timing circuitry to obtain separate X and Y coordinate signal values from pulse shape measurements to define the position of an event within the counter arrays which may be recorded by various means for data display. The counter is further provided with a linear drift field which effectively enlarges the active gas volume of the counter and constrains the recoil electrons produced from ionizing radiation entering the counter to drift perpendicularly toward the planar detection arrays. A collimator is interposed between a subject to be imaged and the counter to transmit only the radiation from the subject which has a perpendicular trajectory with respect to the planar cathode arrays of the detector. (Official Gazette)
CyberShake: A Physics-Based Seismic Hazard Model for Southern California
Graves, R.; Jordan, T.H.; Callaghan, S.; Deelman, E.; Field, E.; Juve, G.; Kesselman, C.; Maechling, P.; Mehta, G.; Milner, K.; Okaya, D.; Small, P.; Vahi, K.
2011-01-01
CyberShake, as part of the Southern California Earthquake Center's (SCEC) Community Modeling Environment, is developing a methodology that explicitly incorporates deterministic source and wave propagation effects within seismic hazard calculations through the use of physics-based 3D ground motion simulations. To calculate a waveform-based seismic hazard estimate for a site of interest, we begin with Uniform California Earthquake Rupture Forecast, Version 2.0 (UCERF2.0) and identify all ruptures within 200 km of the site of interest. We convert the UCERF2.0 rupture definition into multiple rupture variations with differing hypocenter locations and slip distributions, resulting in about 415,000 rupture variations per site. Strain Green Tensors are calculated for the site of interest using the SCEC Community Velocity Model, Version 4 (CVM4), and then, using reciprocity, we calculate synthetic seismograms for each rupture variation. Peak intensity measures are then extracted from these synthetics and combined with the original rupture probabilities to produce probabilistic seismic hazard curves for the site. Being explicitly site-based, CyberShake directly samples the ground motion variability at that site over many earthquake cycles (i.e., rupture scenarios) and alleviates the need for the ergodic assumption that is implicitly included in traditional empirically based calculations. Thus far, we have simulated ruptures at over 200 sites in the Los Angeles region for ground shaking periods of 2 s and longer, providing the basis for the first generation CyberShake hazard maps. Our results indicate that the combination of rupture directivity and basin response effects can lead to an increase in the hazard level for some sites, relative to that given by a conventional Ground Motion Prediction Equation (GMPE). Additionally, and perhaps more importantly, we find that the physics-based hazard results are much more sensitive to the assumed magnitude-area relations and
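The final combination step, turning per-rupture rates and simulated peak intensities into a hazard curve, reduces to summing exceedance rates and applying a Poisson occurrence model. The sketch below assumes a hypothetical list of (annual rate, intensity measure) pairs; the rates and intensities are invented and the CyberShake pipeline of course computes these from full waveform simulations.

```python
import math

def hazard_curve(ruptures, levels, years=50):
    """Probability of exceeding each shaking level within `years`.
    ruptures: list of (annual_rate, peak_intensity) pairs for one site."""
    curve = []
    for a in levels:
        # Total annual rate of events whose simulated intensity exceeds `a`
        rate = sum(r for r, im in ruptures if im > a)
        # Poisson occurrence: P(at least one exceedance in `years`)
        curve.append((a, 1.0 - math.exp(-rate * years)))
    return curve

# Toy rupture set: (annual rate, spectral acceleration in g at the site)
rups = [(0.01, 0.10), (0.005, 0.25), (0.002, 0.40), (0.0005, 0.80)]
curve = hazard_curve(rups, levels=[0.05, 0.2, 0.5])
```

Plotting exceedance probability against intensity level gives the familiar monotonically decreasing hazard curve from which design values (e.g., 2% in 50 years) are read off.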
NASA Astrophysics Data System (ADS)
Sampson, Christopher; Smith, Andrew; Bates, Paul; Neal, Jeffrey; Trigg, Mark
2015-12-01
Global flood hazard models have recently become a reality thanks to the release of open access global digital elevation models, the development of simplified and highly efficient flow algorithms, and the steady increase in computational power. In this commentary we argue that although the availability of open access global terrain data has been critical in enabling the development of such models, the relatively poor resolution and precision of these data now limit significantly our ability to estimate flood inundation and risk for the majority of the planet's surface. The difficulty of deriving an accurate 'bare-earth' terrain model due to the interaction of vegetation and urban structures with the satellite-based remote sensors means that global terrain data are often poorest in the areas where people, property (and thus vulnerability) are most concentrated. Furthermore, the current generation of open access global terrain models are over a decade old and many large floodplains, particularly those in developing countries, have undergone significant change in this time. There is therefore a pressing need for a new generation of high resolution and high vertical precision open access global digital elevation models to allow significantly improved global flood hazard models to be developed.
Beyond Flood Hazard Maps: Detailed Flood Characterization with Remote Sensing, GIS and 2d Modelling
NASA Astrophysics Data System (ADS)
Santillan, J. R.; Marqueso, J. T.; Makinano-Santillan, M.; Serviano, J. L.
2016-09-01
Flooding is considered one of the most destructive natural disasters, so understanding floods and assessing the risks associated with them are becoming more important nowadays. In the Philippines, Remote Sensing (RS) and Geographic Information Systems (GIS) are the two main technologies used in the nationwide modelling and mapping of flood hazards. Although the currently available high resolution flood hazard maps have become very valuable, their use for flood preparedness and mitigation can be maximized by enhancing the layers of information these maps portray. In this paper, we present an approach based on RS, GIS and two-dimensional (2D) flood modelling to generate new flood layers (in addition to the usual flood depth and hazard layers) that are also very useful in flood disaster management, such as flood arrival times, flood velocities, flood duration, flood recession times, and the percentage of a given flood event period during which a particular location is inundated. The availability of these new layers of flood information is crucial for better decision making before, during, and after the occurrence of a flood disaster. The generation of these new flood characteristic layers is illustrated using the Cabadbaran River Basin in Mindanao, Philippines as the case study area. It is envisioned that these detailed maps can be considered as additional inputs in flood disaster risk reduction and management in the Philippines.
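Deriving layers such as arrival time, recession time, duration, and percent-inundated from a 2D model is essentially post-processing of the stack of simulated depth grids. A minimal sketch, assuming a (time, row, col) array of depths and a simple wet/dry threshold (the 0.05 m threshold and array shapes are illustrative choices, not from the paper):

```python
import numpy as np

def flood_characteristics(depths, times, threshold=0.05):
    """Per-cell flood layers from a stack of simulated depth grids.
    depths: (T, ny, nx) water depth in m; times: (T,) hours, uniform step."""
    wet = depths > threshold                      # flooded mask per time step
    ever = wet.any(axis=0)                        # cells flooded at any time
    first = np.argmax(wet, axis=0)                # index of first wet step
    last = wet.shape[0] - 1 - np.argmax(wet[::-1], axis=0)  # last wet step
    arrival = np.where(ever, times[first], np.nan)
    recession = np.where(ever, times[last], np.nan)
    dt = times[1] - times[0]
    duration = wet.sum(axis=0) * dt               # total hours inundated
    pct_inundated = 100.0 * wet.sum(axis=0) / wet.shape[0]
    return arrival, recession, duration, pct_inundated

# Toy example: 4 time steps, 1x2 grid; only the first cell floods
depths = np.zeros((4, 1, 2))
depths[1, 0, 0] = 0.1
depths[2, 0, 0] = 0.2
times = np.array([0.0, 1.0, 2.0, 3.0])
arrival, recession, duration, pct = flood_characteristics(depths, times)
```

Each returned array can be written out as a raster layer alongside the usual maximum-depth and hazard grids.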
NASA Astrophysics Data System (ADS)
Wu, Qing
Millions of people across the world suffer from noise-induced hearing loss (NIHL), especially under working conditions of either continuous Gaussian or non-Gaussian noise that can impair hearing function. Impulse noise is a typical non-Gaussian noise exposure in military and industrial settings, and can cause severe hearing loss. This study focuses on the characterization of impulse noise using digital signal analysis methods and on the prediction of the auditory hazard of impulse noise induced hearing loss by the Auditory Hazard Assessment Algorithm for Humans (AHAAH) model. A digital noise exposure system has been developed to produce impulse noises with peak sound pressure level (SPL) up to 160 dB. The characterization of impulse noise generated by the system has been investigated and analyzed in both the time and frequency domains. Furthermore, the effects of key parameters of impulse noise on the auditory risk unit (ARU) are investigated using both simulated and experimentally measured impulse noise signals in the AHAAH model. The results showed that the ARUs increased monotonically as the peak pressure (both P+ and P-) increased. With increasing time duration, the ARUs first increased and then decreased, and the peak of the ARUs appeared at about t = 0.2 ms (for both t+ and t-). In addition, the auditory hazard of experimentally measured impulse noise signals demonstrated a monotonically increasing relationship between ARUs and system voltages.
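The peak SPL figure quoted above follows directly from the standard definition SPL = 20 log10(p_peak / p_ref) with p_ref = 20 µPa; a peak pressure of 2000 Pa corresponds to exactly 160 dB. A minimal helper (the waveform values below are invented, and this is not the AHAAH model itself):

```python
import math

def peak_spl(pressure_pa, p_ref=20e-6):
    """Peak sound pressure level (dB re 20 uPa) of an impulse waveform.
    pressure_pa: iterable of instantaneous pressures in pascals."""
    p_peak = max(abs(p) for p in pressure_pa)   # uses |P+| or |P-|, whichever is larger
    return 20.0 * math.log10(p_peak / p_ref)

# A 2000 Pa positive peak corresponds to a 160 dB impulse
spl = peak_spl([0.0, 2000.0, -500.0, 10.0])
```

Note that peak SPL alone does not determine auditory risk; as the abstract reports, duration and waveform shape also drive the ARU, which is why a physiological model such as AHAAH is used rather than a single-number metric.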
NASA Astrophysics Data System (ADS)
Yazdani, Azad; Nicknam, Ahmad; Dadras, Ehsan Yousefi; Eftekhari, Seyed Nasrollah
2017-01-01
Ground motions are affected by directivity effects in near-fault regions, which result in low-frequency pulse cycles at the beginning of the velocity time history. The directivity features of near-fault ground motions can lead to a significant increase in the risk of earthquake-induced damage to engineering structures. Ordinary probabilistic seismic hazard analysis (PSHA) does not take such effects into account; recent studies have thus proposed new frameworks to incorporate directivity effects in PSHA. The objective of this study is to develop seismic hazard maps of Tehran City according to a near-fault PSHA procedure for different return periods. To this end, the directivity models required in the modified PSHA were developed based on a database of simulated ground motions. The simulated database was used in this study because there are no recorded near-fault data in the region from which to derive purely empirically based pulse prediction models. The results show that the directivity effects can significantly affect the estimate of regional seismic hazard.
Assessing rainfall triggered landslide hazards through physically based models under uncertainty
NASA Astrophysics Data System (ADS)
Balin, D.; Metzger, R.; Fallot, J. M.; Reynard, E.
2009-04-01
Hazard and risk assessment require, besides good data, good simulation capabilities to allow prediction of events and their consequences. The present study introduces a landslide hazard assessment strategy based on the coupling of hydrological physically based models with slope stability models that should be able to cope with uncertainty in input data and model parameters. The hydrological model used is based on the Water balance Simulation Model, WASIM-ETH (Schulla et al., 1997), a fully distributed hydrological model that has previously been used successfully in alpine regions to simulate runoff, snowmelt, glacier melt, and soil erosion, as well as the impact of climate change on these. The study region is the Vallon de Nant catchment (10 km2) in the Swiss Alps. A sensitivity analysis will be conducted in order to choose the discretization threshold, derived from a laser DEM, at which the hydrological model yields the best compromise between performance and computation time. The hydrological model will be further coupled with slope stability methods (which use the topographic index and the soil moisture derived from the hydrological model) to simulate the spatial distribution of the initiation areas of different geomorphic processes such as debris flows and rainfall-triggered landslides. To calibrate the WASIM-ETH model, a Bayesian Markov chain Monte Carlo approach is preferred (Balin, 2004, Schaefli et al., 2006). The model is used in a single- and a multi-objective framework to simulate discharge and soil moisture with uncertainty at representative locations. This information is further used to assess the potential initiation areas for rainfall-triggered landslides and to study the impact of uncertain input data, model parameters and simulated responses (discharge and soil moisture) on the modelling of geomorphological processes.
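The topographic index mentioned above is the TOPMODEL-style wetness index ln(a / tan(beta)), where a is the specific catchment area (upslope contributing area per unit contour length) and beta the local slope; high values flag convergent, gently sloping cells where saturation, and hence landslide initiation, is more likely. A minimal per-cell sketch (the numbers are illustrative; the study computes this over a full DEM):

```python
import math

def topographic_index(upslope_area_m2, contour_len_m, slope_rad):
    """Topographic wetness index ln(a / tan(beta)) for one grid cell.
    upslope_area_m2: contributing area; contour_len_m: cell contour width;
    slope_rad: local slope angle in radians (must be > 0)."""
    a = upslope_area_m2 / contour_len_m     # specific catchment area (m)
    return math.log(a / math.tan(slope_rad))

# 1000 m^2 draining through a 10 m contour on a 45-degree slope
ti = topographic_index(1000.0, 10.0, math.atan(1.0))
```

Coupling this static index with the dynamic soil moisture from WASIM-ETH is what lets the stability screening respond to individual rainfall events rather than topography alone.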
Detailed Flood Modeling and Hazard Assessment from Storm Tides, Rainfall and Sea Level Rise
NASA Astrophysics Data System (ADS)
Orton, P. M.; Hall, T. M.; Georgas, N.; Conticello, F.; Cioffi, F.; Lall, U.; Vinogradov, S. V.; Blumberg, A. F.
2014-12-01
A flood hazard assessment has been conducted for the Hudson River from New York City to Troy at the head of tide, using a three-dimensional hydrodynamic model and merging hydrologic inputs and storm tides from tropical and extra-tropical cyclones, as well as spring freshet floods. Our recent work showed that neglecting freshwater flows leads to underestimation of peak water levels at up-river sites and neglecting stratification (typical with two-dimensional modeling) leads to underestimation all along the Hudson. The hazard assessment framework utilizes a representative climatology of over 1000 synthetic tropical cyclones (TCs) derived from a statistical-stochastic TC model, and historical extra-tropical cyclones and freshets from 1950 to the present. Hydrodynamic modeling is applied with seasonal variations in mean sea level and ocean and estuary stratification. The model is the Stevens ECOM model and is separately used for operational ocean forecasts on the NYHOPS domain (http://stevens.edu/NYHOPS). For the synthetic TCs, an Artificial Neural Network/Bayesian multivariate approach is used for rainfall-driven freshwater inputs to the Hudson, translating the TC attributes (e.g. track, SST, wind speed) directly into tributary stream flows (see separate presentation by Cioffi for details). Rainfall intensity has been rising in recent decades in this region, and here we will also examine the sensitivity of Hudson flooding to future climate warming-driven increases in storm precipitation. The hazard assessment is being repeated for several values of sea level, as projected for future decades by the New York City Panel on Climate Change. Recent studies have given widely varying estimates of the present-day 100-year flood at New York City, from 2.0 m to 3.5 m, and special emphasis will be placed on quantifying our study's uncertainty.
Schunior, A.; Zengel, A.E.; Mullenix, P.J.; Tarbell, N.J.; Howes, A.; Tassinari, M.S.
1990-10-15
Many long term survivors of childhood acute lymphoblastic leukemia have short stature, as well as craniofacial and dental abnormalities, as side effects of central nervous system prophylactic therapy. An animal model is presented to assess these adverse effects on growth. Cranial irradiation (1000 cGy) with and without prednisolone (18 mg/kg i.p.) and methotrexate (2 mg/kg i.p.) was administered to 17- and 18-day-old Sprague-Dawley male and female rats. Animals were weighed 3 times/week. Final body weight and body length were measured at 150 days of age. Femur length and craniofacial dimensions were measured directly from the bones, using calipers. For all exposed groups there was a permanent suppression of weight gain with no catch-up growth or normal adolescent growth spurt. Body length was reduced for all treated groups, as were the ratios of body weight to body length and cranial length to body length. Animals subjected to cranial irradiation exhibited microcephaly, whereas those who received a combination of radiation and chemotherapy demonstrated altered craniofacial proportions in addition to microcephaly. Changes in growth patterns and skeletal proportions exhibited sexually dimorphic characteristics. The results indicate that cranial irradiation is a major factor in the growth failure in exposed rats, but chemotherapeutic agents contribute significantly to the outcome of growth and craniofacial dimensions.
Rumynin, V.G.; Mironenko, V.A.; Konosavsky, P.K.; Pereverzeva, S.A.
1994-07-01
This paper introduces modeling approaches for predicting the influence of hazardous accidents at nuclear reactors on groundwater quality. Possible pathways for radioactive releases from nuclear power plants were considered to conceptualize boundary conditions for solving the subsurface radionuclide transport problems. Approaches to incorporate physical and chemical interactions into transport simulators have been developed. The hydrogeological forecasts were based on numerical and semi-analytical scale-dependent models. They have been applied to assess the possible impact of nuclear power plants designed in Russia on groundwater reservoirs.
Multiple Landslide-Hazard Scenarios Modeled for the Oakland-Berkeley Area, Northern California
Pike, Richard J.; Graymer, Russell W.
2008-01-01
With the exception of Los Angeles, perhaps no urban area in the United States is more at risk from landsliding, triggered by either precipitation or earthquake, than the San Francisco Bay region of northern California. By January each year, seasonal winter storms usually bring moisture levels of San Francisco Bay region hillsides to the point of saturation, after which additional heavy rainfall may induce landslides of various types and levels of severity. In addition, movement at any time along one of several active faults in the area may generate an earthquake large enough to trigger landslides. The danger to life and property rises each year as local populations continue to expand and more hillsides are graded for development of residential housing and its supporting infrastructure. The chapters in the text consist of:
* Introduction, by Russell W. Graymer
* Chapter 1, Rainfall Thresholds for Landslide Activity, San Francisco Bay Region, Northern California, by Raymond C. Wilson
* Chapter 2, Susceptibility to Deep-Seated Landsliding Modeled for the Oakland-Berkeley Area, Northern California, by Richard J. Pike and Steven Sobieszczyk
* Chapter 3, Susceptibility to Shallow Landsliding Modeled for the Oakland-Berkeley Area, Northern California, by Kevin M. Schmidt and Steven Sobieszczyk
* Chapter 4, Landslide Hazard Modeled for the Cities of Oakland, Piedmont, and Berkeley, Northern California, from a M=7.1 Scenario Earthquake on the Hayward Fault Zone, by Scott B. Miles and David K. Keefer
* Chapter 5, Synthesis of Landslide-Hazard Scenarios Modeled for the Oakland-Berkeley Area, Northern California, by Richard J. Pike
The plates consist of:
* Plate 1, Susceptibility to Deep-Seated Landsliding Modeled for the Oakland-Berkeley Area, Northern California, by Richard J. Pike, Russell W. Graymer, Sebastian Roberts, Naomi B. Kalman, and Steven Sobieszczyk
* Plate 2, Susceptibility to Shallow Landsliding Modeled for the Oakland-Berkeley Area, Northern California, by Kevin M. Schmidt and Steven
Nowcast model for hazardous material spill prevention and response, San Francisco Bay, California
Cheng, Ralph T.; Wilmot, Wayne L.; Galt, Jerry A.
1997-01-01
The National Oceanic and Atmospheric Administration (NOAA) installed the Physical Oceanographic Real-time System (PORTS) in San Francisco Bay, California, to provide real-time observations of tides, tidal currents, and meteorological conditions to, among other purposes, guide hazardous material spill prevention and response. The emerging technologies used in PORTS for real-time data collection, integrated with nowcast modeling techniques and with dissemination of the real-time data and nowcasting results through the Internet on the World Wide Web, form a nowcast modeling system. Users can download tide and tidal current distributions in San Francisco Bay for their specific applications and/or for further analysis.
NASA Technical Reports Server (NTRS)
Dumbauld, R. K.; Bjorklund, J. R.; Bowers, J. F.
1973-01-01
The NASA/MSFC multilayer diffusion models are described, which are used in applying meteorological information to the estimation of toxic fuel hazards resulting from the launch of rocket vehicles and from accidental cold spills and leaks of toxic fuels. Background information, definitions of terms, and a description of the multilayer concept are presented, along with formulas for determining the buoyant rise of hot exhaust clouds or plumes from conflagrations, and descriptions of the multilayer diffusion models. A brief description of the computer program is given, and sample problems and their solutions are included. Derivations of the cloud rise formulas, user instructions, and computer program output lists are also included.
The microstrip proportional counter
NASA Technical Reports Server (NTRS)
Ramsey, B. D.
1992-01-01
Microstrip detectors, in which the usual discrete anode and cathode wires are replaced by conducting strips on an insulating or partially insulating substrate, are fabricated using integrated circuit-type photolithographic techniques and hence offer very high spatial accuracy and uniformity, together with the capability of producing extremely fine electrode structures. Microstrip proportional counters have been reported to have an energy resolution of better than 11 percent FWHM at 5.9 keV. They have been fabricated with anode bars down to 2 microns and on a variety of substrate materials, including thin films which can be molded to different shapes. This review will examine the development of the microstrip detector, with emphasis on the qualities which make this detector particularly interesting for use in astronomy.
Gated strip proportional detector
Morris, Christopher L.; Idzorek, George C.; Atencio, Leroy G.
1987-01-01
A gated strip proportional detector includes a gas tight chamber which encloses a solid ground plane, a wire anode plane, a wire gating plane, and a multiconductor cathode plane. The anode plane amplifies the amount of charge deposited in the chamber by a factor of up to 10^6. The gating plane allows only charge within a narrow strip to reach the cathode. The cathode plane collects the charge allowed to pass through the gating plane on a set of conductors perpendicular to the open-gated region. By scanning the open-gated region across the chamber and reading out the charge collected on the cathode conductors after a suitable integration time for each location of the gate, a two-dimensional image of the intensity of the ionizing radiation incident on the detector can be made.
Gated strip proportional detector
Morris, C.L.; Idzorek, G.C.; Atencio, L.G.
1985-02-19
A gated strip proportional detector includes a gas tight chamber which encloses a solid ground plane, a wire anode plane, a wire gating plane, and a multiconductor cathode plane. The anode plane amplifies the amount of charge deposited in the chamber by a factor of up to 10^6. The gating plane allows only charge within a narrow strip to reach the cathode. The cathode plane collects the charge allowed to pass through the gating plane on a set of conductors perpendicular to the open-gated region. By scanning the open-gated region across the chamber and reading out the charge collected on the cathode conductors after a suitable integration time for each location of the gate, a two-dimensional image of the intensity of the ionizing radiation incident on the detector can be made.
Earthquake catalogs for the 2017 Central and Eastern U.S. short-term seismic hazard model
Mueller, Charles S.
2017-01-01
The U.S. Geological Survey (USGS) makes long-term seismic hazard forecasts that are used in building codes. The hazard models usually consider only natural seismicity; non-tectonic (man-made) earthquakes are excluded because they are transitory or too small. In the past decade, however, thousands of earthquakes related to underground fluid injection have occurred in the central and eastern U.S. (CEUS), and some have caused damage. In response, the USGS is now also making short-term forecasts that account for the hazard from these induced earthquakes. Seismicity statistics are analyzed to develop recurrence models, accounting for catalog completeness. In the USGS hazard modeling methodology, earthquakes are counted on a map grid, recurrence models are applied to estimate the rates of future earthquakes in each grid cell, and these rates are combined with maximum-magnitude models and ground-motion models to compute the hazard. The USGS published a forecast for the years 2016 and 2017. Here, we document the development of the seismicity catalogs for the 2017 CEUS short-term hazard model. A uniform earthquake catalog is assembled by combining and winnowing pre-existing source catalogs. The initial, final, and supporting earthquake catalogs are made available here.
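The grid-counting step described above can be sketched simply: bin catalog events above the completeness magnitude into cells, convert counts to annual rates, and scale to larger magnitudes with a Gutenberg-Richter relation (rate(>=M) scales as 10^(-b*(M - Mmin))). The catalog tuples, cell size, and b-value below are invented for illustration and this is not the USGS production code.

```python
from collections import Counter

def gridded_rates(events, cell_deg=0.1, years=10.0, m_min=3.0,
                  m_target=5.0, b=1.0):
    """Annual rate of M >= m_target per grid cell.
    events: (lon, lat, mag) tuples from a catalog `years` long, assumed
    complete above m_min."""
    counts = Counter()
    for lon, lat, mag in events:
        if mag >= m_min:
            cell = (int(lon // cell_deg), int(lat // cell_deg))
            counts[cell] += 1
    # Gutenberg-Richter extrapolation from m_min up to m_target
    scale = 10.0 ** (-b * (m_target - m_min))
    return {cell: n / years * scale for cell, n in counts.items()}

# Hypothetical mini-catalog: two events above m_min in the same cell
cat = [(-97.51, 36.02, 3.4), (-97.52, 36.03, 4.1), (-97.51, 36.02, 2.8)]
rates = gridded_rates(cat, years=10.0)
```

In the real methodology these cell rates are then convolved with maximum-magnitude and ground-motion models to produce the hazard map; catalog winnowing (declustering, duplicate removal) happens before any counting.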
NASA Astrophysics Data System (ADS)
Aronica, Giuseppe T.; Cascone, Ernesto; Randazzo, Giovanni; Biondi, Giovanni; Lanza, Stefania; Fraccarollo, Luigi; Brigandi, Giuseppina
2010-05-01
Catastrophic events periodically occur in the area of Messina (Sicily, Italy). In both October 2007 and October 2009, debris/mud flows triggered by heavy rainfall affected various towns and villages located along the Ionian coast of the town, highlighting the destructive potential of these events. The two events gave rise to severe property damage, and in the latter more than 40 people were killed. The objective of this study is to present an integrated modelling approach based on three different models, namely a hydrological model, a slope stability model and a hydraulic model, to identify potential debris flow hazard areas. A continuous semi-distributed form of the well known hydrological model IHACRES has been used to derive soil moisture conditions by simulating the infiltration process in Hortonian form. The soil is conceptually schematized with a catchment storage parameter which represents catchment wetness/soil moisture. The slope stability model allows identifying potential debris-flow sources and is based on the model SHALSTAB, which permits detection of those parts of the catchment whose stability conditions are strongly affected by pore water pressure build-up due to local rainfall and soil conductivity, and those parts of the basin that, conversely, are unconditionally stable under static loading conditions. Assuming that the solids and the interstitial fluid move downstream with the same velocity, debris flow propagation is described using a two-dimensional depth-averaged model. Based on extensive sediment sampling and morphological observations, the rheological characterization of the flowing mixture, along with erosion/deposition mechanisms, will be carefully considered in the model. Differential equations are integrated with an implicit Galerkin finite element scheme or, alternatively, finite volume methods. To illustrate this approach, the proposed methodology is applied to a debris flow occurred in the Mastroguglielmo catchment in
Doubly Robust and Efficient Estimation of Marginal Structural Models for the Hazard Function
Zheng, Wenjing; Petersen, Maya; van der Laan, Mark
2016-01-01
In social and health sciences, many research questions involve understanding the causal effect of a longitudinal treatment on mortality (or time-to-event outcomes in general). Often, treatment status may change in response to past covariates that are risk factors for mortality, and in turn, treatment status may also affect such subsequent covariates. In these situations, Marginal Structural Models (MSMs), introduced by Robins (1997), are well-established and widely used tools to account for time-varying confounding. In particular, an MSM can be used to specify the intervention-specific counterfactual hazard function, i.e., the hazard for the outcome of a subject in an ideal experiment where he/she was assigned to follow a given intervention on their treatment variables. The parameters of this hazard MSM are traditionally estimated using Inverse Probability of Treatment Weighted estimation (IPTW; van der Laan and Petersen (2007), Robins et al. (2000b), Robins (1999), Robins et al. (2008)). This estimator is easy to implement and admits Wald-type confidence intervals. However, its consistency hinges on the correct specification of the treatment allocation probabilities, and the estimates are generally sensitive to large treatment weights (especially in the presence of strong confounding), which are difficult to stabilize for dynamic treatment regimes. In this paper, we present a pooled targeted maximum likelihood estimator (TMLE; van der Laan and Rubin (2006)) for the MSM for the hazard function under longitudinal dynamic treatment regimes. The proposed estimator is semiparametric efficient and doubly robust, and hence offers bias reduction and efficiency gain over the incumbent IPTW estimator. Moreover, the substitution principle rooted in the TMLE potentially mitigates the sensitivity to large treatment weights in IPTW. We compare the performance of the proposed estimator with the IPTW and a non-targeted substitution estimator in a simulation study. PMID:27227723
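For readers unfamiliar with the IPTW estimator discussed above, the following minimal sketch computes stabilized inverse-probability-of-treatment weights for a longitudinal binary treatment; the array layout and function name are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def stabilized_iptw(a, p_num, p_den):
    """Stabilized IPTW weights for a longitudinal binary treatment (sketch).

    a     : (n, T) observed treatment indicators per subject and time point
    p_num : (n, T) numerator-model probabilities P(A_t = 1 | treatment history)
    p_den : (n, T) denominator probabilities P(A_t = 1 | history and covariates)

    The weight per subject is the product over time of the ratio of the
    likelihoods of the observed treatment under the two models.
    """
    lik_num = np.where(a == 1, p_num, 1.0 - p_num)
    lik_den = np.where(a == 1, p_den, 1.0 - p_den)
    return np.prod(lik_num / lik_den, axis=1)
```

Large weights arise when the denominator probabilities for the observed treatments are small, which is the sensitivity to strong confounding that the abstract notes.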
Farajzadeh, Manuchehr; Egbal, Mahbobeh Nik
2007-08-15
In this study, the MEDALUS model, along with GIS mapping techniques, is used to determine desertification hazard for a province of Iran. After creating a desertification database including 20 parameters, the first step consisted of preparing maps of the four indices of the MEDALUS model: climate, soil, vegetation and land use. Since these parameters have mostly been presented for the Mediterranean region in the past, the next step included the addition of other indicators such as ground water and wind erosion. All of the layers, weighted by the environmental conditions present in the area, were then combined (following the same MEDALUS framework) to prepare a desertification map. The comparison of the two maps based on the original and modified MEDALUS models indicates that the addition of more regionally specific parameters into the model allows for a more accurate representation of desertification processes across the Iyzad Khast plain. The major factors affecting desertification in the area are climate, wind erosion, poor land-quality management, vegetation degradation and the salinization of soil and water resources.
Predicting the Survival Time for Bladder Cancer Using an Additive Hazards Model in Microarray Data
TAPAK, Leili; MAHJUB, Hossein; SADEGHIFAR, Majid; SAIDIJAM, Massoud; POOROLAJAL, Jalal
2016-01-01
Background: One substantial part of microarray studies is to predict patients' survival based on their gene expression profiles. Variable selection techniques are powerful tools for handling high dimensionality in the analysis of microarray data, but they have not been investigated in the competing risks setting. This study aimed to investigate the performance of four sparse variable selection methods in estimating the survival time. Methods: The data included 1381 gene expression measurements and clinical information from 301 patients with bladder cancer operated on between 1987 and 2000 in hospitals in Denmark, Sweden, Spain, France, and England. Four methods, the least absolute shrinkage and selection operator (lasso), the smoothly clipped absolute deviation, the smooth integration of counting and absolute deviation, and the elastic net, were utilized for simultaneous variable selection and estimation under an additive hazards model. The criteria of area under the ROC curve, Brier score and c-index were used to compare the methods. Results: The median follow-up time for all patients was 47 months. The elastic net outperformed the other methods: it had the lowest integrated Brier score (0.137±0.07) and the greatest median over-time AUC and c-index (0.803±0.06 and 0.779±0.13, respectively). Five of the 19 genes selected by the elastic net were significant (P<0.05) under an additive hazards model. The expression of RTN4, SON, IGF1R and CDC20 decreases the survival time, while the expression of SMARCAD1 increases it. Conclusion: The elastic net had higher capability than the other methods for predicting survival time in patients with bladder cancer in the presence of competing risks, based on an additive hazards model. PMID:27114989
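As a toy illustration of the elastic net's sparse variable selection behaviour (shown here on a plain linear model, not the additive hazards model actually fitted in the study), a fit on synthetic data might look like the following; the data, penalty settings and use of scikit-learn are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

# Synthetic data: only the first two of 30 features carry signal.
rng = np.random.default_rng(0)
n, p = 100, 30
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * rng.standard_normal(n)

# The elastic net penalty mixes L1 (sparsity) and L2 (grouping) terms.
model = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
selected = np.flatnonzero(np.abs(model.coef_) > 1e-8)
```

The L1 component zeroes out the uninformative coefficients, leaving a small set of selected features, which is the behaviour the study exploits for gene selection.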
Testing seismic hazard models with Be-10 exposure ages for precariously balanced rocks
NASA Astrophysics Data System (ADS)
Rood, D. H.; Anooshehpoor, R.; Balco, G.; Brune, J.; Brune, R.; Ludwig, L. Grant; Kendrick, K.; Purvance, M.; Saleeby, I.
2012-04-01
Currently, the only empirical tool available to test maximum earthquake ground motions spanning timescales of 10 ky-1 My is the use of fragile geologic features, including precariously balanced rocks (PBRs). The ages of PBRs together with their areal distribution and mechanical stability ("fragility") constrain probabilistic seismic hazard analysis (PSHA) over long timescales; pertinent applications include the USGS National Seismic Hazard Maps (NSHM) and tests for ground motion models (e.g., Cybershake). Until recently, age constraints for PBRs were limited to varnish microlamination (VML) dating techniques and sparse cosmogenic nuclide data; however, VML methods yield minimum limiting ages for individual rock surfaces, and the interpretations of cosmogenic nuclide data were ambiguous because they did not account for the exhumation history of the PBRs or the complex shielding of cosmic rays. We have recently published a robust method for the exposure dating of PBRs combining Be-10 profiles, a numerical model, and a three-dimensional model for each PBR constructed using photogrammetry (Balco et al., 2011, Quaternary Geochronology). Here, we use this method to calculate new exposure ages and fragilities for 6 PBRs in southern California (USA) near the San Andreas, San Jacinto, and Elsinore faults at the Lovejoy Buttes, Round Top, Pacifico, Beaumont South, Perris, and Benton Road sites (in addition to the recently published age of 18.7 +/- 2.8 ka for a PBR at the Grass Valley site). We combine our ages and fragilities for each PBR, and use these data to test the USGS 2008 NSHM PGA with 2% in 50 year probability, USGS 2008 PSHA deaggregations, and basic hazard curves from USGS 2002 NSHM data.
Testing seismic hazard models with Be-10 exposure ages for precariously balanced rocks
NASA Astrophysics Data System (ADS)
Rood, D. H.; Anooshehpoor, R.; Balco, G.; Biasi, G. P.; Brune, J. N.; Brune, R.; Grant Ludwig, L.; Kendrick, K. J.; Purvance, M.; Saleeby, I.
2012-12-01
Currently, the only empirical tool available to test maximum earthquake ground motions spanning timescales of 10 ky-1 My is the use of fragile geologic features, including precariously balanced rocks (PBRs). The ages of PBRs together with their areal distribution and mechanical stability ("fragility") constrain probabilistic seismic hazard analysis (PSHA) over long timescales; pertinent applications include the USGS National Seismic Hazard Maps (NSHM) and tests for ground motion models (e.g., Cybershake). Until recently, age constraints for PBRs were limited to varnish microlamination (VML) dating techniques and sparse cosmogenic nuclide data; however, VML methods yield minimum limiting ages for individual rock surfaces, and the interpretations of cosmogenic nuclide data were ambiguous because they did not account for the exhumation history of the PBRs or the complex shielding of cosmic rays. We have recently published a robust method for the exposure dating of PBRs combining Be-10 profiles, a numerical model, and a three-dimensional shape model for each PBR constructed using photogrammetry (Balco et al., 2011, Quaternary Geochronology). Here, we use our published method to calculate new exposure ages for PBRs at 6 sites in southern California near the San Andreas, San Jacinto, and Elsinore faults, including: Lovejoy Buttes (9 +/- 1 ka), Round Top (35 +/- 1 ka), Pacifico (19 +/- 1 ka, but with a poor fit to data), Beaumont South (17 +/- 2 ka), Perris (24 +/- 2 ka), and Benton Road (40 +/- 1 ka), in addition to the recently published age of 18.5 +/- 2.0 ka for a PBR at the Grass Valley site. We combine our ages and fragilities for each PBR, and use these data to test the USGS 2008 NSHM PGA with 2% in 50 year probability, USGS 2008 PSHA deaggregations, and basic hazard curves from USGS 2002 NSHM data.
TRENT2D WG: a smart web infrastructure for debris-flow modelling and hazard assessment
NASA Astrophysics Data System (ADS)
Zorzi, Nadia; Rosatti, Giorgio; Zugliani, Daniel; Rizzi, Alessandro; Piffer, Stefano
2016-04-01
Mountain regions are naturally exposed to geomorphic flows, which involve large amounts of sediment and induce significant morphological modifications. The physical complexity of this class of phenomena makes modelling challenging, leading to elaborate theoretical frameworks and sophisticated numerical techniques. In general, geomorphic-flow models have proved to be valid tools in hazard assessment and management. However, model complexity seems to be one of the main obstacles to the diffusion of advanced modelling tools among practitioners and stakeholders, although the EU Floods Directive (2007/60/EC) requires risk management and assessment to be based on "best practices and best available technologies". Furthermore, several cutting-edge models are not particularly user-friendly, and multiple stand-alone software tools are needed to pre- and post-process modelling data. For all these reasons, users often resort to quicker and rougher approaches, possibly leading to unreliable results. Some effort therefore seems necessary to overcome these drawbacks, with the purpose of supporting and encouraging widespread diffusion of the most reliable, albeit sophisticated, modelling tools. With this aim, this work presents TRENT2D WG, a new smart modelling solution for the state-of-the-art model TRENT2D (Armanini et al., 2009; Rosatti and Begnudelli, 2013), which simulates debris flows and hyperconcentrated flows adopting a two-phase description over a mobile bed. TRENT2D WG is a web infrastructure that joins the advantages offered by the SaaS (Software as a Service) delivery model and by WebGIS technology, hosting a complete and user-friendly working environment for modelling. To develop TRENT2D WG, the model TRENT2D was converted into a service and exposed on a cloud server, transferring computational burdens from the user hardware to a high-performing server and reducing computational time. Then, the system was equipped with an
Ao, Di; Song, Rong; Gao, Jin-Wu
2016-06-22
Although the merits of electromyography (EMG)-based control of powered assistive systems have been demonstrated, the factors that affect the performance of EMG-based human-robot cooperation, which are very important, have received little attention. This study investigates whether a more physiologically appropriate model could improve the performance of human-robot cooperation control for an ankle power-assist exoskeleton robot. To achieve this goal, an EMG-driven Hill-type neuromusculoskeletal model (HNM) and a linear proportional model (LPM) were developed and calibrated through maximum isometric voluntary dorsiflexion (MIVD). Both control models can estimate the real-time ankle joint torque, but the HNM is more accurate and can account for changes in joint angle and muscle dynamics. Eight healthy volunteers were recruited to wear the ankle exoskeleton robot and complete a series of sinusoidal tracking tasks in the vertical plane. With various levels of assistance based on the two calibrated models, the subjects were instructed to track the target displayed on the screen as accurately as possible by performing ankle dorsiflexion and plantarflexion. Two measurements, the root mean square error (RMSE) and root mean square jerk (RMSJ), were derived from the assistive torque and kinematic signals to characterize movement performance, whereas the amplitudes of the recorded EMG signals from the tibialis anterior (TA) and the gastrocnemius (GAS) were obtained to reflect muscular effort. The results demonstrated that muscular effort and the smoothness of tracking movements decreased with an increase in the assistance ratio. Compared with the LPM, subjects made lower physical efforts and generated smoother movements when using the HNM, which implies that a more physiologically appropriate model can enable more natural and human-like human-robot cooperation and has potential value for improving human-exoskeleton interaction in future applications.
Marginal regression approach for additive hazards models with clustered current status data.
Su, Pei-Fang; Chi, Yunchan
2014-01-15
Current status data arise naturally in tumorigenicity experiments, epidemiology, biomedicine, econometrics, and demographic and sociological studies. Moreover, clustered current status data may occur with animals from the same litter in tumorigenicity experiments or with subjects from the same family in epidemiology studies. Because the only information extracted from current status data is whether the survival times are before or after the monitoring or censoring times, the nonparametric maximum likelihood estimator of the survival function converges at a rate of n^(1/3) to a complicated limiting distribution. Hence, semiparametric regression models such as the additive hazards model have been extended for independent current status data to derive test statistics, whose distributions converge at a rate of n^(1/2), for testing the regression parameters. However, a straightforward application of these statistical methods to clustered current status data is not appropriate because intracluster correlation needs to be taken into account. Therefore, this paper proposes two estimating functions for estimating the parameters in the additive hazards model for clustered current status data. Comparative results from simulation studies are presented, and the application of the proposed estimating functions to a real data set is illustrated.
Atmospheric Electrical Modeling in Support of the NASA F-106 Storm Hazards Project
NASA Technical Reports Server (NTRS)
Helsdon, John H., Jr.
1988-01-01
A recently developed storm electrification model (SEM) is used to investigate the operating environment of the F-106 airplane during the NASA Storm Hazards Project. The model is two-dimensional and time-dependent and uses a bulk-water microphysical parameterization scheme. Electric charges and fields are included, and the model is fully coupled dynamically, microphysically and electrically. One flight showed that a high electric field developed at the aircraft's operating altitude (28 kft) and that a strong electric field would also be found below 20 kft; however, this low-altitude, high-field region was associated with the presence of small hail, posing a hazard to the aircraft. An operational procedure to increase the frequency of low-altitude lightning strikes was suggested. To further the understanding of lightning within the cloud environment, a parameterization of the lightning process was included in the SEM, accounting for the initiation, propagation, termination, and charge redistribution associated with an intracloud discharge. Finally, a randomized lightning propagation scheme was developed, and the effects of cloud particles on the initiation of lightning were investigated.
Abbes, Ilham Ben; Richard, Pierre-Yves; Lefebvre, Marie-Anne; Guilhem, Isabelle; Poirier, Jean-Yves
2013-01-01
Background Most closed-loop insulin delivery systems rely on model-based controllers to control the blood glucose (BG) level. Simple models of glucose metabolism, which allow easy design of the control law, are limited in their parametric identification from raw data, so new control models and the controllers derived from them are needed. Methods A proportional-integral-derivative (PID) controller with a double phase-lead compensator was proposed. Its design was based on a linearization of a new nonlinear control model of the glucose–insulin system in type 1 diabetes mellitus (T1DM) patients, validated with the University of Virginia/Padova T1DM metabolic simulator. A 36 h scenario, including six unannounced meals, was tested in nine virtual adults. A previous trial database was used to compare the performance of our controller with earlier results. The scenario was repeated 25 times for each adult in order to take continuous glucose monitoring noise into account. The primary outcome was the time BG levels were in the target range (70–180 mg/dl). Results Blood glucose values were in the target range for 77% of the time and below 50 mg/dl and above 250 mg/dl for 0.8% and 0.3% of the time, respectively. The low blood glucose index and high blood glucose index were 1.65 and 3.33, respectively. Conclusion The linear controller presented, based on the linearization of a new easily identifiable nonlinear model, achieves good glucose control with low exposure to hypoglycemia and hyperglycemia. PMID:23759403
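To make the control structure concrete, here is a minimal discrete PID core in Python; the paper's controller additionally includes a double phase-lead stage, and the gains and sample time below are illustrative assumptions rather than values from the study.

```python
class PID:
    """Discrete PID controller (hedged sketch; gains and dt are illustrative)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0       # accumulated error (rectangle rule)
        self.prev_error = None    # previous error for the derivative term

    def step(self, setpoint, measurement):
        """Return the control output for one sampling interval."""
        error = setpoint - measurement
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

In a glucose-control setting the setpoint would be a target BG level and the output an insulin infusion command, with the phase-lead stages shaping the loop's transient response.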
NASA Astrophysics Data System (ADS)
Pradhan, Biswajeet; Lee, Saro; Shattri, Mansor
This paper deals with landslide hazard analysis and cross-application using Geographic Information System (GIS) and remote sensing data for Cameron Highland, Penang Island and Selangor in Malaysia. The aim of this study was to cross-apply and verify a spatial probabilistic model for landslide hazard analysis. Landslide locations were identified in the study area from interpretation of aerial photographs and field surveys. Topographical/geological data and satellite images were collected and processed using GIS and image processing tools. Ten landslide-inducing parameters were considered for the landslide hazard analysis: topographic slope, aspect, curvature and distance from drainage, all derived from the topographic database; geology and distance from lineament, derived from the geologic database; land use, from Landsat satellite images; soil, from the soil database; precipitation amount, derived from the rainfall database; and the vegetation index value, from SPOT satellite images. These factors were analyzed using an artificial neural network model to generate the landslide hazard map. Each factor's weight was determined by the back-propagation training method, the landslide hazard indices were calculated using the trained back-propagation weights, and finally the landslide hazard map was generated using GIS tools. Landslide hazard maps were drawn for the three areas using the artificial neural network model derived not only from the data for that area but also from the weights for each parameter calculated from each of the other two areas (nine maps in all), as a cross-check of the validity of the method. For verification, the results of the analyses were compared, in each study area, with actual landslide locations. The verification results showed sufficient agreement between the predicted hazard maps and the existing data on landslide areas.
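Conceptually, the hazard-index computation described above reduces to a forward pass through a trained network over the per-cell factor values; the sketch below shows a one-hidden-layer version with sigmoid activations, where the weight matrices stand in for back-propagation-trained weights (the shapes, activations and layer count are assumptions, not the study's architecture).

```python
import numpy as np

def hazard_index(factors, w_hidden, b_hidden, w_out, b_out):
    """Landslide hazard index from a one-hidden-layer network (illustrative).

    factors  : (n_cells, n_factors) normalized factor values per grid cell
    w_hidden : (n_factors, n_hidden) weights; w_out : (n_hidden,) output weights
    The weights are assumed to come from back-propagation training, as in
    the study; the result is an index in (0, 1) per cell.
    """
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    hidden = sigmoid(factors @ w_hidden + b_hidden)
    return sigmoid(hidden @ w_out + b_out)
```

Mapping this index back onto the grid cells in a GIS yields the hazard map; swapping in weights trained on another area gives the cross-applied maps the abstract describes.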
A "mental models" approach to the communication of subsurface hydrology and hazards
NASA Astrophysics Data System (ADS)
Gibson, Hazel; Stewart, Iain S.; Pahl, Sabine; Stokes, Alison
2016-05-01
Communicating information about geological and hydrological hazards relies on appropriately worded communications targeted at the needs of the audience. But what are these needs, and how does the geoscientist discern them? This paper adopts a psychological "mental models" approach to assess the public perception of the geological subsurface, presenting the results of attitudinal studies and surveys in three communities in the south-west of England. The findings reveal important preconceptions and misconceptions regarding the impact of hydrological systems and hazards on the geological subsurface, notably in terms of the persistent conceptualisation of underground rivers and the inferred relations between flooding and human activity. The study demonstrates how such mental models can provide geoscientists with empirical, detailed and generalised data on perceptions surrounding an issue, as well as reveal unexpected outliers in perception that they may not have considered relevant, but which nevertheless may locally influence communication. Using this approach, geoscientists can develop information messages that more directly engage local concerns and create open engagement pathways based on dialogue, which in turn allow both geoscience "experts" and local "non-experts" to come together and understand each other more effectively.
Pal, Parimal; Das, Pallabi; Chakrabortty, Sankha; Thakura, Ritwik
2016-11-01
Dynamic modelling and simulation of an integrated nanofiltration–forward osmosis system was carried out, along with an economic evaluation, to pave the way for scale-up of such a system for treating hazardous pharmaceutical wastes. The system, operated in a closed loop, not only protects surface water from the onslaught of hazardous industrial wastewater but also saves on the cost of fresh water by making wastewater recyclable at an affordable price. The success of the dynamic model in capturing the relevant transport phenomena is reflected in a high overall correlation coefficient (R^2 > 0.98), a low relative error (<0.1) and a high Willmott d-index (>0.95). The system could remove more than 97.5% of chemical oxygen demand (COD) from real pharmaceutical wastewater with an initial COD value as high as 3500 mg/L, while ensuring operation of the forward osmosis loop at a reasonably high flux of 56-58 L per square metre per hour.
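For reference, the Willmott index of agreement cited above can be computed as follows; this is the standard formulation (d = 1 for a perfect match), with variable names chosen for illustration.

```python
import numpy as np

def willmott_d(obs, pred):
    """Willmott's index of agreement d between observations and predictions.

    d = 1 - sum((O - P)^2) / sum((|P - Obar| + |O - Obar|)^2),
    where Obar is the mean of the observations; d = 1 is a perfect match.
    """
    obs = np.asarray(obs, dtype=float)
    pred = np.asarray(pred, dtype=float)
    obar = obs.mean()
    num = np.sum((obs - pred) ** 2)
    den = np.sum((np.abs(pred - obar) + np.abs(obs - obar)) ** 2)
    return 1.0 - num / den
```

Predicting the observed mean everywhere gives d = 0, so values near 1, as reported above, indicate that the model tracks the observed dynamics rather than just their average.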
NASA Astrophysics Data System (ADS)
Grasso, S.; Maugeri, M.
rigorous complex methods of analysis or qualitative procedures. A semi-quantitative procedure based on the definition of a geotechnical hazard index has been applied for the zonation of the seismic geotechnical hazard of the city of Catania. In particular, this procedure has been applied to define the influence of the geotechnical properties of the soil in a central area of the city, where some historical buildings of great importance are sited. An investigation was also performed, based on the inspection of more than one hundred historical ecclesiastical buildings of great importance located in the city. Then, in order to identify the amplification effects due to site conditions, a geotechnical survey form was prepared to allow a semi-quantitative evaluation of the seismic geotechnical hazard for all these buildings. In addition, to evaluate the foundation-soil time-history response, a 1-D dynamic soil model was employed for all these buildings, considering the non-linearity of soil behaviour. Using a GIS, a map of the seismic geotechnical hazard, a map of the liquefaction hazard and a preliminary map of the seismic hazard for the city of Catania have been obtained. From the analysis of the results it may be noticed that the high-hazard zones are mainly clayey sites
Corrales, Jone; Kristofco, Lauren A; Steele, W Baylor; Saari, Gavin N; Kostal, Jakub; Williams, E Spencer; Mills, Margaret; Gallagher, Evan P; Kavanagh, Terrance J; Simcox, Nancy; Shen, Longzhu Q; Melnikov, Fjodor; Zimmerman, Julie B; Voutchkova-Kostal, Adelina M; Anastas, Paul T; Brooks, Bryan W
2016-11-03
Sustainable molecular design of less hazardous chemicals presents a potentially transformative approach to protect public health and the environment. Relationships between molecular descriptors and toxicity thresholds previously identified the octanol-water distribution coefficient, log D, and the HOMO-LUMO energy gap, ΔE, as two useful properties in the identification of reduced aquatic toxicity. To determine whether these two property-based guidelines are applicable to sublethal oxidative stress (OS) responses, two common aquatic in vivo models, the fathead minnow (Pimephales promelas) and zebrafish (Danio rerio), were employed to examine traditional biochemical biomarkers (lipid peroxidation, DNA damage, and total glutathione) and antioxidant gene activation following exposure to eight structurally diverse industrial chemicals (bisphenol A, cumene hydroperoxide, dinoseb, hydroquinone, indene, perfluorooctanoic acid, R-(-)-carvone, and tert-butyl hydroperoxide). Bisphenol A, cumene hydroperoxide, dinoseb, and hydroquinone were consistent inducers of OS. Glutathione was the most consistently affected biomarker, suggesting its utility as a sensitivity response to support the design of less hazardous chemicals. Antioxidant gene expression (changes in nrf2, gclc, gst, and sod) was most significantly (p < 0.05) altered by R-(-)-carvone, cumene hydroperoxide, and bisphenol A. Results from the present study indicate that metabolism of parent chemicals and the role of their metabolites in molecular initiating events should be considered during the design of less hazardous chemicals. Current empirical and computational findings identify the need for future derivation of sustainable molecular design guidelines for electrophilic reactive chemicals (e.g., SN2 nucleophilic substitution and Michael addition reactivity) to reduce OS related adverse outcomes in vivo.
A computationally efficient 2D hydraulic approach for global flood hazard modeling
NASA Astrophysics Data System (ADS)
Begnudelli, L.; Kaheil, Y.; Sanders, B. F.
2014-12-01
We present a physically-based flood hazard model that incorporates two main components: a hydrologic model and a hydraulic model. For hydrology we use TOPNET, a more comprehensive version of the original TOPMODEL. To simulate flood propagation, we use a 2D Godunov-type finite volume shallow water model. Physically-based global flood hazard simulation poses enormous computational challenges stemming from the increasingly fine resolution of available topographic data, which represents the key input. Parallel computing helps to distribute the computational cost, but the computationally intensive hydraulic model must be made far faster and more agile for global-scale feasibility. Here we present a novel technique for hydraulic modeling whereby the computational grid is much coarser (e.g., 5-50 times) than the available topographic data, but the coarse grid retains the storage and conveyance (cross-sectional area) of the fine-resolution data. This allows the 2D hydraulic model to be run on extremely large domains (e.g., thousands of km²) with a single computational processor, and opens the door to global coverage with parallel computing. The model also downscales the coarse-grid results onto the high-resolution topographic data to produce fine-scale predictions of flood depths and velocities. The model achieves computational speeds typical of very coarse grids while achieving an accuracy expected of a much finer resolution. In addition, the model has potential for assimilation of remotely sensed water elevations, to define boundary conditions based on water levels or river discharges, and to improve model results. The model is applied to two river basins: the Susquehanna River in Pennsylvania and the Ogeechee River in Georgia. The two rivers represent different scales and span a wide range of topographic characteristics. Comparing spatial resolutions ranging from 30 m to 500 m in both river basins, the new technique was able to reduce simulation runtime by at least 25-fold
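The key upscaling idea, retaining fine-resolution storage on a coarse grid, can be sketched as a stage-storage curve built from the fine elevations inside each coarse cell; this is an illustrative reading of the approach, not the authors' code, and the function name and units are assumptions.

```python
import numpy as np

def stage_storage(z_fine, dx, stages):
    """Stage-storage curve for one coarse cell from fine-resolution elevations.

    z_fine : 1D array of fine-cell bed elevations (m) inside the coarse cell
    dx     : fine-cell size (m)
    stages : water-surface elevations (m) at which to evaluate storage

    Storage at stage h is sum(max(0, h - z) * dx^2) over the fine cells, so
    the coarse model preserves the volume implied by the fine topography.
    """
    z = np.asarray(z_fine, dtype=float)
    return np.array([np.sum(np.clip(h - z, 0.0, None)) * dx * dx for h in stages])
```

A coarse solver can interpolate this curve to convert between water level and stored volume in each cell, recovering much of the fine grid's accuracy at a fraction of the cost.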
Suzette Payne
2006-04-01
This report summarizes how the effects of the sedimentary interbedded basalt stratigraphy were modeled in the probabilistic seismic hazard analysis (PSHA) of the Idaho National Laboratory (INL). Drill holes indicate the bedrock beneath INL facilities is composed of about 1.1 km of alternating layers of basalt rock and loosely consolidated sediments. Alternating layers of hard rock and “soft” loose sediments tend to attenuate seismic energy greater than uniform rock due to scattering and damping. The INL PSHA incorporated the effects of the sedimentary interbedded basalt stratigraphy by developing site-specific shear (S) wave velocity profiles. The profiles were used in the PSHA to model the near-surface site response by developing site-specific stochastic attenuation relationships.
NASA Astrophysics Data System (ADS)
Chen, Lixia; van Westen, Cees J.; Hussin, Haydar; Ciurean, Roxana L.; Turkington, Thea; Chavarro-Rincon, Diana; Shrestha, Dhruba P.
2016-11-01
Extreme rainfall events are the main triggers of hydro-meteorological hazards in mountainous areas, where development is often constrained by the limited space suitable for construction. In these areas, hazard and risk assessments are fundamental for risk mitigation, especially for preventive planning, risk communication and emergency preparedness. Multi-hazard risk assessment in mountainous areas at local and regional scales remains a major challenge because of the lack of data related to past events and causal factors, and the interactions between different types of hazards. The lack of data leads to a high level of uncertainty in the application of quantitative methods for hazard and risk assessment. Therefore, a systematic approach is required to combine these quantitative methods with expert-based assumptions and decisions. In this study, a quantitative multi-hazard risk assessment was carried out in the Fella River valley, an area prone to debris flows and floods in the north-eastern Italian Alps. The main steps include data collection and development of inventory maps, definition of hazard scenarios, hazard assessment in terms of temporal and spatial probability calculation and intensity modelling, elements-at-risk mapping, estimation of asset values and the number of people, physical vulnerability assessment, generation of risk curves and annual risk calculation. To compare the risk for each type of hazard, risk curves were generated for debris flows, river floods and flash floods. Uncertainties were expressed as minimum, average and maximum values of temporal and spatial probability, replacement costs of assets, population numbers, and physical vulnerability, resulting in minimum, average and maximum risk curves. To validate this approach, a back-analysis was conducted using the extreme hydro-meteorological event that occurred in August 2003 in the Fella River valley. The results show good performance when compared with the historical damage reports.
Paukatong, K V; Kunawasen, S
2001-01-01
Nham is a traditional Thai fermented pork sausage. The major ingredients of Nham are ground pork and shredded pork rind. Nham has been reported to be contaminated with Salmonella spp., Staphylococcus aureus, and Listeria monocytogenes; therefore, it is a potential cause of foodborne disease for consumers. A Hazard Analysis and Critical Control Points (HACCP) generic model has been developed for the Nham process. Nham processing plants were observed and a generic flow diagram of Nham processes was constructed. Hazard analysis was then conducted. In addition to the microbial hazards from the pathogens previously found in Nham, sodium nitrite and metal were identified as chemical and physical hazards in this product, respectively. Four steps in the Nham process have been identified as critical control points: the weighing of the nitrite compound, stuffing, fermentation, and labeling. The chemical hazard of nitrite must be controlled during the weighing step; the critical limit of nitrite levels in the Nham mixture has been set at 100-200 ppm. This level is high enough to control Clostridium botulinum but does not pose a chemical hazard to the consumer. The physical hazard from metal clips can be prevented by visual inspection of every Nham product during stuffing. The microbiological hazard in Nham can be reduced in the fermentation process, with the critical limit of pH set at lower than 4.6. Finally, since this product is not cooked during processing, educating the consumer by providing label information such as "safe if cooked before consumption" could be an alternative way to prevent the microbiological hazards of this product.
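The two numeric critical limits given in the abstract (nitrite 100-200 ppm at weighing, pH below 4.6 after fermentation) lend themselves to simple automated monitoring checks. The sketch below only encodes those two limits; the function names are invented for illustration.

```python
# Sketch: critical-limit checks for the two numeric Nham CCPs above.

def nitrite_within_limit(ppm):
    """100 ppm is needed to control C. botulinum; above 200 ppm the
    nitrite itself becomes a chemical hazard to the consumer."""
    return 100 <= ppm <= 200

def fermentation_ok(ph):
    """pH must fall below 4.6 to control pathogens in the uncooked product."""
    return ph < 4.6

print(nitrite_within_limit(150), fermentation_ok(4.4))  # True True
print(nitrite_within_limit(250), fermentation_ok(4.8))  # False False
```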
Buckley, Patrick Henry; Takahashi, Akio; Anderson, Amy
2015-06-24
In the last half century former international adversaries have become cooperators through networking and knowledge sharing for decision making aimed at improving quality of life and sustainability; nowhere has this been more striking than at the urban level, where such activity is seen as a key component in building "learning cities" through the development of social capital. Although mega-cities have been leaders in such efforts, mid-sized cities with lesser resource endowments have striven to follow by focusing on more frugal sister-city type exchanges. The underlying thesis of our research is that great value can be derived from city-to-city exchanges through social capital development. However, such a study must differentiate between necessary and sufficient conditions. Past studies assumed necessary conditions were met and immediately jumped to demonstrating the existence of structural relationships by measuring networking, while further assuming that the existence of such networks demonstrated a parallel development of cognitive social capital. Our research addresses this lacuna by stepping back and critically examining these assumptions. To accomplish this goal we use a Proportional Odds Model with a Cumulative Logit Link to demonstrate the existence of a common latent structure, hence asserting that necessary conditions are met.
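The cumulative-logit (proportional-odds) link named above models an ordinal response by placing one shared slope across all category cutpoints: P(Y <= j | x) = logistic(alpha_j - beta*x). A minimal sketch of how category probabilities follow from that structure (all cutpoint and slope values are hypothetical):

```python
import math

# Sketch of the cumulative-logit (proportional-odds) link:
# P(Y <= j | x) = logistic(alpha_j - beta * x), with a single slope
# beta shared across all cutpoints alpha_j ("proportional odds").

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def category_probs(alphas, beta, x):
    """Probabilities of each ordered category given covariate x."""
    cum = [logistic(a - beta * x) for a in alphas] + [1.0]
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

# Three ordered response levels => two cutpoints (hypothetical values)
p = category_probs(alphas=[-1.0, 1.0], beta=0.8, x=0.5)
print([round(v, 3) for v in p], round(sum(p), 3))
```

In practice such a model would be fitted to survey responses with an ordinal-regression routine; the point here is only the shared-slope structure that makes the odds "proportional".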
NASA Astrophysics Data System (ADS)
Zolfaghari, Mohammad R.
2009-07-01
Recent achievements in computer and information technology have provided the necessary tools to extend the application of probabilistic seismic hazard mapping from its traditional engineering use to many other applications, such as risk mitigation, disaster management, post-disaster recovery planning, catastrophe loss estimation and risk management. Due to limited knowledge of the factors controlling seismic hazards, uncertainties are associated with all steps involved in developing and using seismic hazard models. While some of these uncertainties can be controlled by more accurate and reliable input data, the majority of the data and assumptions used in seismic hazard studies remain highly uncertain and contribute to the uncertainty of the final results. In this paper a new methodology for the assessment of seismic hazard is described. The proposed approach provides a practical facility for better capturing spatial variations of seismological and tectonic characteristics, which allows better treatment of their uncertainties. GIS raster-based data models are used to represent geographical features in a cell-based system. The cell-based source model proposed in this paper provides a framework for implementing many geographically referenced seismotectonic factors into seismic hazard modelling, such as seismic source boundaries, rupture geometry, seismic activity rate, focal depth and the choice of attenuation functions. The proposed methodology improves several aspects of the standard analytical tools currently used for the assessment and mapping of regional seismic hazard, makes the best use of recent advances in computer hardware and software, and is well structured for implementation with conventional GIS tools.
Development of models to inform a national Daily Landslide Hazard Assessment for Great Britain
NASA Astrophysics Data System (ADS)
Dijkstra, Tom A.; Reeves, Helen J.; Dashwood, Claire; Pennington, Catherine; Freeborough, Katy; Mackay, Jonathan D.; Uhlemann, Sebastian S.; Chambers, Jonathan E.; Wilkinson, Paul B.
2015-04-01
were combined with records of observed landslide events to establish which antecedent effective precipitation (AEP) signatures of different durations could be used as a pragmatic proxy for the occurrence of landslides. It was established that 1-, 7-, and 90-day AEP provided the most significant correlations, and these were used to calculate the probability of at least one landslide occurring. The method was then extended over the period 2006 to 2014 and the results were evaluated against observed occurrences. It is recognised that AEP is a relatively poor proxy for simulating effective stress conditions along potential slip surfaces. However, the temporal pattern of landslide probability compares well with the observed occurrences and provides a potential benefit to assist with the DLHA. Further work is continuing to fine-tune the model for landslide type, better spatial resolution of effective precipitation input, and cross-reference to models that capture changes in water balance and conditions along slip surfaces. The latter is facilitated by intensive research at several field laboratories, such as the Hollin Hill site in Yorkshire, England, where a decade of activity has generated a broad range of research and a wealth of data. This paper reports on one example of recent work: the characterisation of near-surface hydrology using infiltration experiments in which hydrological pathways are captured, among others, by electrical resistivity tomography. This research, which has further developed our understanding of soil moisture movement in a heterogeneous landslide complex, has highlighted the importance of establishing detailed ground models to enable determination of landslide potential at high resolution. In turn, the knowledge gained through this research is used to enhance the expertise for the daily landslide hazard assessments at a national scale.
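The AEP proxies described above are rolling precipitation sums over 1, 7 and 90 days, which can then feed a probability model for at least one landslide per day. A minimal sketch under stated assumptions: the logistic form and all coefficients are hypothetical stand-ins, not the BGS model.

```python
import math

# Sketch: antecedent effective precipitation (AEP) as rolling sums over
# 1, 7 and 90 days, fed into a logistic model for the daily probability
# of at least one landslide. Coefficients are hypothetical.

def rolling_sum(series, window):
    """Trailing-window sum; shorter windows at the start of the record."""
    return [sum(series[max(0, i - window + 1):i + 1]) for i in range(len(series))]

def landslide_probability(aep1, aep7, aep90, b0=-4.0, b=(0.15, 0.03, 0.005)):
    z = b0 + b[0] * aep1 + b[1] * aep7 + b[2] * aep90
    return 1.0 / (1.0 + math.exp(-z))

rain = [0, 12, 3, 0, 25, 8, 0, 0, 40, 5]  # daily effective precipitation, mm
aep1, aep7 = rain, rolling_sum(rain, 7)
p = landslide_probability(aep1[8], aep7[8], aep90=120.0)  # day 8 of record
print(round(p, 3))
```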
Model uncertainties of the 2002 update of California seismic hazard maps
Cao, T.; Petersen, M.D.; Frankel, A.D.
2005-01-01
In this article we present and explore the source and ground-motion model uncertainty and parametric sensitivity for the 2002 update of the California probabilistic seismic hazard maps. Our approach is to implement a Monte Carlo simulation that allows for independent sampling from fault to fault in each simulation. The source-distance-dependent characteristics of the seismic hazard uncertainty maps are explained by the fundamental uncertainty patterns from four basic test cases, in which the uncertainties from one-fault and two-fault systems are studied in detail. The California coefficient of variation (COV, ratio of the standard deviation to the mean) map for peak ground acceleration (10% probability of exceedance in 50 years) shows lower values (0.1-0.15) along the San Andreas fault system and other class A faults than along class B faults (0.2-0.3). High COV values (0.4-0.6) are found around the Garlock, Anacapa-Dume, and Palos Verdes faults in southern California and around the Maacama fault and Cascadia subduction zone in northern California.
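The Monte Carlo scheme above, sampling each fault's contribution independently in every simulation and summarising the spread as COV = standard deviation / mean, can be sketched as follows. The fault means and sigmas are hypothetical, not values from the California maps.

```python
import random, statistics

# Sketch: Monte Carlo estimate of the coefficient of variation (COV =
# standard deviation / mean) of a hazard value when each fault's
# contribution is sampled independently. Inputs are hypothetical.

def simulate_hazard(fault_means, fault_sigmas, n_sims=20000, seed=42):
    rng = random.Random(seed)
    totals = []
    for _ in range(n_sims):
        # independent sampling from fault to fault in each simulation
        totals.append(sum(rng.gauss(m, s) for m, s in zip(fault_means, fault_sigmas)))
    return totals

# Two hypothetical faults contributing to PGA (in g) at one site
totals = simulate_hazard([0.30, 0.10], [0.03, 0.04])
mean = statistics.fmean(totals)
cov = statistics.stdev(totals) / mean
print(round(mean, 2), round(cov, 2))
```

Repeating this per map cell yields a COV map like the one described in the abstract.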
NASA Technical Reports Server (NTRS)
Butler, David R.; Walsh, Stephen J.; Brown, Daniel G.
1991-01-01
Methods are described for using Landsat Thematic Mapper digital data and digital elevation models for the display of natural hazard sites in a mountainous region of northwestern Montana, USA. Hazard zones can be easily identified on the three-dimensional images. Proximity of facilities such as highways and building locations to hazard sites can also be easily displayed. A temporal sequence of Landsat TM (or similar) satellite data sets could also be used to display landscape changes associated with dynamic natural hazard processes.
Visual Manipulatives for Proportional Reasoning.
ERIC Educational Resources Information Center
Moore, Joyce L.; Schwartz, Daniel L.
The use of a visual representation in learning about proportional relations was studied, examining students' understandings of the invariance of a multiplicative relation on both sides of a proportion equation and the invariance of the structural relations that exist in different semantic types of proportion problems. Subjects were 49 high-ability…
A First Comparison of Multiple Probability Hazard Outputs from Three Global Flood Models
NASA Astrophysics Data System (ADS)
Trigg, M. A.; Bates, P. D.; Fewtrell, T. J.; Yamazaki, D.; Pappenberger, F.; Winsemius, H.
2014-12-01
With research advances in algorithms, remote sensing data sets and computing power, global flood models are now a practical reality. A number of different research models are currently available or in development, and as these models mature and their output becomes available for use, there is great interest in how the models compare and how useful they may be at different scales. At the kick-off meeting of the Global Flood Partnership (GFP) in March 2014, the need to compare these new global flood models was identified as a research priority, both for developers of the models and for users of their output. The GFP is an informal network of scientists and practitioners from public, private and international organisations providing or using global flood monitoring, modelling and forecasting (http://portal.gdacs.org/Global-Flood-Partnership). On behalf of the GFP, the Willis Research Network is undertaking this comparison research, and the work presented here is the result of the first phase of this comparison for three models: CaMa-Flood, GLOFRIS and ECMWF. The comparison analysis is undertaken for the entire African continent, identified by GFP members as the best location to facilitate data sharing by model teams and the region with the most interest from potential users of the model outputs. Initial results include flooded area for a range of hazard return periods (25, 50, 100, 250, 500, 1000 years), which is also compared against catchment sizes and climatic zones. Results will be discussed in the context of the different model structures and input data used, while also addressing scale issues and practicalities of use. Finally, plans for the validation of the models against microwave and optical remote sensing data will be outlined.
NASA Astrophysics Data System (ADS)
Turchaninova, A.
2012-04-01
The estimation of extreme avalanche runout distances, flow velocities, impact pressures and volumes is an essential part of snow engineering in the mountain regions of Russia, and underpins avalanche hazard assessment and mapping. Russian guidelines accept the application of different avalanche models as well as different approaches for the estimation of model input parameters. Consequently, different teams of engineers in Russia apply various dynamics and statistical models in engineering practice. This gives avalanche practitioners and experts more freedom, but also causes considerable uncertainty given the serious limitations of avalanche models. We discuss these problems by presenting the application results of different well-known and widely used statistical models (developed in Russia) and avalanche dynamics models for several avalanche test sites in the Khibini Mountains (Kola Peninsula) and the Caucasus. The most accurate and well-documented data on powder and wet snow avalanches, from large rare events to small frequent ones, have been collected in the Khibini Mountains by the Avalanche Safety Center of "Apatit" from the 1960s until today. These data were digitized and are available for use and analysis. A detailed digital avalanche database (GIS) was then created for the first time. It contains contours of observed avalanches (ESRI shapefiles, more than 50 years of observations), DEMs, remote sensing data, descriptions of snow pits, photos, etc. The Russian avalanche data are thus a unique source of information for understanding avalanche flow rheology and for the future development and calibration of avalanche dynamics models. The GIS database was used to analyze model input parameters and to calibrate and verify avalanche models. Regarding extreme dynamic parameters, the outputs of different models can differ significantly, which is unacceptable for engineering purposes in the absence of well-defined guidelines in Russia. The frequency curves for the runout distance
Petersen, Mark D.; Zeng, Yuehua; Haller, Kathleen M.; McCaffrey, Robert; Hammond, William C.; Bird, Peter; Moschetti, Morgan; Shen, Zhengkang; Bormann, Jayne; Thatcher, Wayne
2014-01-01
The 2014 National Seismic Hazard Maps for the conterminous United States incorporate more uncertainty in the fault slip-rate parameters that control earthquake-activity rates than was applied in previous versions of the hazard maps. This additional uncertainty is accounted for by new geodesy- and geology-based slip-rate models for the Western United States. The models considered include an updated geologic model based on expert opinion and four combined inversion models informed by both geologic and geodetic input. The two block models considered indicate significantly higher slip rates than the expert-opinion model and the two fault-based combined inversion models. For the hazard maps, we apply 20 percent weight, with equal weighting for the two fault-based models. Off-fault geodetic-based models were not considered in this version of the maps. The resulting changes to the hazard maps are generally less than 0.05 g (acceleration of gravity). Future research will improve the maps and interpret differences between the new models.
Lava Flow Hazard Modeling during the 2014-2015 Fogo eruption, Cape Verde
NASA Astrophysics Data System (ADS)
Del Negro, C.; Cappello, A.; Ganci, G.; Calvari, S.; Perez, N. M.; Hernandez Perez, P. A.; Victoria, S. S.; Cabral, J.
2015-12-01
Satellite remote sensing techniques and lava flow forecasting models have been combined to allow an ensemble response during effusive crises at poorly monitored volcanoes. Here, we use the HOTSAT volcano hot spot detection system, which works with satellite thermal infrared data, and the MAGFLOW lava flow emplacement model, which considers the way in which effusion rate changes during an eruption, to forecast lava flow hazards during the 2014-2015 Fogo eruption. In many ways this was one of the major effusive eruption crises of recent years, since the lava flows actually invaded populated areas. HOTSAT is used to promptly analyze MODIS and SEVIRI data to output hot spot location, lava thermal flux, and effusion rate estimates. We use this output to drive MAGFLOW simulations of lava flow paths and to continuously update the flow simulations. Satellite-derived TADR estimates can be obtained in real time, and lava flow simulations of several days of eruption can be calculated in a few minutes, making such a combined approach of paramount importance for providing timely forecasts of the areas that a lava flow could possibly inundate. In addition, such forecasting scenarios can be continuously updated in response to changes in the eruptive activity as detected by satellite imagery. We also show how Landsat-8 OLI and EO-1 ALI images complement the field observations for tracking the flow front position through time, and add considerable data on lava flow advancement to validate the results of numerical simulations. Our results thus demonstrate how the combination of satellite remote sensing and lava flow modeling can be effectively used during eruptive crises to produce realistic lava flow hazard scenarios and to assist local authorities in making decisions during a volcanic eruption.
Uncertainty quantification in satellite-driven modeling to forecast lava flow hazards
NASA Astrophysics Data System (ADS)
Ganci, Gaetana; Bilotta, Giuseppe; Cappello, Annalisa; Herault, Alexis; Zago, Vito; Del Negro, Ciro
2016-04-01
Over the last decades, satellite-based remote sensing and data processing techniques have proved well suited to complement field observations by providing timely event detection for volcanic effusive events, as well as extraction of parameters allowing lava flow tracking. In parallel, physics-based models for lava flow simulations have improved enormously and are now capable of fast, accurate simulations, which are increasingly driven by, or validated using, satellite-derived parameters such as lava flow discharge rates. Together, these capabilities represent a prompt strategy with immediate applications to the real-time monitoring and hazard assessment of effusive eruptions, but two important key issues still need to be addressed to improve its effectiveness: (i) the provision of source term parameters and their uncertainties, and (ii) how uncertainties in source terms propagate into the model outputs. We address these topics by considering uncertainties in satellite-derived products obtained by the HOTSAT thermal monitoring system (e.g. hotspot pixels, radiant heat flux, effusion rate) and evaluating how these uncertainties affect lava flow hazard scenarios by inputting them into the MAGFLOW physics-based model for lava flow simulations. Particular attention is given to topography and cloud effects on satellite-derived products, as well as to the frequency of their acquisitions (GEO vs. LEO). We also investigate how DEM resolution impacts the final scenarios from both the numerical and physical points of view. To evaluate these effects, three well-documented eruptions at Mt Etna are considered: a short-lived paroxysmal event (the 11-13 Jan 2011 lava fountain), a long-lasting eruption (2008-2009), and a short effusive event (14-24 July 2006).
NASA Astrophysics Data System (ADS)
Mergili, Martin; Schneider, Demian; Andres, Norina; Worni, Raphael; Gruber, Fabian; Schneider, Jean F.
2010-05-01
Lake outburst floods can evolve from complex process chains, like avalanches of rock or ice that produce flood waves in a lake which may overtop and eventually breach glacial, morainic, landslide, or artificial dams. Rising lake levels can lead to progressive incision and destabilization of a dam, to enhanced groundwater flow (piping), or even to hydrostatic failure of ice dams, which can cause sudden outflow of the accumulated water. These events often have a highly destructive potential because a large amount of water is released in a short time, with a high capacity to erode loose debris, leading to a powerful debris flow with a long travel distance. The best-known example of a lake outburst flood is the Vajont event (Northern Italy, 1963), where a landslide rushed into an artificial lake which spilled over and caused a flood leading to almost 2000 fatalities. Hazards from the failure of landslide dams are often (though not always) fairly manageable: most breaches occur in the first few days or weeks after the landslide event, and the rapid construction of a spillway, though problematic, has resolved some hazardous situations (e.g. in the case of the Hattian landslide in 2005 in Pakistan). Older dams, like the Usoi dam (Lake Sarez) in Tajikistan, are usually fairly stable, though landslides into the lakes may create flood waves overtopping and eventually weakening the dams. The analysis and mitigation of glacial lake outburst flood (GLOF) hazard remains a challenge. A number of GLOFs resulting in fatalities and severe damage have occurred during the previous decades, particularly in the Himalayas and in the mountains of Central Asia (Pamir, Tien Shan). The source area is usually far away from the area of impact and events occur at very long intervals or as singularities, so that the population at risk is usually not prepared. Even though potentially hazardous lakes can be identified relatively easily with remote sensing and field work, modeling and predicting of GLOFs (and also
Financial Distress Prediction Using Discrete-time Hazard Model and Rating Transition Matrix Approach
NASA Astrophysics Data System (ADS)
Tsai, Bi-Huei; Chang, Chih-Huei
2009-08-01
Previous studies used a constant cut-off indicator to distinguish distressed firms from non-distressed ones in one-stage prediction models. However, the distressed cut-off indicator should shift according to economic prosperity, rather than remaining fixed over time. This study focuses on Taiwanese listed firms and develops financial distress prediction models based upon a two-stage method. First, this study employs firm-specific financial ratios and market factors to measure the probability of financial distress based on discrete-time hazard models. Second, this paper focuses on macroeconomic factors and applies a rating transition matrix approach to determine the distressed cut-off indicator. The prediction models are developed using a training sample from 1987 to 2004, and their levels of accuracy are compared on a test sample from 2005 to 2007. For the one-stage prediction model, the model incorporating macroeconomic factors does not perform better than the one without them, suggesting that accuracy is not improved for one-stage models that pool firm-specific and macroeconomic factors together. As for the two-stage models, the negative credit cycle index implies worse economic conditions during the test period, so the distressed cut-off point is adjusted upward based on this negative credit cycle index. When the two-stage models employ the adjusted cut-off point to discriminate distressed firms from non-distressed ones, their misclassification error becomes lower than that of the one-stage models. The two-stage models presented in this paper thus have incremental usefulness in predicting financial distress.
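In a discrete-time hazard model, each firm contributes one observation per period until distress, and the per-period hazard is typically a logit of firm-specific covariates; the two-stage idea then classifies firms against a cut-off that shifts with the credit cycle. A minimal sketch of that structure (coefficients, covariates and the cut-off value are all hypothetical, not the paper's estimates):

```python
import math

# Sketch of a discrete-time hazard model: the period hazard is a logit
# of firm-specific ratios, and distress probability over the sample
# period follows from the product of period survival probabilities.

def period_hazard(debt_ratio, roa, b0=-3.0, b1=2.5, b2=-4.0):
    """Hypothetical logit hazard: rises with leverage, falls with ROA."""
    z = b0 + b1 * debt_ratio + b2 * roa
    return 1.0 / (1.0 + math.exp(-z))

def distress_probability(history):
    """P(distress within the sample) = 1 - prod_t (1 - h_t)."""
    surv = 1.0
    for debt, roa in history:
        surv *= 1.0 - period_hazard(debt, roa)
    return 1.0 - surv

# Hypothetical firm-periods: (debt ratio, ROA) deteriorating over time
firm = [(0.6, 0.05), (0.7, 0.01), (0.8, -0.03)]
p = distress_probability(firm)
cutoff = 0.25  # in the two-stage scheme this would rise in a downturn
print(round(p, 3), p > cutoff)
```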
NASA Astrophysics Data System (ADS)
Chiang, S. H.; Chang, K. T.; Chen, Y. C.; Chen, C. F.
2014-12-01
The study proposes an integrated landslide-runout model, iLIR-w (Integrated Landslide Initiation prediction and landslide Runout simulation at Watershed level), to assess landslide hazard during typhoons. For rainfall-induced landslides, many models have focused on the prediction of landslide locations, but few have incorporated the prediction of landslide timing and landslide runout in a single modeling framework. iLIR-w combines an integrated landslide model for predicting shallow landslides with a watershed-scale runout simulation to simulate the coupled processes related to landslide hazard. The study developed the model in a watershed in southern Taiwan, using landslide inventories prepared after eight historical typhoon events (2001-2008). The study then tested iLIR-w by incorporating typhoon rainfall forecasts from the Taiwan Cooperative Precipitation Ensemble Forecast Experiment (TAPEX) to provide landslide hazard early warnings 6, 12, 24, and 48 h before the arrival of Typhoon Morakot, which seriously damaged southern Taiwan in 2009. The model performs reasonably well in the prediction of landslide locations, timing and runout. It is therefore expected to be useful for landslide hazard prevention, and can be applied to other watersheds with similar environments, provided that reliable model parameters are available.
Numerical modelling for real-time forecasting of marine oil pollution and hazard assessment
NASA Astrophysics Data System (ADS)
De Dominicis, Michela; Pinardi, Nadia; Bruciaferri, Diego; Liubartseva, Svitlana
2015-04-01
(MEDESS4MS) system, an integrated operational multi-model oil spill prediction service that different users can employ to run simulations of oil spills at sea, even in real time, through a web portal. The MEDESS4MS system gathers different oil spill modelling systems and data from meteorological and ocean forecasting systems, as well as operational information on response equipment, together with environmental and socio-economic sensitivity maps. MEDSLIK-II has also been used to provide an assessment of the hazard stemming from operational oil ship discharges in the Southern Adriatic and Northern Ionian (SANI) Seas. Operational pollution from ships constitutes a movable hazard whose magnitude changes dynamically as a result of a number of external parameters varying in space and time (temperature, wind, sea currents). Simulations of oil releases have been performed with realistic oceanographic currents, and the results show that the oil pollution hazard distribution has an inherent spatial and temporal variability related to the specific flow field variability.
Probabilistic forecasts of debris-flow hazard at the regional scale with a combination of models.
NASA Astrophysics Data System (ADS)
Malet, Jean-Philippe; Remaître, Alexandre
2015-04-01
Debris flows are one of the many active slope-forming processes in the French Alps, where rugged and steep slopes mantled by various slope deposits offer a great potential for triggering hazardous events. A quantitative assessment of debris-flow hazard requires the estimation, in a probabilistic framework, of the spatial probability of occurrence of source areas, the spatial probability of runout areas, the temporal frequency of events, and their intensity. The main objective of this research is to propose a pipeline for the estimation of these quantities at the regional scale using a chain of debris-flow models. The work uses the experimental site of the Barcelonnette Basin (southern French Alps), where 26 active torrents have produced more than 150 debris-flow events since 1850, to develop and validate the methodology. First, a susceptibility assessment is performed to identify the debris-flow prone source areas. The most frequently used approach combines environmental factors with GIS procedures and statistical techniques, with or without detailed event inventories. Based on a 5 m DEM and its derivatives, together with information on slope lithology, engineering soils and land cover, the possible source areas are identified with a statistical logistic regression model. The performance of the statistical model is evaluated against the observed distribution of debris-flow events recorded after 1850 in the study area. The source areas in the three most active torrents (Riou-Bourdoux, Faucon, Sanières) are well identified by the model. Results are less convincing for three other active torrents (Bourget, La Valette and Riou-Chanal); this could be related to the type of debris-flow triggering mechanism, as the model seems better at spotting open-slope debris-flow source areas (e.g. scree slopes) but appears less efficient at identifying landslide-induced debris flows. Second, a susceptibility assessment is performed to estimate the possible runout distance
Diao, Guoqing; Zeng, Donglin; Yang, Song
2013-12-01
The proportional hazards assumption in the commonly used Cox model for censored failure time data is often violated in scientific studies. Yang and Prentice (2005) proposed a novel semiparametric two-sample model that includes the proportional hazards model and the proportional odds model as sub-models, and accommodates crossing survival curves. The model leaves the baseline hazard unspecified and the two model parameters can be interpreted as the short-term and long-term hazard ratios. Inference procedures were developed based on a pseudo score approach. Although extension to accommodate covariates was mentioned, no formal procedures have been provided or proved. Furthermore, the pseudo score approach may not be asymptotically efficient. We study the extension of the short-term and long-term hazard ratio model of Yang and Prentice (2005) to accommodate potentially time-dependent covariates. We develop efficient likelihood-based estimation and inference procedures. The nonparametric maximum likelihood estimators are shown to be consistent, asymptotically normal, and asymptotically efficient. Extensive simulation studies demonstrate that the proposed methods perform well in practical settings. The proposed method successfully captured the phenomenon of crossing hazards in a cancer clinical trial and identified a genetic marker with significant long-term effect missed by using the proportional hazards model on age-at-onset of alcoholism in a genetic study.
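The short-term/long-term structure described above can be illustrated numerically. One common way to write the Yang-Prentice (2005) two-sample model gives the hazard ratio HR(t) = t1*t2 / (t1 + (t2 - t1) * S0(t)), where S0 is the baseline survival function, so HR tends to the short-term ratio t1 as t approaches 0 and to the long-term ratio t2 as t grows; this is a hedged sketch of that limiting behaviour, with a hypothetical exponential baseline.

```python
import math

# Sketch: hazard ratio of the Yang-Prentice short-term/long-term model,
#   HR(t) = theta1 * theta2 / (theta1 + (theta2 - theta1) * S0(t)),
# so HR(0) = theta1 (short-term) and HR(inf) = theta2 (long-term).
# The exponential baseline survival S0 is a hypothetical choice.

def hazard_ratio(t, theta1, theta2, s0=lambda t: math.exp(-t)):
    s = s0(t)
    return theta1 * theta2 / (theta1 + (theta2 - theta1) * s)

# Crossing hazards: short-term ratio above 1, long-term ratio below 1,
# so the hazard ratio passes through 1 and the survival curves can cross.
t1, t2 = 2.0, 0.5
print(round(hazard_ratio(0.0, t1, t2), 3))   # 2.0  (= theta1)
print(round(hazard_ratio(50.0, t1, t2), 3))  # 0.5  (= theta2)
```

Setting theta1 = theta2 recovers a constant hazard ratio, i.e. the proportional hazards sub-model, which is why the Cox model is nested within this family.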
Modeling the combustion behavior of hazardous waste in a rotary kiln incinerator.
Yang, Yongxiang; Pijnenborg, Marc J A; Reuter, Markus A; Verwoerd, Joep
2005-01-01
Hazardous wastes have complex physical forms and chemical compositions and are normally incinerated in rotary kilns for safe disposal and energy recovery. In the rotary kiln, the multi-feed stream and the wide variation in thermal, physical, and chemical properties of the wastes make the incineration system highly heterogeneous, with severe temperature fluctuations and unsteady combustion chemistry. Incomplete combustion is often the consequence, and the process is difficult to control. In this article, modeling of the waste combustion using computational fluid dynamics (CFD) is described. Through CFD simulation, gas flow and mixing, turbulent combustion, and heat transfer inside the incinerator were predicted and visualized. As a first step, the waste in its various forms was modeled as a hydrocarbon-based virtual fuel mixture. The combustion of the simplified waste was then simulated with a seven-gas combustion model within a CFD framework. A comparison was made with a previous global three-gas combustion model, from which no chemical behavior can be derived. The distributions of temperature and chemical species were investigated, and the waste combustion model was validated with temperature measurements. Various operating conditions and their influence on incineration performance were then simulated. Through this research, a better process understanding and potential optimization of the design were attained.
NASA Astrophysics Data System (ADS)
Patra, A. K.; Connor, C.; Webley, P.; Jones, M.; Charbonnier, S. J.; Connor, L.; Gallo, S.; Bursik, M. I.; Valentine, G.; Hughes, C. G.; Aghakhani, H.; Renschler, C. S.; Kosar, T.
2014-12-01
We report here on an effort to improve the sustainability, robustness and usability of the core modeling and simulation tools housed in the collaboratory VHub.org and used in the study of complex volcanic behavior. In particular, we focus on tools that support large scale mass flows (TITAN2D), ash deposition/transport and dispersal (Tephra2 and PUFF), and lava flows (Lava2). These tools have become very popular in the community especially due to the availability of an online usage modality. The redevelopment of the tools to take advantage of new hardware and software advances was a primary thrust of the effort. However, as work began we reoriented the effort to also take advantage of significant new opportunities for supporting the complex workflows and use of distributed data resources that will enable effective and efficient hazard analysis.
Coupling Radar Rainfall Estimation and Hydrological Modelling For Flash-flood Hazard Mitigation
NASA Astrophysics Data System (ADS)
Borga, M.; Creutin, J. D.
Flood risk mitigation is accomplished through managing either or both the hazard and vulnerability. Flood hazard may be reduced through structural measures which alter the frequency of flood levels in the area. The vulnerability of a community to flood loss can be mitigated through changing or regulating land use and through flood warning and effective emergency response. When dealing with flash-flood hazard, it is generally accepted that the most effective way (and in many instances the only affordable in a sustainable perspective) to mitigate the risk is by reducing the vulnerability of the involved communities, in particular by implementing flood warning systems and community self-help programs. However, both the inherent characteristics of the atmospheric and hydrologic processes involved in flash-flooding and the changing societal needs provide a tremendous challenge to traditional flood forecasting and warning concepts. In fact, the targets of these systems are traditionally localised like urbanised sectors or hydraulic structures. Given the small spatial scale that characterises flash floods and the development of dispersed urbanisation, transportation, green tourism and water sports, human lives and property are exposed to flash flood risk in a scattered manner. This must be taken into consideration in flash flood warning strategies and the investigated region should be considered as a whole and every section of the drainage network as a potential target for hydrological warnings. Radar technology offers the potential to provide information describing rain intensities almost continuously in time and space. Recent research results indicate that coupling radar information to distributed hydrologic modelling can provide hydrologic forecasts at all potentially flooded points of a region. Nevertheless, very few flood warning services use radar data more than on a qualitative basis. After a short review of current understanding in this area, two
NASA Astrophysics Data System (ADS)
Dahal, Ranjan Kumar; Hasegawa, Shuichi; Nonomura, Atsuko; Yamanaka, Minoru; Dhakal, Santosh; Paudyal, Pradeep
2008-12-01
Landslide hazard mapping is a fundamental tool for disaster management activities in mountainous terrains. The main purpose of this study is to evaluate the predictive power of weights-of-evidence modelling in landslide hazard assessment in the Lesser Himalaya of Nepal. The modelling was performed within a geographical information system (GIS), to derive a landslide hazard map of the south-western marginal hills of the Kathmandu Valley. Thematic maps representing various factors (e.g., slope, aspect, relief, flow accumulation, distance to drainage, soil depth, engineering soil type, landuse, geology, distance to road and extreme one-day rainfall) that are related to landslide activity were generated, using field data and GIS techniques, at a scale of 1:10,000. Landslide events of the 1970s, 1980s, and 1990s were used to assess the Bayesian probability of landslides in each cell unit with respect to the causative factors. To assess the accuracy of the resulting landslide hazard map, it was correlated with a map of landslides triggered by the 2002 extreme rainfall events. The accuracy of the map was evaluated by various techniques, including the area under the curve, success rate and prediction rate. The landslide hazard values calculated from the old landslide data showed a prediction accuracy of > 80%. The analysis suggests that geomorphological and human-related factors play significant roles in determining the probability value, while geological factors play only minor roles. Finally, after rectification of the hazard values of the new landslides using those of the old landslides, a landslide hazard map with > 88% prediction accuracy was prepared. The methodology appears to have extensive applicability to the Lesser Himalaya of Nepal, with the limitation that the model's performance is contingent on the availability of data from past landslides.
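The Bayesian weights at the core of weights-of-evidence modelling are simple to compute. The sketch below uses made-up cell counts for a single binary evidence layer (say, "slope > 30 degrees") against a landslide inventory; the positive and negative weights are log-ratios of conditional probabilities, and their difference (the contrast) measures the layer's predictive power.

```python
import numpy as np

# Made-up counts for one binary evidence layer B and landslide inventory L
n_in_slide, n_out_slide = 30, 70       # landslide cells inside / outside class B
n_in_stable, n_out_stable = 100, 800   # stable cells inside / outside class B

p_b_given_l = n_in_slide / (n_in_slide + n_out_slide)          # P(B | L)
p_b_given_not_l = n_in_stable / (n_in_stable + n_out_stable)   # P(B | not L)

w_plus = np.log(p_b_given_l / p_b_given_not_l)                 # weight where B present
w_minus = np.log((1 - p_b_given_l) / (1 - p_b_given_not_l))    # weight where B absent
contrast = w_plus - w_minus                                    # overall predictive power
print(w_plus, w_minus, contrast)
```

In a full analysis these weights are computed per factor class and summed per cell (under a conditional-independence assumption) to give the posterior hazard index.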
Boissonnade, A; Hossain, Q; Kimball, J
2000-07-20
Since the mid-1980s, assessment of the wind and tornado risks at the Department of Energy (DOE) high and moderate hazard facilities has been based on the straight wind/tornado hazard curves given in UCRL-53526 (Coats, 1985). These curves were developed using a methodology that utilized a model, developed by McDonald, for severe winds at sub-tornado wind speeds and a separate model, developed by Fujita, for tornado wind speeds. For DOE sites not covered in UCRL-53526, wind and tornado hazard assessments are based on the criteria outlined in DOE-STD-1023-95 (DOE, 1996), utilizing the methodology in UCRL-53526. Subsequent to the publication of UCRL-53526, in a study sponsored by the Nuclear Regulatory Commission (NRC), the Pacific Northwest Laboratory developed tornado wind hazard curves for the contiguous United States, NUREG/CR-4461 (Ramsdell, 1986). Because of the different modeling assumptions and underlying data used to develop the tornado wind information, the wind speeds at specified exceedance levels, at a given location, based on the methodology in UCRL-53526, differ from those based on the methodology in NUREG/CR-4461. In 1997, Lawrence Livermore National Laboratory (LLNL) was funded by the DOE to review the current methodologies for characterizing tornado wind hazards and to develop a state-of-the-art wind/tornado characterization methodology based on probabilistic hazard assessment techniques and current historical wind data. This report describes the process of developing the methodology and the database of relevant tornado information needed to implement the methodology. It also presents the tornado wind hazard curves obtained from the application of the method to DOE sites throughout the contiguous United States.
NASA Astrophysics Data System (ADS)
Biswajeet, Pradhan; Saro, Lee
The aim of this study is to evaluate landslide hazard in the Selangor area, Malaysia using optical remote sensing data and a Geographic Information System (GIS). Landslide locations were identified in the study area from interpretation of aerial photographs and field surveys. Topographical and geological data and satellite images were collected, processed and constructed into a spatial database using GIS and image processing. A total of 10 landslide occurrence factors were selected: topographic slope, aspect and curvature; distance from drainage; lithology; distance from lineament; land cover from TM satellite images; the vegetation index value from Landsat satellite images; and precipitation data. These factors were analyzed using an advanced artificial neural network model to generate the landslide hazard map. Each factor's weight was determined by the back-propagation training method. Then the landslide hazard indices were calculated using the trained back-propagation weights, and finally the landslide hazard map was generated using GIS tools. Landslide locations were used to verify results of the landslide hazard map, and the verification showed 82.92% accuracy, indicating sufficient agreement between the predicted hazard map and the existing data on landslide areas.
NASA Astrophysics Data System (ADS)
Allen, S. K.; Schneider, D.; Owens, I. F.
2009-03-01
Flood and mass movements originating from glacial environments are particularly devastating in populated mountain regions of the world, but in the remote Mount Cook region of New Zealand's Southern Alps minimal attention has been given to these processes. Glacial environments are characterized by high mass turnover and combined with changing climatic conditions, potential problems and process interactions can evolve rapidly. Remote sensing based terrain mapping, geographic information systems and flow path modelling are integrated here to explore the extent of ice avalanche, debris flow and lake flood hazard potential in the Mount Cook region. Numerous proglacial lakes have formed during recent decades, but well vegetated, low gradient outlet areas suggest catastrophic dam failure and flooding is unlikely. However, potential impacts from incoming mass movements of ice, debris or rock could lead to dam overtopping, particularly where lakes are forming directly beneath steep slopes. Physically based numerical modeling with RAMMS was introduced for local scale analyses of rock avalanche events, and was shown to be a useful tool for establishing accurate flow path dynamics and estimating potential event magnitudes. Potential debris flows originating from steep moraine and talus slopes can reach road and built infrastructure when worst-case runout distances are considered, while potential effects from ice avalanches are limited to walking tracks and alpine huts located in close proximity to initiation zones of steep ice. Further local scale studies of these processes are required, leading towards a full hazard assessment, and changing glacial conditions over coming decades will necessitate ongoing monitoring and reassessment of initiation zones and potential impacts.
... you throw these substances away, they become hazardous waste. Some hazardous wastes come from products in our homes. Our garbage can include such hazardous wastes as old batteries, bug spray cans and paint ...
A fast, calibrated model for pyroclastic density currents kinematics and hazard
NASA Astrophysics Data System (ADS)
Esposti Ongaro, Tomaso; Orsucci, Simone; Cornolti, Fulvio
2016-11-01
Multiphase flow models represent valuable tools for the study of the complex, non-equilibrium dynamics of pyroclastic density currents. Particle sedimentation, flow stratification and rheological changes, depending on the flow regime, interaction with topographic obstacles, turbulent air entrainment, buoyancy reversal, and other complex features of pyroclastic currents can be simulated in two and three dimensions, by exploiting efficient numerical solvers and the improved computational capability of modern supercomputers. However, numerical simulations of polydisperse gas-particle mixtures are quite computationally expensive, so that their use in hazard assessment studies (where there is the need of evaluating the probability of hazardous actions over hundreds of possible scenarios) is still challenging. To this aim, a simplified integral (box) model can be used, under the appropriate hypotheses, to describe the kinematics of pyroclastic density currents over a flat topography, their scaling properties and their depositional features. In this work, multiphase flow simulations are used to evaluate integral model approximations, to calibrate its free parameters and to assess the influence of the input data on the results. Two-dimensional numerical simulations describe the generation and decoupling of a dense, basal layer (formed by progressive particle sedimentation) from the dilute transport system. In the Boussinesq regime (i.e., for solid mass fractions below about 0.1), the current Froude number (i.e., the ratio between the current inertia and buoyancy) does not strongly depend on initial conditions and is consistent with that measured in laboratory experiments (i.e., between 1.05 and 1.2). For higher density ratios (solid mass fraction in the range 0.1-0.9) but still in a relatively dilute regime (particle volume fraction lower than 0.01), numerical simulations demonstrate that the box model is still applicable, but the Froude number depends on the reduced
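The kinematics of an integral (box) model of the kind calibrated above can be written down in a few lines. For a planar current of fixed area per unit width A = x*h, a front condition dx/dt = Fr*sqrt(g'*h) integrates analytically to x(t) = (1.5*Fr*sqrt(g'*A)*t)^(2/3). The sketch below (illustrative parameters, not the paper's calibration) checks a simple forward-Euler integration against that closed form.

```python
import numpy as np

# Box-model sketch for a planar gravity current: front condition
# dx/dt = Fr * sqrt(g' * h) with area conservation A = x * h.
Fr, g_prime, A = 1.1, 2.0, 1.0e4   # Froude number, reduced gravity, area/unit width

def front_numeric(t_end, n=20000):
    """Forward-Euler integration of the front position."""
    dt = t_end / n
    x = 1.0                        # small initial length avoids division by zero
    for _ in range(n):
        h = A / x                  # current depth from area conservation
        x += Fr * np.sqrt(g_prime * h) * dt
    return x

t_end = 100.0
x_analytic = (1.5 * Fr * np.sqrt(g_prime * A) * t_end) ** (2.0 / 3.0)
x_numeric = front_numeric(t_end)
print(x_numeric, x_analytic)       # both follow the t**(2/3) spreading law
```

The calibration question in the abstract amounts to choosing the Froude number Fr (and its dependence on density ratio) so that this kinematic law matches the multiphase simulations.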
Kowalski, Amanda E.
2015-01-01
Insurance induces a tradeoff between the welfare gains from risk protection and the welfare losses from moral hazard. Empirical work traditionally estimates each side of the tradeoff separately, potentially yielding mutually inconsistent results. I develop a nonlinear budget set model of health insurance that allows for both simultaneously. Nonlinearities in the budget set arise from deductibles, coinsurance rates, and stoplosses that alter moral hazard as well as risk protection. I illustrate the properties of my model by estimating it using data on employer sponsored health insurance from a large firm. PMID:26664035
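The nonlinearities in the budget set come from the out-of-pocket price schedule. The sketch below (with made-up contract parameters, not those of the studied firm) shows the standard deductible/coinsurance/stoploss structure whose kinks generate variation in the marginal price of care.

```python
# Illustrative out-of-pocket (OOP) schedule behind a nonlinear budget set:
# full price below the deductible, a coinsurance share above it, all capped
# by a stoploss.  Parameter values are hypothetical.
def out_of_pocket(spending, deductible=500.0, coinsurance=0.2, stoploss=2000.0):
    below = min(spending, deductible)                  # paid in full below deductible
    above = max(0.0, spending - deductible)            # coinsurance share above it
    return min(stoploss, below + coinsurance * above)  # stoploss caps total OOP

for s in (400.0, 1000.0, 10000.0):
    print(s, out_of_pocket(s))
```

The marginal price faced by the consumer is 1 below the deductible, 0.2 between the deductible and the stoploss, and 0 beyond it; it is this piecewise schedule that simultaneously determines risk protection and moral hazard.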
A comparative analysis of hazard models for predicting debris flows in Madison County, VA
Morrissey, Meghan M.; Wieczorek, Gerald F.; Morgan, Benjamin A.
2001-01-01
During the rainstorm of June 27, 1995, roughly 330-750 mm of rain fell within a sixteen-hour period, initiating floods and over 600 debris flows in a small area (130 km²) of Madison County, Virginia. Field studies showed that the majority (70%) of these debris flows initiated with a thickness of 0.5 to 3.0 m in colluvium on slopes from 17° to 41° (Wieczorek et al., 2000). This paper evaluated and compared the approaches of SINMAP, LISA, and Iverson's (2000) transient response model for slope stability analysis by applying each model to the landslide data from Madison County. Of these three stability models, only Iverson's transient response model evaluated stability conditions as a function of time and depth. Iverson's model would be the preferred method of the three to evaluate landslide hazards on a regional scale in areas prone to rain-induced landslides, as it considers both the transient and spatial response of pore pressure in its calculation of slope stability. The stability calculation used in SINMAP and LISA is similar and utilizes probability distribution functions for certain parameters. Unlike SINMAP, which only considers soil cohesion, internal friction angle and rainfall-rate distributions, LISA allows the use of distributed data for all parameters, so it is the preferred model to evaluate slope stability over SINMAP. Results from all three models suggested similar soil and hydrologic properties for triggering the landslides that occurred during the 1995 storm in Madison County, Virginia. The colluvium probably had cohesion of less than 2 kPa. The root-soil system is above the failure plane, and consequently root strength and tree surcharge had negligible effect on slope stability. The result that the final location of the water table was near the ground surface is supported by the water budget analysis of the rainstorm conducted by Smith et al. (1996).
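All three models build on the textbook infinite-slope factor of safety; they differ mainly in how pore pressure and parameter uncertainty are treated. The sketch below computes that factor of safety with illustrative parameters chosen near the colluvium properties reported above (2 kPa cohesion, ~30 degree slope), showing how a rising water table drives FS below 1.

```python
import math

# Infinite-slope factor of safety (textbook form, not any one model's code).
def factor_of_safety(c, phi_deg, theta_deg, z, m, gamma=19.0, gamma_w=9.81):
    """c: cohesion [kPa]; phi: friction angle; theta: slope angle [deg];
    z: soil depth [m]; m: saturated fraction of the soil column (0..1);
    gamma, gamma_w: unit weights of soil and water [kN/m^3]."""
    phi, theta = math.radians(phi_deg), math.radians(theta_deg)
    resisting = c + (gamma - m * gamma_w) * z * math.cos(theta) ** 2 * math.tan(phi)
    driving = gamma * z * math.sin(theta) * math.cos(theta)
    return resisting / driving

fs_dry = factor_of_safety(c=2.0, phi_deg=35.0, theta_deg=30.0, z=2.0, m=0.0)
fs_wet = factor_of_safety(c=2.0, phi_deg=35.0, theta_deg=30.0, z=2.0, m=1.0)
print(fs_dry, fs_wet)   # stable when dry, failing when fully saturated
```

Iverson's transient model effectively replaces the static saturation fraction m with a pore-pressure field that evolves in time and depth during the storm.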
An investigation on the modelling of kinetics of thermal decomposition of hazardous mercury wastes.
Busto, Yailen; M G Tack, Filip; Peralta, Luis M; Cabrera, Xiomara; Arteaga-Pérez, Luis E
2013-09-15
The kinetics of mercury removal from solid wastes generated by chlor-alkali plants were studied. The reaction order and model-free method with an isoconversional approach were used to estimate the kinetic parameters and reaction mechanism that apply to the thermal decomposition of hazardous mercury wastes. As a first approach to the understanding of thermal decomposition for this type of systems (poly-disperse and multi-component), a novel scheme of six reactions was proposed to represent the behaviour of mercury compounds in the solid matrix during the treatment. An integration-optimization algorithm was used in the screening of nine mechanistic models to develop kinetic expressions that best describe the process. The kinetic parameters were calculated by fitting each of these models to the experimental data. It was demonstrated that the D₁-diffusion mechanism appeared to govern the process at 250°C and high residence times, whereas at 450°C a combination of the diffusion mechanism (D₁) and the third order reaction mechanism (F3) fitted the kinetics of the conversions. The developed models can be applied in engineering calculations to dimension the installations and determine the optimal conditions to treat a mercury containing sludge.
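The model-screening step described above can be illustrated with synthetic isothermal data. Under the D1 (one-dimensional diffusion) mechanism the integral model is g(alpha) = alpha² = k·t, linear in time; the third-order F3 form is g(alpha) = 0.5·((1-alpha)⁻² - 1). The sketch below (made-up rate constant, not the paper's fitted values) generates D1 data and checks which g(alpha) is linear in t.

```python
import numpy as np

# Generate conversion data that follow D1 kinetics: g(alpha) = alpha**2 = k*t
k = 1.0e-3                                   # assumed isothermal rate constant [1/s]
t = np.linspace(30.0, 900.0, 60)             # time [s]; k*t stays below 1
alpha = np.sqrt(k * t)                       # conversion under the D1 mechanism

def r_squared(g):
    """Linearity of the integral model g(alpha) against time."""
    return np.corrcoef(t, g)[0, 1] ** 2

r2_d1 = r_squared(alpha ** 2)                        # D1: g = alpha^2
r2_f3 = r_squared(0.5 * ((1.0 - alpha) ** -2 - 1.0)) # F3: third-order reaction
print(r2_d1, r2_f3)                          # D1 is linear in t; F3 is not
```

The screening in the paper does this across nine mechanistic models and both temperatures, which is how the D1 and mixed D1/F3 regimes were identified.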
Ryu, Hyeuk; Luco, Nicolas; Baker, Jack W.; Karaca, Erdem
2008-01-01
A methodology was recently proposed for the development of hazard-compatible building fragility models using parameters of capacity curves and damage state thresholds from HAZUS (Karaca and Luco, 2008). In the methodology, HAZUS curvilinear capacity curves were used to define nonlinear dynamic SDOF models that were subjected to the nonlinear time history analysis instead of the capacity spectrum method. In this study, we construct a multilinear capacity curve with negative stiffness after an ultimate (capping) point for the nonlinear time history analysis, as an alternative to the curvilinear model provided in HAZUS. As an illustration, here we propose parameter values of the multilinear capacity curve for a moderate-code low-rise steel moment resisting frame building (labeled S1L in HAZUS). To determine the final parameter values, we perform nonlinear time history analyses of SDOF systems with various parameter values and investigate their effects on resulting fragility functions through sensitivity analysis. The findings improve capacity curves and thereby fragility and/or vulnerability models for generic types of structures.
Traas, T.P.; Janse, J.H.; Brock, T.C.M.; Aldenberg, T.
1994-12-31
Ecotoxicological risk assessments usually focus on fate of chemicals or extrapolation of single-species bioassays. Several attempts have been made to integrate these two approaches, in order to predict ecosystem effects. Ecological effects of chemicals are notoriously difficult to predict. Extensive mesocosm experiments were available to study ecological effects of chlorpyrifos. The authors integrated fate and concentration-response curves of chlorpyrifos and cadmium in an ecotoxicological model. The model consists of compartments for water, sediment, and species lumped in functional groups. Cycling of organic matter is the backbone of the model, acting as carrier for the toxicant. Concentration-response curves of chlorpyrifos were determined on species present in the mesocosms. Transient effects of a single dose of chlorpyrifos on the mesocosms can be simulated with the model. Based on measured direct and indirect effects in the mesocosms, the authors propose a method to calculate the maximum allowable concentration for ecological damage. The authors call this the Hazardous Concentration for the Ecosystem (HCE). The HCE is reached if biomass anywhere in the food web falls outside the normal biomass bandwidth.
Lava flow hazard modeling during the 2014-2015 Fogo eruption, Cape Verde
NASA Astrophysics Data System (ADS)
Cappello, Annalisa; Ganci, Gaetana; Calvari, Sonia; Pérez, Nemesio M.; Hernández, Pedro A.; Silva, Sónia V.; Cabral, Jeremias; Del Negro, Ciro
2016-04-01
Satellite remote sensing techniques and lava flow forecasting models have been combined to enable a rapid response during effusive crises at poorly monitored volcanoes. Here we used the HOTSAT satellite thermal monitoring system and the MAGFLOW lava flow emplacement model to forecast lava flow hazards during the 2014-2015 Fogo eruption. In many ways this was one of the major effusive eruption crises of recent years, since the lava flows actually invaded populated areas. Combining satellite data and modeling allowed mapping of the probable evolution of lava flow fields while the eruption was ongoing and rapidly gaining as much relevant information as possible. HOTSAT was used to promptly analyze MODIS and SEVIRI data to output hot spot location, lava thermal flux, and effusion rate estimation. This output was used to drive the MAGFLOW simulations of lava flow paths and to continuously update flow simulations. We also show how Landsat 8 OLI and EO-1 ALI images complement the field observations for tracking the flow front position through time and adding considerable data on lava flow advancement to validate the results of numerical simulations. The integration of satellite data and modeling offers great promise in providing a unified and efficient system for global assessment and real-time response to effusive eruptions, including (i) the current state of the effusive activity, (ii) the probable evolution of the lava flow field, and (iii) the potential impact of lava flows.
NASA Astrophysics Data System (ADS)
Weatherill, Graeme; Garcia, Julio; Poggi, Valerio; Chen, Yen-Shin; Pagani, Marco
2016-04-01
The Global Earthquake Model (GEM) has, since its inception in 2009, made many contributions to the practice of seismic hazard modeling in different regions of the globe. The OpenQuake-engine (hereafter referred to simply as OpenQuake), GEM's open-source software for calculation of earthquake hazard and risk, has found application in many countries, spanning a diversity of tectonic environments. GEM itself has produced a database of national and regional seismic hazard models, harmonizing into OpenQuake's own definition the varied seismogenic sources found therein. The characterization of active faults in probabilistic seismic hazard analysis (PSHA) is at the centre of this process, motivating many of the developments in OpenQuake and presenting hazard modellers with the challenge of reconciling seismological, geological and geodetic information for the different regions of the world. Faced with these challenges, and from the experience gained in the process of harmonizing existing models of seismic hazard, four critical issues are addressed. The challenge GEM has faced in the development of software is how to define a representation of an active fault (both in terms of geometry and earthquake behaviour) that is sufficiently flexible to adapt to different tectonic conditions and levels of data completeness. By exploring the different fault typologies supported by OpenQuake we illustrate how seismic hazard calculations can, and do, take into account complexities such as geometrical irregularity of faults in the prediction of ground motion, highlighting some of the potential pitfalls and inconsistencies that can arise. This exploration leads to the second main challenge in active fault modeling, what elements of the fault source model impact most upon the hazard at a site, and when does this matter? Through a series of sensitivity studies we show how different configurations of fault geometry, and the corresponding characterisation of near-fault phenomena (including
NASA Astrophysics Data System (ADS)
Zhong, Q.; Shi, B.; Meng, L.
2010-12-01
North China is one of the most seismically active regions in mainland China. Moderate to large earthquakes have occurred here throughout history, resulting in huge losses of human life and property. With the probabilistic seismic hazard analysis (PSHA) approach, we investigate the influence of different seismic environments, incorporating both near surface soil properties and distributed historical and modern seismicity. A simplified seismic source model, derived with consideration of regional active fault distributions, is presented for the North China region. The spatially distributed seismicity model of PSHA is used to calculate the level of ground motion likely to be exceeded in a given time period. Following Frankel's (1995) approach of a circular Gaussian smoothing procedure, in the PSHA calculation we propose a fault-rupture-oriented elliptical Gaussian smoothing, with the assumption that earthquakes occur on faults or fault zones of past earthquakes, to delineate the potential seismic zones (Lapajine et al., 2003). This is combined with regional active fault strike directions and the seismicity distribution patterns. The Next Generation Attenuation (NGA) model (Boore et al., 2007) is used in generating hazard maps for PGA with 2%, 5%, and 10% probability of being exceeded in 50 years, and the resultant hazard map is compared with the result given by the Global Seismic Hazard Assessment Project (GSHAP). There is general agreement in PGA distribution patterns between the results of this study and the GSHAP map that used the same seismic source zones. However, peak ground accelerations predicted in this study are typically 10-20% less than those of the GSHAP, and the seismic source models, such as fault distributions and regional seismicity used in the GSHAP, seem to be oversimplified. We believe this study represents an improvement on prior seismic hazard evaluations for the region. In addition to the updated input data, we believe that, by
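The smoothed-seismicity idea behind both the circular (Frankel-style) and elliptical schemes can be shown on a toy grid. In the sketch below, earthquake counts per cell are smoothed with a Gaussian kernel; an isotropic sigma reproduces the circular case, while per-axis sigmas give a simple axis-aligned analogue of the elliptical, fault-strike-oriented smoothing (a full implementation would rotate the kernel to the strike direction).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Toy seismicity grid: 10 events concentrated in one cell
counts = np.zeros((41, 41))
counts[20, 20] = 10.0

# Circular (isotropic) smoothing vs. an elongated, axis-aligned kernel
circular = gaussian_filter(counts, sigma=2.0)
elliptical = gaussian_filter(counts, sigma=(1.0, 4.0))  # stretched along one axis

# The normalized kernel redistributes, but conserves, the total event rate
print(circular.sum(), elliptical.sum())
```

Hazard at each site is then computed from the smoothed rate field through the ground-motion model, so the kernel's shape directly controls how hazard concentrates along mapped fault zones.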
Mathematical models and methods of risk assessment in ecologically hazardous industries
Mikhalevich, V.S.; Knopov, P.S.; Golodnikov, A.N.
1994-11-01
Analysis of critical industrial situations leading to accidents or catastrophes has shown that the main factors responsible for accidents include technological inadequacy of ecologically hazardous facilities, equipment design errors, and insufficient preventive maintenance of facilities with an enhanced level of environmental hazard. The scale of the accident after-effects essentially depends on the location of the ecologically hazardous facility, timely development of preventive measures, and prompt implementation of these measures in an emergency, in compliance with strict deadlines for decision making.
A global vegetation corrected SRTM DEM for use in hazard modelling
NASA Astrophysics Data System (ADS)
Bates, P. D.; O'Loughlin, F.; Neal, J. C.; Durand, M. T.; Alsdorf, D. E.; Paiva, R. C. D.
2015-12-01
We present the methodology and results from the development of a near-global 'bare-earth' Digital Elevation Model (DEM) derived from the Shuttle Radar Topography Mission (SRTM) data. Digital Elevation Models are the most important input for hazard modelling, as the DEM quality governs the accuracy of the model outputs. While SRTM is currently the best near-globally [60N to 60S] available DEM, it requires adjustments to reduce the vegetation contamination and make it useful for hazard modelling over heavily vegetated areas (e.g. tropical wetlands). Unlike previous methods of accounting for vegetation contamination, which concentrated on correcting relatively small areas and usually applied a static adjustment, we account for vegetation contamination globally and apply a spatially varying correction, based on information about canopy height and density. Our new 'Bare-Earth' SRTM DEM combines multiple remote sensing datasets, including ICESat GLA14 ground elevations, the vegetation continuous field dataset as a proxy for penetration depth of SRTM and a global vegetation height map, to remove the vegetation artefacts present in the original SRTM DEM. In creating the final 'bare-earth' SRTM DEM dataset, we produced three different 'bare-earth' SRTM products. The first applies global parameters, while the second and third products apply parameters that are regionalised based on either climatic zones or vegetation types, respectively. We also tested two different canopy density proxies of different spatial resolution. Using ground elevations obtained from the ICESat GLA14 satellite altimeter, we calculate the residual errors for the raw SRTM and the three 'bare-earth' SRTM products and compare performances. The three 'bare-earth' products all show large improvements over the raw SRTM in vegetated areas with the overall mean bias reduced by between 75 and 92% from 4.94 m to 0.40 m. The overall standard deviation is reduced by between 29 and 33% from 7.12 m to 4.80 m. As
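The essence of the spatially varying correction can be sketched in a few lines. The version below is a deliberate simplification of the method described above (the real product calibrates against ICESat and uses global canopy-height and canopy-density maps): each SRTM cell is lowered by the canopy height scaled by a penetration factor that grows with vegetation density, leaving bare cells untouched. All array values are made up.

```python
import numpy as np

# Toy 1-D transect: raw SRTM elevations, canopy heights and cover fractions
srtm = np.array([120.0, 135.0, 150.0, 110.0])      # raw SRTM elevations [m]
canopy_height = np.array([0.0, 25.0, 30.0, 0.0])   # vegetation height map [m]
veg_density = np.array([0.0, 0.8, 0.5, 0.0])       # canopy cover fraction (0..1)

# Assumed linear link between cover fraction and the share of canopy
# height that contaminates the radar surface (a stand-in for the
# vegetation-continuous-field penetration proxy)
penetration = 0.6 * veg_density
bare_earth = srtm - canopy_height * penetration
print(bare_earth)      # lowered only where vegetation is present
```

Regionalising the `penetration` relationship by climate zone or vegetation type corresponds to the second and third products described in the abstract.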
AschFlow - A dynamic landslide run-out model for medium scale hazard analysis.
NASA Astrophysics Data System (ADS)
Luna, Byron Quan; Blahut, Jan; van Asch, Theo; van Westen, Cees; Kappes, Melanie
2015-04-01
Landslides and debris flow hazard assessments require a scale-dependent analysis in order to mitigate damage and other negative consequences at the respective scales of occurrence. Medium or large scale landslide run-out modelling for many possible landslide initiation areas has been a cumbersome task in the past. This arises from the difficulty of precisely defining the location and volume of the released mass and from the inability of run-out models to compute the displacement for a large number of individual initiation areas (computationally exhaustive). Most of the existing physically based run-out models have complications in handling such situations, and therefore empirical methods have been used as a practical means to predict landslide mobility at a medium scale (1:10,000 to 1:50,000). In this context, a simple medium scale numerical model for rapid mass movements in urban and mountainous areas was developed. The deterministic nature of the approach makes it possible to calculate the velocity, height and increase in mass by erosion, resulting in the estimation of various forms of impacts exerted by debris flows at the medium scale. The established and implemented model ("AschFlow") is a 2-D one-phase continuum model that simulates the entrainment, spreading and deposition processes of a landslide or debris flow at a medium scale. The flow is thus treated as a single phase material, whose behavior is controlled by rheology (e.g. Voellmy or Bingham). The developed regional model "AschFlow" was applied and evaluated in well documented areas with known past debris flow events.
Simulation of the 1992 Tessina landslide by a cellular automata model and future hazard scenarios
NASA Astrophysics Data System (ADS)
Avolio, MV; Di Gregorio, Salvatore; Mantovani, Franco; Pasuto, Alessandro; Rongo, Rocco; Silvano, Sandro; Spataro, William
Cellular Automata are a powerful tool for modelling natural and artificial systems, which can be described in terms of local interactions of their constituent parts. Some types of landslides, such as debris/mud flows, match these requirements. The 1992 Tessina landslide has characteristics (slow mud flows) which make it appropriate for modelling by means of Cellular Automata, except for the initial phase of detachment, which is caused by a rotational movement that has no effect on the mud flow path. This paper presents the Cellular Automata approach for modelling slow mud/debris flows, the results of simulation of the 1992 Tessina landslide and future hazard scenarios based on the volumes of masses that could be mobilised in the future. They were obtained by adapting the Cellular Automata Model called SCIDDICA, which has been validated for very fast landslides. SCIDDICA was applied by modifying the general model to the peculiarities of the Tessina landslide. The simulations obtained by this initial model were satisfactory for forecasting the surface covered by mud. Calibration of the model, which was obtained from simulation of the 1992 event, was used for forecasting flow expansion during possible future reactivation. For this purpose two simulations concerning the collapse of about 1 million m³ of material were tested. In one of these, the presence of a containment wall built in 1992 for the protection of the Tarcogna hamlet was inserted. The results obtained identified the conditions of high risk affecting the villages of Funes and Lamosano and show that this Cellular Automata approach can have a wide range of applications for different types of mud/debris flows.
Hazard function theory for nonstationary natural hazards
NASA Astrophysics Data System (ADS)
Read, L.; Vogel, R. M.
2015-12-01
Studies from the natural hazards literature indicate that many natural processes, including wind speeds, landslides, wildfires, precipitation, streamflow and earthquakes, show evidence of nonstationary behavior such as trends in magnitudes through time. Traditional probabilistic analysis of natural hazards based on partial duration series (PDS) generally assumes stationarity in the magnitudes and arrivals of events, i.e. that the probability of exceedance is constant through time. Given evidence of trends and the consequent expected growth in devastating impacts from natural hazards across the world, new methods are needed to characterize their probabilistic behavior. The field of hazard function analysis (HFA) is ideally suited to this problem because its primary goal is to describe changes in the exceedance probability of an event over time. HFA is widely used in medicine, manufacturing, actuarial statistics, reliability engineering, economics, and elsewhere. HFA provides a rich theory to relate the natural hazard event series (x) with its failure time series (t), enabling computation of corresponding average return periods and reliabilities associated with nonstationary event series. This work investigates the suitability of HFA to characterize nonstationary natural hazards whose PDS magnitudes are assumed to follow the widely applied Poisson-GP model. We derive a 2-parameter Generalized Pareto hazard model and demonstrate how metrics such as reliability and average return period are impacted by nonstationarity and discuss the implications for planning and design. Our theoretical analysis linking hazard event series x, with corresponding failure time series t, should have application to a wide class of natural hazards.
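The 2-parameter Generalized Pareto hazard model mentioned in the abstract has a simple closed form, since the hazard function is h(x) = f(x)/(1 − F(x)). The sketch below is illustrative only (standard shape/scale parameterization with location fixed at 0), not the authors' implementation:

```python
import numpy as np

def gp_survival(x, shape, scale):
    """Exceedance probability 1 - F(x) of a 2-parameter Generalized
    Pareto distribution (location fixed at 0)."""
    x = np.asarray(x, dtype=float)
    if abs(shape) < 1e-12:                      # shape -> 0: exponential limit
        return np.exp(-x / scale)
    return (1.0 + shape * x / scale) ** (-1.0 / shape)

def gp_hazard(x, shape, scale):
    """Hazard h(x) = f(x) / (1 - F(x)); for the GP this collapses to
    the closed form 1 / (scale + shape * x) on the support."""
    return 1.0 / (scale + shape * np.asarray(x, dtype=float))
```

For shape = 0 the hazard is constant at 1/scale (the exponential, stationary-like case); a nonzero shape makes the exceedance hazard vary with magnitude, which is what feeds the nonstationary return-period and reliability calculations.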
Conceptual model of volcanism and volcanic hazards of the region of Ararat valley, Armenia
NASA Astrophysics Data System (ADS)
Meliksetian, Khachatur; Connor, Charles; Savov, Ivan; Connor, Laura; Navasardyan, Gevorg; Manucharyan, Davit; Ghukasyan, Yura; Gevorgyan, Hripsime
2015-04-01
Armenia and the adjacent volcanically active regions in Iran, Turkey and Georgia are located in the collision zone between the Arabian and Eurasian lithospheric plates. The majority of studies of regional collision-related volcanism use the model proposed by Keskin (2003), where volcanism is driven by Neo-Tethyan slab break-off. In Armenia, >500 Quaternary-Holocene volcanoes of the Gegham, Vardenis and Syunik volcanic fields are hosted within pull-apart structures formed by active faults and their segments (Karakhanyan et al., 2002), while the tectonic position of the large-volume basalt-dacite Aragats volcano and its peripheral volcanic plateaus is different: its position away from major fault lines necessitates a more complex volcano-tectonic setup. Our detailed volcanological, petrological and geochemical studies provide insight into the nature of such volcanic activity in the region of Ararat Valley. Most magmas, such as those erupted in Armenia, are volatile-poor and erupt fairly hot. Here we report newly discovered tephra sequences in Ararat valley that were erupted from the historically active Ararat stratovolcano and provide evidence for explosive eruption of young, mid-K2O calc-alkaline and volatile-rich (>4.6 wt% H2O; amphibole-bearing) magmas. Such young eruptions, in addition to the ignimbrite and lava flow hazards from Gegham and Aragats, present a threat to >1.4 million people (~½ of the population of Armenia). We will report numerical simulations of potential volcanic hazards for the region of Ararat valley near Yerevan, including tephra fallout, lava flows and the opening of new vents. Connor et al. (2012) J. Applied Volcanology 1:3, 1-19; Karakhanian et al. (2002), JVGR, 113, 319-344; Keskin, M. (2003) Geophys. Res. Lett. 30, 24, 8046.
NASA Astrophysics Data System (ADS)
Plesko, C.; Weaver, R.; Clement, R.; Bradley, P.; Huebner, W.
The mitigation of impact hazards resulting from Earth-approaching asteroids and comets has received much attention in the popular press. However, many questions remain about the near-term and long-term feasibility and appropriate application of all proposed methods. Recent and ongoing ground- and space-based observations of small solar system body composition and dynamics have revolutionized our understanding of these bodies (e.g., Ryan (2000), Fujiwara et al. (2006), and Jedicke et al. (2006)). Ongoing increases in computing power and algorithm sophistication make it possible to calculate the response of these inhomogeneous objects to proposed mitigation techniques. Here we present the first phase of a comprehensive hazard mitigation planning effort undertaken by Southwest Research Institute and Los Alamos National Laboratory. We begin by reviewing the parameter space of the objects' physical and chemical composition and trajectory. We then use the radiation hydrocode RAGE (Gittings et al. 2008), Monte Carlo N-Particle (MCNP) radiation transport (see Clement et al., this conference), and N-body dynamics codes to explore the effects these variations in object properties have on the coupling of energy into the object from a variety of mitigation techniques, including deflection and disruption by nuclear and conventional munitions, and a kinetic impactor. Preliminary results for models of the deflection of a 100 m basalt sphere by a 100 kt nuclear burst (Bradley et al., LPSC 2009) are encouraging: a 40 cm/s velocity away from the burst is imparted to the object's center of mass without disruption. Further results will be presented at the meeting.
Cellular parameters for track structure modelling of radiation hazard in space
NASA Astrophysics Data System (ADS)
Hollmark, M.; Lind, B.; Gudowska, I.; Waligorski, M.
Based on irradiation with 45 MeV/u N and B ions and with Co-60 gamma rays, track structure cellular parameters have been fitted for V79-379A Chinese hamster lung fibroblasts and for human melanoma cells (AA wtp53). These sets of parameters will be used to develop a calculation of radiation hazard in deep space, based on the system for evaluating, summing and reporting occupational exposures proposed in 1967 by a subcommittee of the NCRP, but never issued as an NCRP report. The key concepts of this system were: i) expression of the risk from all radiation exposures relative to that from a whole-body exposure to Co-60 radiation; ii) relating the risk from any exposure to that of the standard (Co-60) radiation through an "effectiveness factor" (ef), a product of sub-factors representing radiation quality, body region irradiated, and depth of penetration of radiation; the product of absorbed dose and ef being termed the "exposure record unit" (eru); iii) development of ef values and a cumulative eru record for external and internal emitters. Application of this concept should provide a better description of the Gy-equivalent presently in use by NASA for evaluating risk in deep space than the equivalent dose following ICRP-60 recommendations. Dose and charged-particle fluence levels encountered in space, particularly after Solar Particle Events, require that deterministic rather than stochastic effects be considered. Also, synergistic effects due to simultaneous multiple charged-particle transfers may have to be considered. Thus, models applicable in radiotherapy, where the Gy-equivalent is also applied, in conjunction with transport calculations performed using, e.g., the ADAM and EVA phantoms, along the concepts of the 1967 NCRP system, may be more appropriate for evaluating the radiation hazard from external fields with a large flux and a major high-LET component.
Harper, Bryan; Thomas, Dennis; Chikkagoudar, Satish; Baker, Nathan; Tang, Kaizhi; Heredia-Langner, Alejandro; Lins, Roberto; Harper, Stacey
The integration of rapid assays, large datasets, informatics, and modeling can overcome current barriers in understanding nanomaterial structure-toxicity relationships by providing a weight-of-the-evidence mechanism to generate hazard rankings for nanomaterials. Here, we present the use of a rapid, low-cost assay to perform screening-level toxicity evaluations of nanomaterials in vivo. Calculated EZ Metric scores, a combined measure of morbidity and mortality in developing embryonic zebrafish, were established at realistic exposure levels and used to develop a hazard ranking of diverse nanomaterial toxicity. Hazard ranking and clustering analysis of 68 diverse nanomaterials revealed distinct patterns of toxicity related to both the core composition and outermost surface chemistry of nanomaterials. The resulting clusters guided the development of a surface chemistry-based model of gold nanoparticle toxicity. Our findings suggest that risk assessments based on the size and core composition of nanomaterials alone may be wholly inappropriate, especially when considering complex engineered nanomaterials. Research should continue to focus on methodologies for determining nanomaterial hazard based on multiple sub-lethal responses following realistic, low-dose exposures, thus increasing the availability of quantitative measures of nanomaterial hazard to support the development of nanoparticle structure-activity relationships.
Godt, J.W.; Baum, R.L.; Savage, W.Z.; Salciarini, D.; Schulz, W.H.; Harp, E.L.
2008-01-01
Application of transient deterministic shallow landslide models over broad regions for hazard and susceptibility assessments requires information on rainfall, topography and the distribution and properties of hillside materials. We survey techniques for generating the spatial and temporal input data for such models and present an example using a transient deterministic model that combines an analytic solution to assess the pore-pressure response to rainfall infiltration with an infinite-slope stability calculation. Pore pressures and factors of safety are computed on a cell-by-cell basis and can be displayed or manipulated in a grid-based GIS. Input data are high-resolution (1.8 m) topographic information derived from LiDAR data and simple descriptions of initial pore-pressure distribution and boundary conditions for a study area north of Seattle, Washington. Rainfall information is taken from a previously defined empirical rainfall intensity-duration threshold, and material strength and hydraulic properties were measured both in the field and laboratory. Results are tested by comparison with a shallow landslide inventory. Comparison of results with those from static infinite-slope stability analyses assuming fixed water-table heights shows that the spatial prediction of shallow landslide susceptibility is improved using the transient analyses; moreover, results can be depicted in terms of the rainfall intensity and duration known to trigger shallow landslides in the study area.
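The infinite-slope stability part of such a model can be sketched cell-by-cell: a factor of safety from slope angle, depth and a transient pressure head, in the spirit of Iverson's (2000) formulation. The function name and all parameter values below are illustrative assumptions, not the study's calibrated inputs:

```python
import numpy as np

def factor_of_safety(theta, z, psi, c=4.0e3, phi=np.radians(33.0),
                     gamma_s=2.0e4, gamma_w=9.81e3):
    """Infinite-slope factor of safety at depth z [m], slope angle
    theta [rad], pressure head psi [m]; cohesion c [Pa], friction
    angle phi [rad], unit weights [N/m^3] (illustrative values).
    FS < 1 indicates predicted instability."""
    frictional = np.tan(phi) / np.tan(theta)
    cohesive = (c - psi * gamma_w * np.tan(phi)) / (
        gamma_s * z * np.sin(theta) * np.cos(theta))
    return frictional + cohesive
```

Evaluating this over a grid of slope angles and transiently updated pressure heads reproduces the qualitative behavior described above: as infiltration raises psi, the cohesive term shrinks and the factor of safety drops.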
Non-Volcanic release of CO2 in Italy: quantification, conceptual models and gas hazard
NASA Astrophysics Data System (ADS)
Chiodini, G.; Cardellini, C.; Caliro, S.; Avino, R.
2011-12-01
Central and South Italy are characterized by the presence of many reservoirs naturally recharged by CO2 of deep provenance. In the western sector, the reservoirs feed hundreds of gas emissions at the surface. Many studies in recent years were devoted to (i) elaborating a map of CO2 Earth degassing of the region; (ii) assessing the gas hazard; (iii) developing methods suitable for measuring the gas fluxes from different types of emissions; (iv) elaborating a conceptual model of Earth degassing and its relation to the seismic activity of the region; and (v) developing physical numerical models of CO2 air dispersion. The main results obtained are: 1) A general, regional map of CO2 Earth degassing in Central Italy has been elaborated. The total flux of CO2 in the area has been estimated at ~10 Mt/a, released to the atmosphere through numerous dangerous gas emissions or by degassing spring waters (~10% of the CO2 globally estimated to be released by the Earth through volcanic activity). 2) An online, open-access, georeferenced database of the main CO2 emissions (~250) was set up (http://googas.ov.ingv.it). CO2 fluxes > 100 t/d characterise 14% of the degassing sites, while CO2 fluxes from 10 t/d to 100 t/d have been estimated for about 35% of the gas emissions. 3) The sites of the gas emissions are not suitable for life: the gas causes many accidents to animals and people. In order to mitigate the gas hazard, a specific model of CO2 air dispersion has been developed and applied to the main degassing sites. A relevant application regarded Mefite d'Ansanto, southern Apennines, which is the largest natural emission of low-temperature CO2-rich gases from a non-volcanic environment ever measured on Earth (~2000 t/d). Under low wind conditions, the gas flows along a narrow natural channel, producing a persistent gas river which over time has killed many people and animals. The application of the physical numerical model allowed us to
Harper, Bryan; Thomas, Dennis G.; Chikkagoudar, Satish; Baker, Nathan A.; Tang, Kaizhi; Heredia-Langner, Alejandro; Lins, Roberto D.; Harper, Stacey
2015-06-04
The integration of rapid assays, large data sets, informatics and modeling can overcome current barriers in understanding nanomaterial structure-toxicity relationships by providing a weight-of-the-evidence mechanism to generate hazard rankings for nanomaterials. Here we present the use of a rapid, low-cost assay to perform screening-level toxicity evaluations of nanomaterials in vivo. Calculated EZ Metric scores, a combined measure of morbidity and mortality, were established at realistic exposure levels and used to develop a predictive model of nanomaterial toxicity. Hazard ranking and clustering analysis of 68 diverse nanomaterials revealed distinct patterns of toxicity related to both core composition and outermost surface chemistry of nanomaterials. The resulting clusters guided the development of a predictive model of gold nanoparticle toxicity to embryonic zebrafish. In addition, our findings suggest that risk assessments based on the size and core composition of nanomaterials alone may be wholly inappropriate, especially when considering complex engineered nanomaterials. These findings reveal the need to expeditiously increase the availability of quantitative measures of nanomaterial hazard and broaden the sharing of that data and knowledge to support predictive modeling. In addition, research should continue to focus on methodologies for developing predictive models of nanomaterial hazard based on sub-lethal responses to low dose exposures.
NASA Astrophysics Data System (ADS)
Koga-Vicente, A.; Friedel, M. J.
2010-12-01
Every year thousands of people are affected by flood and landslide hazards caused by rainstorms. The problem is more serious in tropical developing countries because of both susceptibility, a result of the large amount of energy available to form storms, and high vulnerability due to poor economic and social conditions. Predictive models of hazards are important tools to manage this kind of risk. In this study, a comparison of two different modeling approaches was made for predicting hydrometeorological hazards in 12 cities on the coast of São Paulo, Brazil, from 1994 to 2003. In the first approach, an empirical multiple linear regression (MLR) model was developed and used; the second approach used a type of unsupervised nonlinear artificial neural network called a self-organizing map (SOM). By using twenty-three independent variables of susceptibility (precipitation, soil type, slope, elevation, and regional atmospheric system scale) and vulnerability (distribution and total population, income and educational characteristics, poverty intensity, human development index), binary hazard responses were obtained. Model performance by cross-validation indicated that the respective MLR and SOM model accuracies were about 67% and 80%. Prediction accuracy can be improved by the addition of information, but the SOM approach is preferred because of sparse data and highly nonlinear relations among the independent variables.
Enhancing Students' Understanding of Risk and Geologic Hazards Using a Dartboard Model.
ERIC Educational Resources Information Center
Lutz, Timothy M.
2001-01-01
Uses dartboards to represent magnitude-frequency relationships of natural hazards, engaging students at different levels of preparation, in different contexts, and for different lengths of time. Helps students overcome the misconception that hazard processes occur periodically by emphasizing the random nature of hazards. Includes 12 references.…
Moschetti, Morgan P.; Mueller, Charles S.; Boyd, Oliver S.; Petersen, Mark D.
2014-01-01
In anticipation of the update of the Alaska seismic hazard maps (ASHMs) by the U. S. Geological Survey, we report progress on the comparison of smoothed seismicity models developed using fixed and adaptive smoothing algorithms, and investigate the sensitivity of seismic hazard to the models. While fault-based sources, such as those for great earthquakes in the Alaska-Aleutian subduction zone and for the ~10 shallow crustal faults within Alaska, dominate the seismic hazard estimates for locations near the sources, smoothed seismicity rates make important contributions to seismic hazard away from fault-based sources and where knowledge of recurrence and magnitude is not sufficient for use in hazard studies. Recent developments in adaptive smoothing methods and statistical tests for evaluating and comparing rate models prompt us to investigate the appropriateness of adaptive smoothing for the ASHMs. We develop smoothed seismicity models for Alaska using fixed and adaptive smoothing methods and compare the resulting models by calculating and evaluating the joint likelihood test. We use the earthquake catalog, and associated completeness levels, developed for the 2007 ASHM to produce fixed-bandwidth-smoothed models with smoothing distances varying from 10 to 100 km, as well as adaptively smoothed models. Adaptive smoothing follows the method of Helmstetter et al. and defines a unique smoothing distance for each earthquake epicenter from the distance to the nth nearest neighbor. The consequence of adaptive smoothing is to reduce smoothing distances where seismicity rates are high, causing locally increased seismicity rates, and to increase smoothing distances where seismicity is sparse. We follow guidance from previous studies to optimize the neighbor number (n-value) by comparing model likelihood values, which estimate the likelihood that the observed earthquake epicenters from the recent catalog are derived from the smoothed rate models. We compare likelihood
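The adaptive-bandwidth step described above (each event's smoothing distance set to the distance to its n-th nearest neighboring epicenter, after Helmstetter et al.) can be sketched in a few lines. Function names, the Gaussian kernel choice, and the planar-coordinate assumption are illustrative, not the study's implementation:

```python
import numpy as np

def adaptive_bandwidths(epicenters, n_neighbor=5):
    """Per-event smoothing distance = distance to the n-th nearest
    neighboring epicenter. epicenters: (N, 2) array of projected
    (x, y) coordinates (km)."""
    d = np.linalg.norm(epicenters[:, None, :] - epicenters[None, :, :], axis=-1)
    d_sorted = np.sort(d, axis=1)      # column 0 is the self-distance (0)
    return d_sorted[:, n_neighbor]

def smoothed_rate(grid, epicenters, bw):
    """Relative seismicity rate on grid points (M, 2): sum of 2-D
    Gaussian kernels, one per epicenter, each with its own bandwidth."""
    r2 = ((grid[:, None, :] - epicenters[None, :, :]) ** 2).sum(-1)
    k = np.exp(-r2 / (2.0 * bw ** 2)) / (2.0 * np.pi * bw ** 2)
    return k.sum(axis=1)
```

This reproduces the behavior the abstract describes: bandwidths shrink inside dense clusters (sharpening the local rate) and grow where seismicity is sparse.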
Combining SLBL routine with landslide-generated tsunami model for a quick hazard assessment tool
NASA Astrophysics Data System (ADS)
Franz, Martin; Rudaz, Benjamin; Jaboyedoff, Michel; Podladchikov, Yury
2016-04-01
Regions with steep topography are potentially subject to landslide-induced tsunami because of the proximity between lakes, rivers, sea shores and potential instabilities. The concentration of population and infrastructure on water body shores and in downstream valleys could lead to catastrophic consequences. In order to assess this phenomenon comprehensively, together with the induced risks, we have developed a tool which allows the construction of the landslide geometry and is able to simulate its propagation, the generation and propagation of the wave and, eventually, the spread on the shores or the associated downstream flow. The tool is developed in the Matlab© environment, with a graphical user interface (GUI) to select the parameters in a user-friendly manner. The whole process is done in three steps involving different methods. Firstly, the geometry of the sliding mass is constructed using the Sloping Local Base Level (SLBL) concept. Secondly, the propagation of this volume is performed using a model based on viscous flow equations. Finally, the wave generation and its propagation are simulated using the shallow water equations stabilized by the Lax-Friedrichs scheme. The transition between wet and dry bed is handled by the combination of the two latter sets of equations. The intensity map is based on the flooding criterion used in Switzerland, provided by the OFEG, and results from the multiplication of the velocity and the depth obtained during the simulation. The tool can be used for hazard assessment in the case of well-known landslides, where the SLBL routine can be constrained and checked for realistic construction of the geometrical model. In less-known cases, various failure plane geometries can be automatically built within a given range, and thus a multi-scenario approach is used. In any case, less-known parameters such as the landslide velocity, its run-out distance, etc. can also be set to vary within given ranges, leading to multi
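As a sketch of the wave-propagation step: a one-dimensional Lax-Friedrichs update of the shallow water equations in conserved variables (depth h and discharge hu). The tool itself is a 2-D Matlab implementation; this Python fragment, with its simple reflective boundaries, only illustrates the stabilized scheme:

```python
import numpy as np

g = 9.81  # gravitational acceleration [m/s^2]

def lax_friedrichs_step(h, hu, dx, dt):
    """One Lax-Friedrichs update of the 1-D shallow water equations.
    Fluxes: F1 = hu, F2 = hu*u + g*h^2/2. Reflective walls at both
    ends (illustrative boundary treatment)."""
    u = np.where(h > 1e-8, hu / np.maximum(h, 1e-8), 0.0)
    F1 = hu
    F2 = hu * u + 0.5 * g * h ** 2
    h_new, hu_new = h.copy(), hu.copy()
    h_new[1:-1] = 0.5 * (h[2:] + h[:-2]) - dt / (2 * dx) * (F1[2:] - F1[:-2])
    hu_new[1:-1] = 0.5 * (hu[2:] + hu[:-2]) - dt / (2 * dx) * (F2[2:] - F2[:-2])
    h_new[0], h_new[-1] = h_new[1], h_new[-2]       # reflective boundaries
    hu_new[0], hu_new[-1] = -hu_new[1], -hu_new[-2]
    return h_new, hu_new
```

The scheme's built-in numerical diffusion is what provides the stabilization mentioned in the abstract; the time step must respect the CFL condition dt ≤ dx / max(|u| + sqrt(g h)).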
NASA Astrophysics Data System (ADS)
Schneider, Demian; Huggel, Christian; García, Javier; Ludeña, Sebastian; Cochachin, Alejo
2013-04-01
The Cordilleras in Peru are especially vulnerable to, and affected by, impacts from climate change. Local communities and cities often lie directly within the reach of major hazard potentials such as lake outburst floods (aluviones), mud-/debris flows (huaycos) or large rock-/ice avalanches. Such events have repeatedly and severely affected these regions over the last decades and since the last century, and thousands of people have been killed. One of the most recent events in the Cordillera Blanca occurred on 11 April 2010, when a rock/ice avalanche from the top of Hualcán mountain, NE of the town of Carhuaz, impacted glacier lake 513 (Laguna 513), caused displacement waves and triggered an outburst flood wave. The flow repeatedly transformed from debris flow to hyperconcentrated flow and eventually caused significant damage in Carhuaz. This event was the motivation to start early warning and prevention efforts to reduce risks related to ice/rock avalanches and glacier lake outburst floods (GLOF). One of the basic components of an early warning system is the assessment, understanding and communication of relevant hazards and risks. Here we report on the methodology and results of generating GLOF-related hazard maps for Carhuaz based on numerical modeling and field work. This exercise required an advanced concept and implementation of different mass movement models. Specifically, numerical models were applied for simulating avalanche flow, avalanche lake impact, displacement wave generation and lake overtopping, and eventually flow propagation of the outburst flood with changing rheology between debris flow and hyperconcentrated flow. We adopted a hazard mapping procedure slightly adjusted from guidelines developed in Switzerland and in the Andes region. A methodology has thereby been developed to translate results from numerical mass movement modeling into hazard maps. The resulting hazard map was verified and adjusted during field work. This study shows
Multiple Ways to Solve Proportions
ERIC Educational Resources Information Center
Ercole, Leslie K.; Frantz, Marny; Ashline, George
2011-01-01
When solving problems involving proportions, students may intuitively draw on strategies that connect to their understanding of fractions, decimals, and percents. These two statements--"Instruction in solving proportions should include methods that have a strong intuitive basis" and "Teachers should begin instruction with more intuitive…
NASA Astrophysics Data System (ADS)
Climent, A.; Benito, M. B.; Piedra, R.; Lindholm, C.; Gaspar-Escribano, J.
2013-05-01
We present the results of a study aimed at choosing the most suitable strong-motion models for seismic hazard analysis in the Central America (CA) region. After a careful revision of the state of the art, different models developed for subduction and volcanic crustal zones, in tectonic environments similar to those of CA, were selected. These models were calibrated with accelerograms recorded in Costa Rica, Nicaragua and El Salvador. The peak ground acceleration (PGA) and spectral acceleration SA(T) derived from the records were compared with the ones predicted by the models under similar conditions of magnitude, distance and soil. The type of magnitude (Ms, Mb, Mw), distance (Rhyp, Rrup, etc.) and ground motion parameter (maximum horizontal component, geometric mean, etc.) was taken into account in the comparison with the real data. As a result of the analysis, the models which best fit the local data were identified. These models have been applied for carrying out seismic hazard analysis in the region, in the frame of the RESIS II project financed by the Norwegian Foreign Department and also by the Spanish project SISMOCAES. The methodology followed is based on the direct comparison between PGA and 5%-damped SA response values extracted from actual records and the corresponding acceleration values predicted by the selected ground-motion models for similar magnitude, distance and soil conditions. Residuals between observed and predicted values for PGA and SA (1 s) are calculated and plotted as a function of distance and magnitude, analyzing their deviation from the mean value. Besides, and most important, a statistical analysis of the normalized residuals was carried out using the criteria proposed by Scherbaum et al. (2004), which consist in categorizing ground motion models based on a likelihood parameter that reflects the goodness-of-fit of the median values as well as the shape of the underlying distribution of ground motion residuals. Considering
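The normalized-residual step described here can be sketched directly: each observed ground motion is compared with a model's median prediction and log-standard deviation, and for an unbiased, well-calibrated model the residuals should be approximately standard normal (the basis of the Scherbaum et al. 2004 categorization). Names and the illustrative numbers below are assumptions:

```python
import numpy as np

def normalized_residuals(obs, median, sigma_ln):
    """Normalized residual z = (ln(obs) - ln(median)) / sigma_ln.
    For an unbiased ground-motion model with well-estimated scatter,
    the z values should follow a standard normal distribution."""
    return (np.log(np.asarray(obs, dtype=float)) - np.log(median)) / sigma_ln

# Synthetic check: observations drawn from the model's own lognormal
# distribution (median and sigma_ln are illustrative PGA model outputs)
rng = np.random.default_rng(42)
median, sigma_ln = 120.0, 0.6
obs = median * np.exp(sigma_ln * rng.standard_normal(500))
z = normalized_residuals(obs, median, sigma_ln)
```

In practice the sample mean and standard deviation of z (and the shape of its distribution) are what feed the likelihood-based goodness-of-fit ranking of candidate models.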
Geochemical transformations and modeling of two deep-well injected hazardous wastes
Roy, W.R.; Seyler, B.; Steele, J.D.; Mravik, S.C.; Moore, D.M.; Krapac, I.G.; Peden, J.M.; Griffin, R.A.
1991-01-01
Two liquid hazardous wastes (an alkaline brine-like solution and a dilute acidic waste) were mixed with finely ground rock samples of three injection-related lithologies (sandstone, dolomite, and siltstone) for 155 to 230 days at 325 K and 10.8 MPa. The pH and inorganic chemical composition of the alkaline waste were not significantly altered by any of the rock samples after 230 days of mixing. The acidic waste was neutralized as a consequence of carbonate dissolution, ion exchange, or clay-mineral dissolution, and hence was transformed into a nonhazardous waste. Mixing the alkaline waste with the solid phases yielded several reaction products: brucite, Mg(OH)2; calcite, CaCO3; and possibly a type of sodium metasilicate. Clay-like minerals formed in the sandstone, and hydrotalcite, Mg6Al2(CO3)(OH)16·4H2O, may have formed in the siltstone at trace levels. Mixing the alkaline waste with a synthetic brine yielded brucite, calcite, and whewellite (CaC2O4·H2O). The thermodynamic model PHRQPITZ predicted that brucite and calcite would precipitate from solution in the dolomite and siltstone mixtures and in the alkaline waste-brine system. The dilute acidic waste did not significantly alter the mineralogical composition of the three rock types after 155 days of contact. The model PHREEQE indicated that the calcite was thermodynamically stable in the dolomite and siltstone mixtures.
Predictive models in hazard assessment of Great Lakes contaminants for fish
Passino, Dora R. May
1986-01-01
A hazard assessment scheme was developed and applied to predict potential harm to aquatic biota of nearly 500 organic compounds detected by gas chromatography/mass spectrometry (GC/MS) in Great Lakes fish. The frequency of occurrence and estimated concentrations of compounds found in lake trout (Salvelinus namaycush) and walleyes (Stizostedion vitreum vitreum) were compared with available manufacturing and discharge information. Bioconcentration potential of the compounds was estimated from available data or from calculations of quantitative structure-activity relationships (QSAR). Investigators at the National Fisheries Research Center-Great Lakes also measured the acute toxicity (48-h EC50s) of 35 representative compounds to Daphnia pulex and compared the results with acute toxicity values generated by QSAR. The QSAR-derived toxicities for several chemicals underestimated the actual acute toxicity by one or more orders of magnitude. A multiple regression of log EC50 on log water solubility and molecular volume proved to be a useful predictive model. Additional models providing insight into toxicity incorporate solvatochromic parameters that measure dipolarity/polarizability, hydrogen bond acceptor basicity, and hydrogen bond donor acidity of the solute (toxicant).
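The predictive model named above, a multiple regression of log EC50 on log water solubility and molecular volume, can be sketched as an ordinary least-squares fit. The data below are hypothetical numbers generated from an assumed linear relation, not the Center's measurements:

```python
import numpy as np

# Hypothetical (log water solubility, molecular volume) pairs and the
# log EC50 values an assumed linear relation would give them
log_sol = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0])
mol_vol = np.array([300.0, 250.0, 200.0, 180.0, 150.0, 120.0])
log_ec50 = 0.5 + 0.8 * log_sol - 0.004 * mol_vol

# Multiple regression: design matrix with intercept column, solved by OLS
X = np.column_stack([np.ones_like(log_sol), log_sol, mol_vol])
beta, *_ = np.linalg.lstsq(X, log_ec50, rcond=None)
intercept, b_sol, b_vol = beta
predicted = X @ beta
```

With real measurements the fitted coefficients would of course carry residual scatter; the point is only the structure of the predictive model (two physicochemical descriptors, one toxicity response).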
Schlyter, F; Svensson, M; Zhang, Q H; Knízek, M; Krokene, P; Ivarsson, P; Birgersson, G
2001-07-01
Pheromone communication systems have a reliable signal with a restricted window of amounts and ratios released and perceived. We propose a model based on a Gaussian response profile that allows a quantification of the response peak (location of optimum) and a measure of the peak width (response window). Interpreting the Gaussian curve, fitted by nonlinear regression (NLR), as a standard normal distribution, the peak location equals the mean (mu) and the window width equals 2× the standard deviation (2sigma). The NLR procedure can provide an objective measure of both peak location and width for a wide range of data sets. Four empirical data sets as well as 10 literature data sets were analyzed. The double-spined spruce engraver, Ips duplicatus, was field tested in four populations to find the optimum proportion for attraction to the two male aggregation pheromone components, ipsdienol (Id) and (E)-myrcenol (EM), ranging from 0 to 100% Id. Tests in Norway and the Czech Republic confirmed the preference of western populations for a blend between 50 and 90% Id. A population in Inner Mongolia showed a preference for traps with the 10 and 50% Id baits. The NLR-fitted values for response peak and width (mu; 2sigma) were: Norway 0.64, 0.73; Czech Republic 0.53, 0.73; NE China 0.77, 0.29; and Inner Mongolia 0.33, 0.50. The signal produced by Norwegian field-collected males had a narrower window width (2sigma = 0.12). Males of the maize stem borer, Chilo partellus, were tested in a flight tunnel for their response to variation in the two major female sex pheromone gland components, (Z)-11-hexadecenal and the corresponding alcohol (OH). Varying the alcohol over seven levels from 2 to 29% OH showed the highest male response at 17% OH. For all behavioral steps, the peak of male response was near mu = 0.14, while the window width fell from 2sigma = 0.5 to 0.2 over eight sequential behavioral steps from take-off to copulation. Female production had a similar peak location
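The Gaussian response profile fitted by nonlinear regression can be sketched with a standard least-squares curve fit; the fit returns the peak location mu and the width 2sigma directly. The data points below are hypothetical noise-free values (chosen near the reported Norwegian optimum), not the trapping results:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_response(p, a, mu, sigma):
    """Response at blend proportion p: height a, peak location mu
    (the optimum), window width reported as 2*sigma."""
    return a * np.exp(-0.5 * ((p - mu) / sigma) ** 2)

# Hypothetical trap-catch fractions vs. proportion of ipsdienol (Id)
p = np.array([0.0, 0.1, 0.33, 0.5, 0.64, 0.8, 0.9, 1.0])
resp = gaussian_response(p, 1.0, 0.64, 0.365)   # noise-free illustration

# Nonlinear regression recovers the peak location and window width
(a_hat, mu_hat, sigma_hat), _ = curve_fit(gaussian_response, p, resp,
                                          p0=[1.0, 0.5, 0.3])
```

With real catch counts the residual scatter would widen the confidence intervals on mu and 2sigma, but the fitting procedure is the same.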
NASA Astrophysics Data System (ADS)
Krishna, Akhouri P.; Kumar, Santosh
2013-10-01
Landslide hazard assessments using computational models, such as artificial neural networks (ANN) and frequency ratio (FR), were carried out covering one of the important mountain highways in the Central Himalaya of the Indian Himalayan Region (IHR). Landslide influencing factors were either calculated or extracted from spatial databases, including recent remote sensing data from LANDSAT TM, the CARTOSAT digital elevation model (DEM) and Tropical Rainfall Measuring Mission (TRMM) satellite rainfall data. The ANN was implemented using a multi-layered feed-forward architecture with different input, output and hidden layers. This model, based on the back-propagation algorithm, derived weights for all possible parameters of landslides and the causative factors considered. The training sites for landslide-prone and non-prone areas were identified and verified through details gathered from remote sensing and other sources. Frequency ratio (FR) models are based on observed relationships between the distribution of landslides and each landslide-related factor. FR model implementation proved useful for assessing the spatial relationships between landslide locations and the factors contributing to their occurrence. The above computational models generated respective landslide hazard susceptibility maps for the study area. This further allowed the simulation of landslide hazard maps on a medium scale using a GIS platform and remote sensing data. Upon validation and accuracy checks, it was observed that both models produced good results, with FR having some edge over ANN-based mapping. Such statistical and functional models led to a better understanding of the relationships between landslides and preparatory factors, as well as ensuring lesser levels of subjectivity compared to qualitative approaches.
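The frequency ratio calculation behind such maps is simple: for each class of a causative factor, divide the class's share of landslide cells by its share of all cells, so FR > 1 marks above-average susceptibility. The tiny rasters below are invented for illustration; the computation itself is the standard FR definition.

```python
import numpy as np

# Hypothetical rasters: a slope-class grid and a binary landslide inventory.
slope_class = np.array([[1, 1, 2, 2],
                        [1, 2, 2, 3],
                        [2, 3, 3, 3],
                        [3, 3, 3, 3]])
landslide = np.array([[0, 0, 0, 0],
                      [0, 0, 1, 0],
                      [0, 1, 1, 1],
                      [0, 1, 1, 1]])

def frequency_ratio(classes, inventory):
    """FR per class: share of landslide cells falling in the class,
    divided by the class's share of all cells."""
    fr = {}
    total_cells = classes.size
    total_slides = inventory.sum()
    for c in np.unique(classes):
        mask = classes == c
        pct_slides = inventory[mask].sum() / total_slides
        pct_area = mask.sum() / total_cells
        fr[int(c)] = pct_slides / pct_area
    return fr

fr = frequency_ratio(slope_class, landslide)
print(fr)
```

Summing the FR values of all factor classes at each cell then yields the susceptibility index mapped in studies like the one above.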
NASA Astrophysics Data System (ADS)
Maechling, P. J.; Jordan, T. H.; Kesselman, C.; Moore, R.; Minster, B.; SCEC ITR Collaboration
2003-12-01
The Southern California Earthquake Center (SCEC) has formed a Geoscience/IT partnership to develop an advanced information infrastructure for system-level earthquake science in Southern California. This SCEC/ITR partnership comprises SCEC, USC's Information Sciences Institute (ISI), the San Diego Supercomputer Center (SDSC), the Incorporated Research Institutions for Seismology (IRIS), and the U.S. Geological Survey. This collaboration recently completed the second year of a five-year National Science Foundation (NSF) funded ITR project called the SCEC Community Modeling Environment (SCEC/CME). The goal of the SCEC/CME is to develop seismological applications and information technology (IT) infrastructure to support the development of Seismic Hazard Analysis (SHA) programs and other geophysical simulations. The SHA application programs developed by project collaborators include a Probabilistic Seismic Hazard Analysis system called OpenSHA [Field et al., this meeting]. OpenSHA computational elements that are currently available include a collection of attenuation relationships and several Earthquake Rupture Forecasts (ERFs). Geophysicists in the collaboration have also developed Anelastic Wave Models (AWMs) using both finite-difference and finite-element approaches. Earthquake simulations using these codes have been run for a variety of earthquake sources. A Rupture Dynamic Model (RDM) has also been developed that couples a rupture dynamics simulation into an anelastic wave model. The collaboration has also developed IT software and hardware infrastructure to support the development, execution, and analysis of SHA programs. To support computationally expensive simulations, we have constructed a grid-based system utilizing Globus software [Kesselman et al., this meeting]. Using the SCEC grid, project collaborators can submit computations from the SCEC/CME servers to High Performance Computers at USC, NPACI and Teragrid High Performance Computing Centers. We have
NASA Astrophysics Data System (ADS)
Eble, M. C.; uslu, B. U.; Wright, L.
2013-12-01
Synthetic tsunamis generated from source regions around the Pacific Basin are analyzed in terms of their relative impact on United States coastal locations. The region of tsunami origin is as important as the expected magnitude and the predicted inundation for understanding tsunami hazard. The NOAA Center for Tsunami Research has developed high-resolution tsunami models capable of predicting tsunami arrival time and wave amplitude at each location. These models have been used to conduct tsunami hazard assessments of maximum impact and tsunami inundation for use by local communities in education and evacuation map development. Hazard assessment studies conducted for Los Angeles, San Francisco, Crescent City, Hilo, and Apra Harbor are combined with results of tsunami forecast model development at each of seventy-five locations. A complete hazard assessment identifies every possible tsunami variation from a pre-computed propagation database. Study results indicate that the Eastern Aleutian Islands and Alaska are the most likely regions to produce the largest impact on the West Coast of the United States, while the East Philippines and Mariana trench regions impact Apra Harbor, Guam. Hawaii appears to be impacted equally from South America, Alaska and the Kuril Islands.
Application of physical erosion modelling to derive off-site muddy flood hazard
NASA Astrophysics Data System (ADS)
Annika Arevalo, Sarah; Schmidt, Jürgen
2015-04-01
Muddy floods are local inundation events after heavy rain storms. They occur inside watersheds before the runoff reaches a river. The sediment is eroded from agricultural fields and transported with the surface runoff into adjacent residential areas. The environment where muddy floods occur is very small scaled. The damages related to muddy floods are caused by the runoff water (flooded houses and cellars) and the transported sediment that is deposited on infrastructure and private properties. A variety of factors drive the occurrence of muddy floods. Their spatial extent is rather small and their distribution is very heterogeneous. This makes predicting the precise locations endangered by muddy flooding a challenge. The aim of this investigation is to identify potential hazard areas that might suffer muddy flooding from modelled soil erosion data. For the German state of Saxony, a modelled map of soil erosion and particle transport is available. The model applied is EROSION 3D. The spatial resolution is a 20 m raster and the conditions assumed are a 10-year rainfall event on uncovered agricultural soils. A digital land-use map is compiled, containing the outer borders of potential risk elements (residential and industrial areas, streets, railroads, etc.) that can be damaged by muddy flooding. The land-use map is merged with the transported-sediment map calculated with EROSION 3D. The result precisely depicts the locations where high amounts of sediment might be transported into urban areas under worst-case conditions. This map was validated with observed muddy flood events, which proved to coincide very well with the areas predicted to have a potentially high sediment input.
NASA Astrophysics Data System (ADS)
Mercado, A., Jr.
2015-12-01
The island of Puerto Rico is not only located in the so-called Caribbean hurricane alley, but also in a tsunami-prone region, and both phenomena have affected the island. For the past few years we have undertaken the task of upgrading the available coastal flood maps for storm surges and tsunamis. This has been done taking advantage of new Lidar-derived, high-resolution topography and bathymetry and state-of-the-art models (MOST for tsunamis and ADCIRC/SWAN for storm surges). The tsunami inundation maps have been converted to evacuation maps. For tsunamis we are also preparing hazard maps of tsunami currents inside ports, bays, and marinas. The storm surge maps include two scenarios of sea level rise: 0.5 and 1.0 m above Mean High Water. All maps have been adopted by the Puerto Rico State Emergency Management Agency and are publicly available through the Internet. The purpose of this presentation is to summarize how this has been done, the spin-off applications the maps have generated, and how we plan to improve coastal flooding predictions.
Statistical inference for the additive hazards model under outcome-dependent sampling.
Yu, Jichang; Liu, Yanyan; Sandler, Dale P; Zhou, Haibo
2015-09-01
Cost-effective study designs and proper inference procedures for data from such designs are always of particular interest to study investigators. In this article, we propose a biased sampling scheme, an outcome-dependent sampling (ODS) design, for survival data with right censoring under the additive hazards model. We develop a weighted pseudo-score estimator for the regression parameters under the proposed design and derive the asymptotic properties of the proposed estimator. We also provide some suggestions for using the proposed method by evaluating its relative efficiency against the simple random sampling design, and derive the optimal allocation of the subsamples for the proposed design. Simulation studies show that the proposed ODS design is more powerful than other existing designs and the proposed estimator is more efficient than other estimators. We apply our method to the Cancer Incidence and Mortality of Uranium Miners Study, a cancer study conducted at NIEHS, to study the cancer risk associated with radon exposure.
NASA Astrophysics Data System (ADS)
Harbi, Assia; Meghraoui, Mustapha; Belabbes, Samir; Maouche, Said
2010-05-01
The western Mediterranean region was the site of numerous large earthquakes in the past. Most of these earthquakes are located at the east-west-trending Africa-Eurasia plate boundary and along the coastline of North Africa. The most recent recorded tsunamigenic earthquake occurred in 2003 at Zemmouri-Boumerdes (Mw 6.8) and generated a ~2-m-high tsunami wave. The destructive wave affected the Balearic Islands and Almeria in southern Spain and Carloforte in southern Sardinia (Italy). The earthquake provided a unique opportunity to gather instrumental records of seismic waves and tide gauges in the western Mediterranean. A database that includes a historical catalogue of main events, seismic sources and related fault parameters was prepared in order to assess the tsunami hazard of this region. In addition to the analysis of the 2003 records, we study the 1790 Oran and 1856 Jijel historical tsunamigenic earthquakes (Io = IX and X, respectively), which provide detailed observations of the heights and extension of past tsunamis and damage in coastal zones. We performed the modelling of wave propagation using the NAMI-DANCE code and tested different fault sources against synthetic tide gauges. We observe that the characteristics of the seismic sources control the size and directivity of tsunami wave propagation on both the northern and southern coasts of the western Mediterranean.
Examining school-based bullying interventions using multilevel discrete time hazard modeling.
Ayers, Stephanie L; Wagaman, M Alex; Geiger, Jennifer Mullins; Bermudez-Parsai, Monica; Hedberg, E C
2012-10-01
Although schools have been trying to address bullying by utilizing different approaches that stop or reduce the incidence of bullying, little remains known about what specific intervention strategies are most successful in reducing bullying in the school setting. Using the social-ecological framework, this paper examines school-based disciplinary interventions often used to deliver consequences to deter the reoccurrence of bullying and aggressive behaviors among school-aged children. Data for this study are drawn from the School-Wide Information System (SWIS), with the final analytic sample consisting of 1,221 students in grades K-12 who received an office disciplinary referral for bullying during the first semester. Using Kaplan-Meier failure functions and multilevel discrete-time hazard models, determinants of the probability of a student receiving a second referral over time were examined. Of the seven interventions tested, only Parent-Teacher Conference (AOR = 0.65, p < .01) and Loss of Privileges (AOR = 0.71, p < .10) were significant in reducing the rate of reoccurrence of bullying and aggressive behaviors. By using a social-ecological framework, schools can develop strategies that deter the reoccurrence of bullying by identifying key factors that enhance a sense of connection between the students' mesosystems as well as utilizing disciplinary strategies that take into consideration students' microsystem roles.
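The core device of a discrete-time hazard model is to expand each student's follow-up into person-period rows and fit a logistic regression, whose exponentiated coefficient is the adjusted odds ratio (AOR) reported above. The sketch below is single-level (the paper's model is multilevel) and uses simulated data with an invented effect size; it is an illustration of the technique, not a reproduction of the study.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulate 500 students followed for up to 5 periods after a first
# referral; a hypothetical intervention lowers the per-period odds of a
# second referral (true log-odds ratio -0.7, i.e. AOR ~ 0.5).
n_students, n_periods = 500, 5
treated = rng.integers(0, 2, n_students)
X_rows, y_rows = [], []
for i in range(n_students):
    hazard = 1.0 / (1.0 + np.exp(1.5 + 0.7 * treated[i]))
    for t in range(n_periods):
        event = rng.random() < hazard
        X_rows.append([1.0, float(treated[i])])  # intercept, intervention
        y_rows.append(float(event))
        if event:
            break  # a student leaves the risk set after the event
X, y = np.array(X_rows), np.array(y_rows)

# Logistic negative log-likelihood on the person-period data.
def nll(beta):
    eta = X @ beta
    return np.sum(np.logaddexp(0.0, eta) - y * eta)

beta = minimize(nll, np.zeros(2)).x
aor = np.exp(beta[1])
print("estimated AOR for the intervention:", aor)
```

With period dummies instead of a single intercept, the same fit gives a separate baseline hazard per period, which is the usual discrete-time specification.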
Examining School-Based Bullying Interventions Using Multilevel Discrete Time Hazard Modeling
Wagaman, M. Alex; Geiger, Jennifer Mullins; Bermudez-Parsai, Monica; Hedberg, E. C.
2014-01-01
Although schools have been trying to address bullying by utilizing different approaches that stop or reduce the incidence of bullying, little remains known about what specific intervention strategies are most successful in reducing bullying in the school setting. Using the social-ecological framework, this paper examines school-based disciplinary interventions often used to deliver consequences to deter the reoccurrence of bullying and aggressive behaviors among school-aged children. Data for this study are drawn from the School-Wide Information System (SWIS), with the final analytic sample consisting of 1,221 students in grades K-12 who received an office disciplinary referral for bullying during the first semester. Using Kaplan-Meier failure functions and multilevel discrete-time hazard models, determinants of the probability of a student receiving a second referral over time were examined. Of the seven interventions tested, only Parent-Teacher Conference (AOR = 0.65, p < .01) and Loss of Privileges (AOR = 0.71, p < .10) were significant in reducing the rate of reoccurrence of bullying and aggressive behaviors. By using a social-ecological framework, schools can develop strategies that deter the reoccurrence of bullying by identifying key factors that enhance a sense of connection between the students' mesosystems as well as utilizing disciplinary strategies that take into consideration students' microsystem roles. PMID:22878779
Statistical inference for the additive hazards model under outcome-dependent sampling
Yu, Jichang; Liu, Yanyan; Sandler, Dale P.; Zhou, Haibo
2015-01-01
Cost-effective study designs and proper inference procedures for data from such designs are always of particular interest to study investigators. In this article, we propose a biased sampling scheme, an outcome-dependent sampling (ODS) design, for survival data with right censoring under the additive hazards model. We develop a weighted pseudo-score estimator for the regression parameters under the proposed design and derive the asymptotic properties of the proposed estimator. We also provide some suggestions for using the proposed method by evaluating its relative efficiency against the simple random sampling design, and derive the optimal allocation of the subsamples for the proposed design. Simulation studies show that the proposed ODS design is more powerful than other existing designs and the proposed estimator is more efficient than other estimators. We apply our method to the Cancer Incidence and Mortality of Uranium Miners Study, a cancer study conducted at NIEHS, to study the cancer risk associated with radon exposure. PMID:26379363
Numerical modeling of marine Gravity data for tsunami hazard zone mapping
NASA Astrophysics Data System (ADS)
Porwal, Nipun
2012-07-01
A tsunami is a series of ocean waves with very long wavelengths, ranging from 10 to 500 km. Tsunamis therefore behave as shallow-water waves and are hard to predict by most methods. Bottom Pressure Recorders of the Poseidon class are considered a preeminent method to detect tsunami waves, but the acoustic modems in ocean bottom pressure (OBP) sensors placed in the vicinity of trenches deeper than 6000 m fail to relay OBP data to surface buoys. This paper therefore develops a numerical model of gravity field coefficients from the Bureau Gravimetric International (BGI); by mathematical transformation of the gravity field coefficients using normalized Legendre polynomials, high-resolution ocean bottom pressure (OBP) data are generated. Ten years of real-time sea-level-monitored OBP data at 0.3° by 1° spatial resolution from the Estimating the Circulation and Climate of the Ocean (ECCO) Kalman filter product (kf080) have been correlated with the OBP data derived from the gravity field coefficients, supporting a feasibility study of future space-based tsunami detection and of identifying the most suitable sites to place OBP sensors near deep trenches. The Levitus climatological temperature and salinity are assimilated into the version of the MITgcm using the adjoint method to obtain the sea height segment. TOPEX/Poseidon satellite altimetry, surface momentum, heat, and freshwater fluxes from the NCEP reanalysis product, and the dynamic ocean topography DOT_DNSCMSS08_EGM08 are then used to interpret sea-bottom elevation. All datasets are then combined in the raster calculator of ArcGIS 9.3, using the Boolean intersection algebra method and proximity analysis tools, with a high-resolution sea floor topographic map. The resulting tsunami-prone areas and suitable sites for BPR deployment analyzed in this research are finally validated using a passive microwave radiometry system for tsunami hazard zone
On the predictive information criteria for model determination in seismic hazard analysis
NASA Astrophysics Data System (ADS)
Varini, Elisa; Rotondi, Renata
2016-04-01
estimate, but it is hardly applicable to data which are not independent given the parameters (Watanabe, J. Mach. Learn. Res., 2010). A solution is given by the Ando and Tsay criterion, where the joint density may be decomposed into the product of the conditional densities (Ando and Tsay, Int. J. Forecast., 2010). The above-mentioned criteria are global summary measures of model performance, but a more detailed analysis may be required to discover the reasons for poor global performance. In this latter case, a retrospective predictive analysis is performed on each individual observation. In this study we performed a Bayesian analysis of Italian data sets using four versions of a long-term hazard model known as the stress release model (Vere-Jones, J. Physics Earth, 1978; Bebbington and Harte, Geophys. J. Int., 2003; Varini and Rotondi, Environ. Ecol. Stat., 2015). We then illustrate their performance as evaluated by the Bayes factor, predictive information criteria and retrospective predictive analysis.
Numerical Stress Field Modelling: from geophysical observations toward volcano hazard assessment
NASA Astrophysics Data System (ADS)
Currenti, Gilda; Coco, Armando; Privitera, Emanuela
2015-04-01
Numerical results show the contribution of groundwater head gradients associated with topographically induced flow and pore-pressure changes, providing a quantitative estimate of deformation and failure of the volcano edifice. The comparison between the predictions of the model and the observations can provide valuable insights into the stress state of the volcano and, hence, into the likelihood of an impending eruption. This innovative approach opens up new perspectives in geodetic inverse modelling and lays the basis for future development of volcano hazard assessment based on a critical combination of geophysical observations and numerical modelling.
Regional ash fall hazard II: Asia-Pacific modelling results and implications
NASA Astrophysics Data System (ADS)
Jenkins, Susanna; McAneney, John; Magill, Christina; Blong, Russell
2012-09-01
In a companion paper (this volume), the authors propose a methodology for assessing ash fall hazard on a regional scale. In this study, the methodology is applied to the Asia-Pacific region, determining the hazard from 190 volcanoes to over one million square kilometres of urban area. Ash fall hazard is quantified for each square-kilometre grid cell of urban area in terms of the annual exceedance probability (AEP) and its inverse, the average recurrence interval (ARI), for ash falls exceeding 1, 10 and 100 mm. A surrogate risk variable, the Population-Weighted Hazard Score (the product of AEP and population density), approximates the relative risk for each grid cell. Within the Asia-Pacific region, urban areas in Indonesia are found to have the highest levels of hazard and risk, while Australia has the lowest. A clear demarcation emerges between the hazard in countries close to and farther from major subduction plate boundaries, with the latter having ARIs at least two orders of magnitude longer for the same thickness thresholds. Countries with no volcanoes, such as North Korea and Malaysia, also face ash falls from volcanoes in neighbouring countries. Ash falls exceeding 1 mm are expected to affect more than one million people living in urban areas within the study region; in Indonesia, Japan and the Philippines, this situation could occur with ARIs of less than 40 years.
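The quantities named in the abstract reduce to two one-line relations: the ARI is the inverse of the AEP, and the Population-Weighted Hazard Score is the product of AEP and population density. The numbers below are illustrative, not values from the study.

```python
# ARI is the inverse of the annual exceedance probability (AEP); the
# Population-Weighted Hazard Score multiplies AEP by population density.
def ari_from_aep(aep: float) -> float:
    return 1.0 / aep

def population_weighted_hazard_score(aep: float, pop_density: float) -> float:
    return aep * pop_density

aep = 1.0 / 40.0                  # e.g. a 10 mm ash fall once in ~40 years
print(ari_from_aep(aep))          # 40.0
print(population_weighted_hazard_score(aep, 8000.0))  # 200.0
```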
Combining observations and model simulations to reduce the hazard of Etna volcanic ash plumes
NASA Astrophysics Data System (ADS)
Scollo, Simona; Boselli, Antonella; Coltelli, Mauro; Leto, Giuseppe; Pisani, Gianluca; Prestifilippo, Michele; Spinelli, Nicola; Wang, Xuan; Zanmar Sanchez, Ricardo
2014-05-01
Etna is one of the most active volcanoes in the world, with recent activity characterized by powerful lava fountains that produce eruption columns several kilometres high and disperse volcanic ash in the atmosphere. It is well known that, to improve the volcanic ash dispersal forecast of an ongoing explosive eruption, the input parameters used by volcanic ash dispersal models should be measured during the eruption. In this work, in order to better quantify volcanic ash dispersal, we use data from the video-surveillance system of the Istituto Nazionale di Geofisica e Vulcanologia, Osservatorio Etneo, and from a lidar system, together with a volcanic ash dispersal model. In detail, the visible camera installed in Catania, 27 km from the vent, is able to track the evolution of column height with time. The lidar, installed at the "M.G. Fracastoro" astrophysical observatory (14.97° E, 37.69° N) of the Istituto Nazionale di Astrofisica in Catania, located at a distance of 7 km from the Etna summit craters, uses a frequency-doubled Nd:YAG laser source operating at a 532-nm wavelength with a repetition rate of 1 kHz. Backscattering and depolarization values measured by the lidar system can give, with a certain degree of uncertainty, an estimate of volcanic ash concentration in the atmosphere. The 12 August 2011 activity is considered a perfect test case because the volcanic plume was retrieved by both camera and lidar. We evaluated the mass eruption rate from the column height and used best-fit procedures comparing simulated volcanic ash concentrations with those extracted from the lidar data. During this event, powerful lava fountains were clearly visible from about 08:30 GMT and a sustained eruption column was produced from about 08:55 GMT. Ash emission ceased completely around 11:30 GMT. The proposed approach is an attempt to produce more robust ash dispersal forecasts, reducing the hazard to air traffic during Etna volcanic crises.
A spatiotemporal optimization model for the evacuation of the population exposed to flood hazard
NASA Astrophysics Data System (ADS)
Alaeddine, H.; Serrhini, K.; Maizia, M.
2015-03-01
Managing the crisis caused by natural disasters, and especially by floods, requires the development of effective evacuation systems. An effective evacuation system must take into account certain constraints, including those related to the traffic network, accessibility, human resources and material equipment (vehicles, collecting points, etc.). The main objective of this work is to provide assistance to technical services and rescue forces in terms of accessibility by offering itineraries for the rescue and evacuation of people and property. We consider in this paper the evacuation of a medium-sized urban area exposed to flood hazard. In the case of inundation, most people will be evacuated using their own vehicles. Two evacuation types are addressed in this paper: (1) a preventive evacuation based on a flood forecasting system and (2) an evacuation during the disaster based on flooding scenarios. The two study sites to which the developed evacuation model is applied are the Tours valley (Fr, 37), which is protected by a set of dikes (preventive evacuation), and the Gien valley (Fr, 45), which benefits from a low rate of flooding (evacuation before and during the disaster). Our goal is to construct, for each of these two sites, a chronological evacuation plan, i.e., computing for each individual the departure date and the path to reach the assembly point (also called shelter) according to a priority list established for this purpose. The evacuation plan must avoid congestion on the road network. Here we present a spatiotemporal optimization model (STOM) dedicated to the evacuation of populations exposed to natural disasters and more specifically to flood risk.
Landslide tsunami hazard in New South Wales, Australia: novel observations from 3D modelling
NASA Astrophysics Data System (ADS)
Power, Hannah; Clarke, Samantha; Hubble, Tom
2015-04-01
This paper examines the potential tsunami inundation generated from two case-study submarine mass failures on the New South Wales coast of Australia. Two submarine mass failure events are investigated: the Bulli Slide and the Shovel Slide. Both slides are located approximately 65 km southeast of Sydney and 60 km east of the township of Wollongong. The Bulli Slide (~20 km3) and the Shovel Slide (7.97 km3) correspond to the two largest identified erosional-surface submarine landslide scars of the NSW continental margin (Glenn et al. 2008; Clarke 2014) and represent examples of large to very large submarine landslide scars. The Shovel Slide is a moderately thick (80-165 m), moderately wide to wide (4.4 km) slide located in 880 m water depth; the Bulli Slide is an extremely thick (200-425 m), very wide (8.9 km) slide located in 1500 m water depth. Previous work on the east Australian margin (Clarke et al., 2014) and elsewhere (Harbitz et al., 2013) suggests that submarine landslides similar to the Bulli Slide or the Shovel Slide are volumetrically large enough and occur at shallow enough water depths (400-2500 m) to generate substantial tsunamis that could cause widespread damage on the east Australian coast and threaten coastal communities (Burbidge et al. 2008; Clarke 2014; Talukder and Volker 2014). Currently, the tsunamigenic potential of these two slides has only been investigated using 2D modelling (Clarke 2014) and to date it has been difficult to establish with certainty the onshore tsunami surge characteristics for these submarine landslides. To address this knowledge gap, the forecast inundation resulting from these two mass failure events was investigated using a three-dimensional model (ANUGA) that predicts water flow resulting from natural hazard events such as tsunamis (Nielsen et al., 2005). The ANUGA model solves the two-dimensional shallow water wave equations and accurately models the process of wetting and drying thus
Lima, M Lourdes; Romanelli, Asunción; Massone, Héctor E
2013-06-01
This paper gives an account of the implementation of a decision support system for assessing aquifer pollution hazard and prioritizing subwatersheds for groundwater resources management in the southeastern Pampa plain of Argentina. The use of this system is demonstrated with an example from the Dulce Stream Basin (1,000 km², encompassing 27 subwatersheds), which has a high level of agricultural activity and extensive available data regarding aquifer geology. In the logic model, aquifer pollution hazard is assessed as a function of two primary topics: groundwater and soil conditions. This logic model shows the state of each evaluated landscape with respect to aquifer pollution hazard based mainly on the parameters of the DRASTIC and GOD models. The decision model allows prioritizing subwatersheds for groundwater resources management according to three main criteria: farming activities, agrochemical application, and irrigation use. Stakeholder participation, through interviews, in combination with expert judgment was used to select and weight each criterion. The resulting subwatershed priority map, obtained by combining the logic and decision models, allowed identifying five subwatersheds in the upper and middle basin as the main aquifer protection areas. The results reasonably fit the natural conditions of the basin, identifying those subwatersheds with shallow water depth, loam-loam silt texture soil media and pasture land cover in the middle basin, and others with intensive agricultural activity, coinciding with the natural recharge area of the aquifer system. Major difficulties in, and some recommendations for, applying this methodology in real-world situations are discussed.
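The DRASTIC parameters referenced above combine into a vulnerability index as a weighted sum: each of the seven factors (Depth to water, net Recharge, Aquifer media, Soil media, Topography, Impact of the vadose zone, hydraulic Conductivity) gets a site rating multiplied by a fixed weight. The weights below are the standard DRASTIC weights; the ratings are hypothetical, not values from the Dulce Stream Basin study.

```python
# DRASTIC aquifer vulnerability index: sum of rating x weight over the
# seven parameters. Standard DRASTIC weights; ratings (1-10) invented.
weights = {"D": 5, "R": 4, "A": 3, "S": 2, "T": 1, "I": 5, "C": 3}
ratings = {"D": 9, "R": 6, "A": 8, "S": 5, "T": 10, "I": 8, "C": 4}

index = sum(weights[k] * ratings[k] for k in weights)
print(index)  # 165
```

Higher index values flag cells or subwatersheds with greater intrinsic pollution hazard, which is how such scores feed the logic model described in the abstract.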
Dankers, Rutger; Arnell, Nigel W; Clark, Douglas B; Falloon, Pete D; Fekete, Balázs M; Gosling, Simon N; Heinke, Jens; Kim, Hyungjun; Masaki, Yoshimitsu; Satoh, Yusuke; Stacke, Tobias; Wada, Yoshihide; Wisser, Dominik
2014-03-04
Climate change due to anthropogenic greenhouse gas emissions is expected to increase the frequency and intensity of precipitation events, which is likely to affect the probability of flooding into the future. In this paper we use river flow simulations from nine global hydrology and land surface models to explore uncertainties in the potential impacts of climate change on flood hazard at global scale. As an indicator of flood hazard we looked at changes in the 30-y return level of 5-d average peak flows under representative concentration pathway RCP8.5 at the end of this century. Not everywhere does climate change result in an increase in flood hazard: decreases in the magnitude and frequency of the 30-y return level of river flow occur at roughly one-third (20-45%) of the global land grid points, particularly in areas where the hydrograph is dominated by the snowmelt flood peak in spring. In most model experiments, however, an increase in flooding frequency was found in more than half of the grid points. The current 30-y flood peak is projected to occur in more than 1 in 5 y across 5-30% of land grid points. The large-scale patterns of change are remarkably consistent among impact models and even the driving climate models, but at local scale and in individual river basins there can be disagreement even on the sign of change, indicating large modeling uncertainty which needs to be taken into account in local adaptation studies.
NASA Astrophysics Data System (ADS)
Dankers, Rutger; Arnell, Nigel W.; Clark, Douglas B.; Falloon, Pete D.; Fekete, Balázs M.; Gosling, Simon N.; Heinke, Jens; Kim, Hyungjun; Masaki, Yoshimitsu; Satoh, Yusuke; Stacke, Tobias; Wada, Yoshihide; Wisser, Dominik
2014-03-01
Climate change due to anthropogenic greenhouse gas emissions is expected to increase the frequency and intensity of precipitation events, which is likely to affect the probability of flooding into the future. In this paper we use river flow simulations from nine global hydrology and land surface models to explore uncertainties in the potential impacts of climate change on flood hazard at global scale. As an indicator of flood hazard we looked at changes in the 30-y return level of 5-d average peak flows under representative concentration pathway RCP8.5 at the end of this century. Not everywhere does climate change result in an increase in flood hazard: decreases in the magnitude and frequency of the 30-y return level of river flow occur at roughly one-third (20-45%) of the global land grid points, particularly in areas where the hydrograph is dominated by the snowmelt flood peak in spring. In most model experiments, however, an increase in flooding frequency was found in more than half of the grid points. The current 30-y flood peak is projected to occur in more than 1 in 5 y across 5-30% of land grid points. The large-scale patterns of change are remarkably consistent among impact models and even the driving climate models, but at local scale and in individual river basins there can be disagreement even on the sign of change, indicating large modeling uncertainty which needs to be taken into account in local adaptation studies.
NASA Astrophysics Data System (ADS)
Zhang, Baoqing; Wu, Pute; Zhao, Xining; Wang, Yubao; Gao, Xiaodong; Cao, Xinchun
2013-10-01
Drought is a complex natural hazard that is poorly understood and difficult to assess. This paper describes a VIC-PDSI model approach to understanding drought in which the Variable Infiltration Capacity (VIC) model was combined with the Palmer Drought Severity Index (PDSI). Simulated results obtained using the VIC model were used to replace the output of the more conventional two-layer bucket-type model for hydrological accounting, and a two-class-based procedure for calibrating the characteristic climate coefficient (K_j) was introduced to allow for a more reliable computation of the PDSI. The VIC-PDSI model was used in conjunction with GIS technology to create a new drought assessment index (DAI) that provides a comprehensive overview of drought duration, intensity, frequency, and spatial extent. This new index was applied to drought hazard assessment across six subregions of the whole Loess Plateau. The results show that the DAI over the whole Loess Plateau ranged between 11 and 26 (a greater DAI value indicates a more severe drought hazard level). Drought hazards in the upper reaches of the Yellow River were more severe than those in the middle reaches. The drought-prone regions of the study area were mainly concentrated in the Inner Mongolian small rivers and the Zuli and Qingshui River basins, while drought hazards in the drainage area between Hekouzhen-Longmen and the Weihe River basin were relatively mild during 1971-2010. The most serious drought vulnerabilities were associated with the areas around Lanzhou, Zhongning, and Yinchuan, where the development of water-saving irrigation is the most direct and effective way to defend against and reduce losses from drought. For the relatively humid regions, it will be necessary to establish rainwater harvesting systems, which could help to relieve the risk of water shortage and guarantee regional food security. Because the DAI considers the multiple characteristics of drought duration, intensity, frequency
Cramer, C.H.
2006-01-01
The Mississippi embayment, located in the central United States, and its thick deposits of sediments (over 1 km in places) have a large effect on earthquake ground motions. Several previous studies have addressed how these thick sediments might modify probabilistic seismic-hazard maps. The high seismic hazard associated with the New Madrid seismic zone makes it particularly important to quantify the uncertainty in modeling site amplification to better represent earthquake hazard in seismic-hazard maps. The methodology of the Memphis urban seismic-hazard-mapping project (Cramer et al., 2004) is combined with the reference profile approach of Toro and Silva (2001) to better estimate seismic hazard in the Mississippi embayment. Improvements over previous approaches include using the 2002 national seismic-hazard model, fully probabilistic hazard calculations, calibration of site amplification with improved nonlinear soil-response estimates, and estimates of uncertainty. Comparisons are made with the results of several previous studies, and estimates of uncertainty inherent in site-amplification modeling for the upper Mississippi embayment are developed. I present new seismic-hazard maps for the upper Mississippi embayment with the effects of site geology incorporating these uncertainties.
NASA Astrophysics Data System (ADS)
Grieco, F.; Capra, L.; Groppelli, G.; Norini, G.
2007-05-01
The present study concerns the numerical modeling of debris avalanches on Nevado de Toluca Volcano (Mexico) using the TITAN2D simulation software, and its application to create hazard maps. Nevado de Toluca is an andesitic to dacitic stratovolcano of Late Pliocene-Holocene age, located in central México near the cities of Toluca and México City; its past activity has endangered an area that today holds more than 25 million inhabitants. The present work is based upon data collected during extensive field work aimed at producing the geological map of Nevado de Toluca at 1:25,000 scale. The activity of the volcano developed from 2.6 Ma until 10.5 ka with both effusive and explosive events; Nevado de Toluca has also experienced long phases of inactivity characterized by erosion and by the emplacement of debris flow and debris avalanche deposits on its flanks. The largest epiclastic events in the history of the volcano are wide debris flows and debris avalanches that occurred between 1 Ma and 50 ka, during a prolonged hiatus in eruptive activity. Other minor events happened mainly during the most recent volcanic activity (less than 50 ka), characterized by magmatic and tectonically induced instability of the summit dome complex. According to the most recent tectonic analysis, the active transtensive kinematics of the E-W Tenango Fault System had a strong influence on the preferential directions of the last three documented lateral collapses, which generated the Arroyo Grande and Zaguàn debris avalanche deposits towards the east and the Nopal debris avalanche deposit towards the west. Analysis of the data collected during the field work made it possible to create a detailed GIS database of the spatial and temporal distribution of debris avalanche deposits on the volcano. Flow models, performed with the TITAN2D software developed by GMFG at Buffalo, were based entirely upon the information stored in the geological database. The modeling software is built upon equations
The SCEC Community Modeling Environment(SCEC/CME): A Collaboratory for Seismic Hazard Analysis
NASA Astrophysics Data System (ADS)
Maechling, P. J.; Jordan, T. H.; Minster, J. B.; Moore, R.; Kesselman, C.
2005-12-01
The SCEC Community Modeling Environment (SCEC/CME) Project is an NSF-supported Geosciences/IT partnership that is actively developing an advanced information infrastructure for system-level earthquake science in Southern California. This partnership includes SCEC, USC's Information Sciences Institute (ISI), the San Diego Supercomputer Center (SDSC), the Incorporated Research Institutions for Seismology (IRIS), and the U.S. Geological Survey. The goal of the SCEC/CME is to develop seismological applications and information technology (IT) infrastructure to support the development of Seismic Hazard Analysis (SHA) programs and other geophysical simulations. The SHA application programs developed on the Project include a Probabilistic Seismic Hazard Analysis system called OpenSHA. OpenSHA computational elements that are currently available include a collection of attenuation relationships, and several Earthquake Rupture Forecasts (ERFs). Geophysicists in the collaboration have also developed Anelastic Wave Models (AWMs) using both finite-difference and finite-element approaches. Earthquake simulations using these codes have been run for a variety of earthquake sources. Rupture Dynamic Model (RDM) codes have also been developed that simulate friction-based fault slip. The SCEC/CME collaboration has also developed IT software and hardware infrastructure to support the development, execution, and analysis of these SHA programs. To support computationally expensive simulations, we have constructed a grid-based scientific workflow system. Using the SCEC grid, project collaborators can submit computations from the SCEC/CME servers to High Performance Computers at USC and TeraGrid High Performance Computing Centers. Data generated and archived by the SCEC/CME is stored in a digital library system, the Storage Resource Broker (SRB). This system provides a robust and secure system for maintaining the association between the data sets and their metadata. To provide an easy
... such as lead and mercury, chemicals such as pesticides, cigarettes, some viruses, and alcohol. For men, a reproductive hazard can affect the sperm. For a woman, a reproductive hazard can cause different effects during pregnancy, depending on when she is exposed. ...
BOUMAN, Peter; MENG, Xiao-Li; DIGNAM, James; DUKIĆ, Vanja
2014-01-01
In multicenter studies, one often needs to make inference about a population survival curve based on multiple, possibly heterogeneous survival data from individual centers. We investigate a flexible Bayesian method for estimating a population survival curve based on a semiparametric multiresolution hazard model that can incorporate covariates and account for center heterogeneity. The method yields a smooth estimate of the survival curve for “multiple resolutions” or time scales of interest. The Bayesian model used has the capability to accommodate general forms of censoring and a priori smoothness assumptions. We develop a model checking and diagnostic technique based on the posterior predictive distribution and use it to identify departures from the model assumptions. The hazard estimator is used to analyze data from 110 centers that participated in a multicenter randomized clinical trial to evaluate tamoxifen in the treatment of early stage breast cancer. Of particular interest are the estimates of center heterogeneity in the baseline hazard curves and in the treatment effects, after adjustment for a few key clinical covariates. Our analysis suggests that the treatment effect estimates are rather robust, even for a collection of small trial centers, despite variations in center characteristics. PMID:25620824
NASA Astrophysics Data System (ADS)
Anton, Jose M.; Grau, Juan B.; Tarquis, Ana M.; Sanchez, Elena; Andina, Diego
2014-05-01
The authors have used Mathematical Decision Models (MDM) to improve knowledge and planning for large natural or administrative areas in which natural soils, climate, and agricultural and forest uses were the main factors, but in which human resources and outcomes were also important and natural hazards relevant. In one line of work they contributed to the qualification of lands of the Community of Madrid (CM), an administrative area in central Spain containing a band of mountains to the north, part of the Iberian plateau and river terraces in its centre, and the Madrid metropolis, starting from an official UPM study for the CM that qualified lands using a FAO model requiring minimums across a whole set of Soil Science criteria. From these criteria the authors first derived a complementary additive qualification, and later an intermediate qualification combining both using fuzzy logic. Together with colleagues from Argentina and elsewhere who are in contact with local planners, the authors were also involved in delimiting regions and selecting management entities for them. At these general levels they adopted multi-criteria MDM, using a weighted PROMETHEE, an ELECTRE-I with the same elicited weights for the criteria and data, and, in parallel, AHP with Expert Choice based on pairwise comparisons among similar criteria structured in two levels. The alternatives depend on the case study, and these areas with monsoon climates have natural hazards that are decisive for their selection and qualification, expressed in an initial matrix used for ELECTRE and PROMETHEE. For the natural area of Arroyos Menores south of the Rio Cuarto town, with the subarea of La Colacha to the north, the loess lands are rich but now suffer from water erosion forming regressive ditches that are spoiling them, so land-use alternatives must consider Soil Conservation and Hydraulic Management actions. The soils may be used in diverse, mutually incompatible ways, such as autochthonous forest, high-value forest, traditional
Bayesian Inference on Proportional Elections
Brunello, Gabriel Hideki Vatanabe; Nakano, Eduardo Yoshio
2015-01-01
Polls for majoritarian voting systems usually show estimates of the percentage of votes for each candidate. However, proportional vote systems do not necessarily guarantee that the candidate with the highest percentage of votes will be elected. Thus, traditional methods used in majoritarian elections cannot be applied to proportional elections. In this context, the purpose of this paper was to perform Bayesian inference on proportional elections considering the Brazilian system of seats distribution. More specifically, a methodology was developed to estimate the probability that a given party will have representation in the Chamber of Deputies. Inferences were made in a Bayesian scenario using the Monte Carlo simulation technique, and the developed methodology was applied to data from the Brazilian elections for Members of the Legislative Assembly and Federal Chamber of Deputies in 2010. A performance rate was also presented to evaluate the efficiency of the methodology. Calculations and simulations were carried out using the free R statistical software. PMID:25786259
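The seat-allocation step behind such a simulation can be made concrete. The sketch below combines a D'Hondt highest-averages allocation (the method used for Brazilian proportional seats) with a crude multinomial resampling of poll shares to estimate the probability that a party wins at least one seat; the function names, resampling scheme, and electorate size are illustrative assumptions, not the paper's actual R implementation.

```python
import random

def dhondt(votes, seats):
    """Allocate seats by the D'Hondt highest-averages method."""
    alloc = [0] * len(votes)
    for _ in range(seats):
        quotients = [v / (alloc[i] + 1) for i, v in enumerate(votes)]
        alloc[quotients.index(max(quotients))] += 1
    return alloc

def prob_representation(shares, n_voters, seats, party, n_sim=300, seed=7):
    """Monte Carlo estimate of P(party wins >= 1 seat): resample vote
    counts from the poll shares, allocate seats, count successes."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sim):
        counts = [0] * len(shares)
        for _ in range(n_voters):           # one multinomial draw per voter
            r, acc = rng.random(), 0.0
            for i, p in enumerate(shares):
                acc += p
                if r < acc:
                    counts[i] += 1
                    break
        if dhondt(counts, seats)[party] > 0:
            hits += 1
    return hits / n_sim
```

A 5%-share party rarely clears the last D'Hondt quotient when only ten seats are at stake, so its estimated probability of representation is near zero, while a 50%-share party's is essentially one.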
Assessment of erosion hazard after recurrence fires with the RUSLE 3D MODEL
NASA Astrophysics Data System (ADS)
Vecín-Arias, Daniel; Palencia, Covadonga; Fernández Raga, María
2016-04-01
The objective of this work is to determine whether there is more soil erosion after the recurrence of several forest fires on an area. To that end, an area of 22,130 ha in the northwest of the Iberian Peninsula was studied because of its high frequency of fires. The assessment of erosion hazard was calculated at several times using Geographic Information Systems (GIS). The area was divided into several plots according to the number of times they had burnt in the past 15 years. Because a detailed study of such a large area is complex and information is not available annually, it was necessary to select the most interesting moments. In August 2012 the most aggressive and extensive fire of the area occurred, so the study focused on the erosion hazard for 2011 and 2014, the dates before and after the 2012 fire for which orthophotos are available. The RUSLE3D model (Revised Universal Soil Loss Equation) was used to calculate maps of erosion losses. This model improves on the traditional USLE (Wischmeier and Smith, 1965) because it accounts for the influence of concavity/convexity (Renard et al., 1997) and improves the estimation of the slope factor LS (Renard et al., 1991). It is also one of the most commonly used models in the literature (Mitasova et al., 1996; Terranova et al., 2009). The tools used are free and accessible: the GIS "gvSIG" (http://www.gvsig.com/es), with metadata taken from the Spatial Data Infrastructure of Spain webpage (IDEE, 2016). The RUSLE model has critics, however; some authors suggest that it serves only for comparisons between areas, not for calculating absolute soil loss, arguing that in field measurements the actual recovered eroded soil can amount to about one-third of the values obtained with the model (Šúri et al., 2002). The study of the area shows that the error detected by the critics could come from
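For reference, the (R)USLE estimate is a simple product of factors, and RUSLE3D differs from classic USLE mainly in the LS term, which replaces slope length with upslope contributing area so that profile concavity/convexity matters. The sketch below uses common parameter choices; the exponents m = 0.4 and n = 1.3 and the normalization constants are typical published values, assumed here rather than taken from this study.

```python
import math

def ls_3d(upslope_area, slope_rad, m=0.4, n=1.3):
    """RUSLE3D-style LS factor (after Mitasova et al., 1996): slope
    length is replaced by upslope contributing area per unit contour
    width (m^2/m), so concave/convex profiles are represented."""
    return (m + 1) * (upslope_area / 22.13) ** m * \
           (math.sin(slope_rad) / 0.0896) ** n

def rusle_soil_loss(R, K, LS, C, P):
    """Annual soil loss A = R * K * LS * C * P (t ha^-1 yr^-1):
    rainfall erosivity R, soil erodibility K, topographic factor LS,
    cover-management C, support practice P."""
    return R * K * LS * C * P
```

At the reference conditions (22.13 m^2/m contributing area, ~5.14° slope) the LS factor reduces to m + 1, matching the unit-plot convention of the original USLE.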
NASA Astrophysics Data System (ADS)
Pagano, Alessandro; Pluchinotta, Irene; Giordano, Raffaele; Vurro, Michele
2016-04-01
Resilience has recently become a key concept and a crucial paradigm in the analysis of the impacts of natural disasters, particularly concerning Lifeline Systems (LS). Traditional risk management approaches require precise knowledge of all potential hazards and a full understanding of the interconnections among different infrastructures, based on past events and trend analysis. Nevertheless, due to the inner complexity of LS, their interconnectedness, and the dynamic context in which they operate (i.e. technology, economy and society), it is difficult to gain a complete comprehension of the processes influencing vulnerabilities and threats. Resilience thinking therefore addresses the complexities of large integrated systems and the uncertainty of future threats, emphasizing the absorbing, adapting and responsive behavior of the system; such approaches focus on the capability of the system to deal with the unforeseeable. The increasing awareness of the role played by LS has led governmental agencies and institutions to develop resilience management strategies. Risk-prone areas, such as cities, are highly dependent on infrastructures providing essential services that support societal functions, safety, economic prosperity and quality of life. Among the LS, drinking water supply is critical for supporting citizens during emergency and recovery, since a disruption could have a range of serious societal impacts. A well-known method for assessing LS resilience is the TOSE approach. The most interesting feature of this approach is its integration of four dimensions: Technical, Organizational, Social and Economic. All of these dimensions contribute to the resilience level of an infrastructural system and should therefore be quantitatively assessed. Several studies have underlined that the lack of integration among the different dimensions composing the resilience concept may contribute to a mismanagement of LS in case of natural disasters
NASA Astrophysics Data System (ADS)
Schneider, D.; Huggel, C.; Cochachin, A.; Guillén, S.; García, J.
2014-01-01
Recent warming has had enormous impacts on glaciers and high-mountain environments. Hazards have changed or new ones have emerged, including those from glacier lakes that form as glaciers retreat. The Andes of Peru have repeatedly been severely impacted by glacier lake outburst floods in the past. An important recent event occurred in the Cordillera Blanca in 2010 when an ice avalanche impacted a glacier lake and triggered an outburst flood that affected the downstream communities and the city of Carhuaz. In this study we evaluate how such complex cascades of mass movement processes can be simulated by coupling different physically based numerical models. We furthermore develop an approach that allows us to elaborate corresponding hazard maps according to existing guidelines for debris flows and based on modelling results and field work.
Modeling Lahar Hazard Zones for Eruption-Generated Lahars from Lassen Peak, California
NASA Astrophysics Data System (ADS)
Robinson, J. E.; Clynne, M. A.
2010-12-01
Lassen Peak, a high-elevation, seasonally snow-covered peak located within Lassen Volcanic National Park, has lahar deposits in several drainages that head on or near the lava dome. This suggests that these drainages are susceptible to future lahars. The majority of the recognized lahar deposits are related to the May 19 and 22, 1915 eruptions of Lassen Peak. These small-volume eruptions generated lahars and floods when an avalanche of snow and hot rock, and a pyroclastic flow moved across the snow-covered upper flanks of the lava dome. Lahars flowed to the north down Lost Creek and Hat Creek. In Lost Creek, the lahars flowed up to 16 km downstream and deposited approximately 8.3 × 10^6 m^3 of sediment. This study uses geologic mapping of the 1915 lahar deposits as a guide for LAHARZ modeling to assist in the assessment of present-day susceptibility for lahars in drainages heading on Lassen Peak. The LAHARZ model requires a Height over Length (H/L) energy cone controlling the initiation point of a lahar. We chose a H/L cone with a slope of 0.3 that intersects the earth’s surface at the break in slope at the base of the volcanic dome. Typically, the snow pack reaches its annual maximum by May. Average and maximum May snow-water content, a depth of water equal to 2.1 m and 3.5 m respectively, were calculated from a local snow gauge. A potential volume for individual 1915 lahars was calculated using the deposit volume, the snow-water contents, and the areas stripped of snow by the avalanche and pyroclastic flow. The calculated individual lahars in Lost Creek ranged in size from 9 × 10^6 m^3 to 18.4 × 10^6 m^3. These volumes modeled in LAHARZ matched the 1915 lahars remarkably well, with the modeled flows ending within 4 km of the mapped deposits. We delineated six drainage basins that head on or near Lassen Peak with the highest potential for lahar hazards: Lost Creek, Hat Creek, Manzanita Creek, Mill Creek, Warner Creek, and Bailey Creek. We calculated the area of each
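Downstream of the H/L energy cone, LAHARZ translates a candidate lahar volume into inundation limits through two statistically calibrated power laws (Iverson, Schilling, and Vallance, 1998). The coefficients below are those published calibration values, quoted from general knowledge rather than from this abstract, so treat them as an assumption.

```python
def laharz_areas(volume_m3):
    """LAHARZ inundation relations (Iverson, Schilling & Vallance, 1998):
    cross-sectional area A and planimetric area B both scale as V^(2/3)."""
    A = 0.05 * volume_m3 ** (2.0 / 3.0)   # valley cross-section, m^2
    B = 200.0 * volume_m3 ** (2.0 / 3.0)  # planimetric extent, m^2
    return A, B
```

For the 9 × 10^6 m^3 lower-bound volume above, this gives a planimetric inundation area on the order of 8-9 km^2; the fixed 4000:1 ratio of B to A is what lets the model fill valley cross-sections downstream until the planimetric budget is exhausted.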
Modeling and forecasting tephra hazards at Redoubt Volcano, Alaska, during 2009 unrest and eruption
NASA Astrophysics Data System (ADS)
Mastin, L. G.; Denlinger, R. P.; Wallace, K. L.; Schaefer, J. R.
2009-12-01
In late 2008, Redoubt Volcano, on the west coast of Alaska’s Cook Inlet, began a period of unrest that culminated in more than 19 small tephra-producing events between March 19 and April 4, 2009, followed by growth of a lava dome whose volume now exceeds 70 million cubic meters. The explosive events lasted from <1 to 31 minutes, sent tephra columns to heights of 19 km asl, and emitted dense-rock (DRE) tephra volumes up to several million cubic meters. Tephra fall affected transportation and infrastructure throughout Cook Inlet, including the Anchorage metropolitan area. The months of unrest that preceded the first explosive event allowed us to develop tools to forecast tephra hazards. As described in an accompanying abstract, colleagues at the University of Pisa produced automated, daily tephra-fall forecast maps using the 3-D VOL-CALPUFF model with input scenarios that represented likely event sizes and durations. Tephra-fall forecast maps were also generated every six hours for hypothetical events of 10 × 10^6 m^3 volume DRE using the 2-D model ASHFALL, and relationships between hypothetical plume height and eruption rate were evaluated four times daily under then-current atmospheric conditions using the program PLUMERIA. Eruptive deposits were mapped and isomass contours constructed for the two largest events, March 24 (0340-0355Z) and April 4 (1358-1429Z), which produced radar-determined plume heights of 18.3 and 15.2 km asl (~15.6 and 12.5 km above the vent), and tephra volumes (DRE) of 6.3 × 10^6 and 3.1 × 10^6 m^3, respectively. For the volumetric eruption rates calculated from mapped erupted volume and seismic duration (V = 6.2 × 10^3 and 1.7 × 10^3 m^3/s DRE), measured plume heights H above the vent fall within 10% of the empirical best-fit curve H = 1.67V^0.259 published in the book Volcanic Plumes by Sparks et al. (1997, eq. 5.1). The plume heights are slightly higher than (but still within 13% of) the 14.6 and 11.1 km predicted by PLUMERIA under the existing atmospheric conditions
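The Sparks et al. (1997) best-fit relation quoted above is easy to evaluate; a minimal sketch, with units as in the abstract (V in m^3/s DRE, H in km above the vent):

```python
def plume_height_km(v_dre_m3s):
    """Empirical best fit H = 1.67 * V^0.259 of Sparks et al. (1997,
    eq. 5.1): H in km above the vent, V in m^3/s dense-rock equivalent."""
    return 1.67 * v_dre_m3s ** 0.259
```

For the two mapped events (V = 6.2 × 10^3 and 1.7 × 10^3 m^3/s DRE) this evaluates to roughly 16.0 and 11.5 km above the vent, consistent with the stated agreement to within about 10% of the radar-derived heights.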
Stanley, Dal; Villaseñor, Antonio; Benz, Harley
1999-01-01
The Cascadia subduction zone is extremely complex in the western Washington region, involving local deformation of the subducting Juan de Fuca plate and complicated block structures in the crust. It has been postulated that the Cascadia subduction zone could be the source for a large thrust earthquake, possibly as large as M9.0. Large intraplate earthquakes from within the subducting Juan de Fuca plate beneath the Puget Sound region have accounted for most of the energy release in this century and future such large earthquakes are expected. Added to these possible hazards is clear evidence for strong crustal deformation events in the Puget Sound region near faults such as the Seattle fault, which passes through the southern Seattle metropolitan area. In order to understand the nature of these individual earthquake sources and their possible interrelationship, we have conducted an extensive seismotectonic study of the region. We have employed P-wave velocity models developed using local earthquake tomography as a key tool in this research. Other information utilized includes geological, paleoseismic, gravity, magnetic, magnetotelluric, deformation, seismicity, focal mechanism and geodetic data. Neotectonic concepts were tested and augmented through use of anelastic (creep) deformation models based on thin-plate, finite-element techniques developed by Peter Bird, UCLA. These programs model anelastic strain rate, stress, and velocity fields for given rheological parameters, variable crust and lithosphere thicknesses, heat flow, and elevation. Known faults in western Washington and the main Cascadia subduction thrust were incorporated in the modeling process. Significant results from the velocity models include delineation of a previously studied arch in the subducting Juan de Fuca plate. The axis of the arch is oriented in the direction of current subduction and asymmetrically deformed due to the effects of a northern buttress mapped in the velocity models. This
Modeling Information Accumulation in Psychological Tests Using Item Response Times
ERIC Educational Resources Information Center
Ranger, Jochen; Kuhn, Jörg-Tobias
2015-01-01
In this article, a latent trait model is proposed for the response times in psychological tests. The latent trait model is based on the linear transformation model and subsumes popular models from survival analysis, like the proportional hazards model and the proportional odds model. Core of the model is the assumption that an unspecified monotone…
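Since the latent trait model is said to subsume the proportional hazards model, it may help to recall what that assumption means: covariates multiply an unspecified baseline hazard, so the hazard ratio between two test takers is constant over time. A minimal sketch; the Weibull-type baseline and the coefficient value are illustrative assumptions, not quantities from the article.

```python
import math

def hazard(t, x, beta, baseline):
    """Proportional hazards: h(t | x) = h0(t) * exp(beta . x), so a
    covariate vector rescales the baseline hazard by a constant factor."""
    return baseline(t) * math.exp(sum(b * xi for b, xi in zip(beta, x)))

# Illustrative (assumed) baseline hazard, increasing in time
h0 = lambda t: 0.1 * t

# The hazard ratio between two respondents does not depend on t
hr = hazard(2.0, [1.0], [0.5], h0) / hazard(2.0, [0.0], [0.5], h0)
```

Here the ratio equals exp(0.5) at every t, which is exactly the proportionality that the linear transformation model relaxes by allowing other link functions (e.g. the proportional odds model).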
Tumeo, M.A.
1993-02-01
Because of the recognized risks associated with the transport of hazardous substances (both useable materials and wastes), federal, state, and local agencies are under growing pressure to minimize the risk of spills and accidental releases of dangerous substances and, in the event of a spill, to protect the public from adverse effects. To meet this need, a preliminary computer model is presented for use in incident management in urban and rural traffic operations. The model is in the Macintosh HyperCard format and allows the user to assess the risks associated with transport of hazardous substances, including the potential of an accidental release and risks to the public and the environment should a release occur. As part of the long-term model development strategy, the preliminary model was tested at a workshop of Alaska Department of Transportation personnel and researchers from the University of Alaska Fairbanks Transportation Research Center. During the workshop, participants were asked to critique the model structure, user interface, and utility. Specific suggestions were solicited on how the model could be strengthened and improved. An overall rating sheet was filled out by each participant to provide a quasi-quantitative assessment of the model as a management tool. The preliminary model will serve as the core for the development and refinement of a more sophisticated risk analysis model.
NASA Astrophysics Data System (ADS)
Khare, S.; Bonazzi, A.; Mitas, C.; Jewson, S.
2014-08-01
In this paper, we present a novel framework for modelling clustering in natural hazard risk models. The framework we present is founded on physical principles, where large-scale oscillations in the physical system are the source of non-Poissonian (clustered) frequency behaviour. We focus on a particular mathematical implementation of the "Super-Cluster" methodology that we introduce. This mathematical framework has a number of advantages, including tunability to the problem at hand, as well as the ability to model cross-event correlation. Using European windstorm data as an example, we show that historical data exhibit strong evidence of clustering. We then develop Poisson and clustered simulation models for the data, demonstrating clearly the superiority of the clustered model, which we have implemented using the Poisson-Mixtures approach. We then discuss the implications of including clustering in models of prices on catXL contracts, one of the most commonly used mechanisms for transferring risk between primary insurers and reinsurers. This paper provides a number of new insights into the impact clustering has on modelled catXL contract prices. The simple model presented in this paper provides an insightful starting point for practitioners of natural hazard risk modelling.
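One simple realization of the Poisson-mixtures idea mentioned above: draw a mean-one gamma factor for each year and use it to modulate the Poisson rate, which yields overdispersed (clustered) annual counts. The rate, gamma shape, and sample sizes below are illustrative assumptions, not the paper's calibration to windstorm data.

```python
import numpy as np

def simulate_counts(n_years, base_rate, clustered, shape=2.0, seed=0):
    """Annual event counts: pure Poisson, or a Poisson mixture in which
    a gamma-distributed factor (mean 1) modulates the rate from year to
    year, producing overdispersed ('clustered') counts."""
    rng = np.random.default_rng(seed)
    if clustered:
        factors = rng.gamma(shape, 1.0 / shape, size=n_years)  # mean-1 factor
        return rng.poisson(base_rate * factors)
    return rng.poisson(base_rate, size=n_years)

poisson_counts = simulate_counts(20000, 5.0, clustered=False)
clustered_counts = simulate_counts(20000, 5.0, clustered=True)
```

The variance-to-mean (dispersion) ratio is ~1 for the pure Poisson counts but exceeds it for the mixture (theoretically 1 + rate/shape), which is the signature of clustering such a framework is built to reproduce.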
Wang, Junjie; He, Jiangtao; Chen, Honghan
2012-08-15
Groundwater contamination risk assessment is an effective tool for groundwater management. Most existing risk assessment methods only consider the basic contamination process based upon evaluations of hazards and aquifer vulnerability. In view of groundwater exploitation potentiality, including the value of contamination-threatened groundwater could provide relatively objective and targeted results to aid in decision making. This study describes a groundwater contamination risk assessment method that integrates hazards, intrinsic vulnerability and groundwater value. Hazard harmfulness was evaluated by quantifying contaminant properties and infiltrating contaminant load, intrinsic aquifer vulnerability was evaluated using a modified DRASTIC model, and groundwater value was evaluated based on groundwater quality and aquifer storage. Two groundwater contamination risk maps were produced by combining the above factors: a basic risk map and a value-weighted risk map. The basic risk map was produced by overlaying the hazard map and the intrinsic vulnerability map. The value-weighted risk map was produced by overlaying the basic risk map and the groundwater value map. Validation was carried out using contaminant distributions and site investigation. Using Beijing Plain, China, as an example, thematic maps of the three factors and the two risks were generated. The thematic maps suggested that landfills, gas stations and oil depots, and industrial areas were the most harmful potential contamination sources. The western and northern parts of the plain were the most vulnerable areas and had the highest groundwater value. Additionally, both the basic and value-weighted risk classes in the western and northern parts of the plain were the highest, indicating that these regions deserve priority of concern. Thematic maps should be updated regularly because of the dynamic characteristics of hazards. Subjectivity and validation means in assessing the
Saving Money Using Proportional Reasoning
ERIC Educational Resources Information Center
de la Cruz, Jessica A.; Garney, Sandra
2016-01-01
It is beneficial for students to discover intuitive strategies, as opposed to the teacher presenting strategies to them. Certain proportional reasoning tasks are more likely to elicit intuitive strategies than other tasks. The strategies that students are apt to use when approaching a task, as well as the likelihood of a student's success or…
Social Justice and Proportional Reasoning
ERIC Educational Resources Information Center
Simic-Muller, Ksenija
2015-01-01
Ratio and proportional reasoning tasks abound that have connections to real-world situations. Examples in this article demonstrate how textbook tasks can easily be transformed into authentic real-world problems that shed light on issues of equity and fairness, such as population growth and crime rates. A few ideas are presented on how teachers can…
Understanding Proportional Reasoning for Teaching
ERIC Educational Resources Information Center
Kastberg, Signe E.; D'Ambrosio, Beatriz; Lynch-Davis, Kathleen
2012-01-01
Proportional reasoning is an important cornerstone in children's mathematical development. This sort of reasoning has been shown to develop across the early years of schooling (ages 8 to 10) through the middle years (ages 11-14). In the early years, children tend to use additive reasoning to generate solutions to problems, while later comparisons…
Proportional Reasoning with a Pyramid
ERIC Educational Resources Information Center
Mamolo, Ami; Sinclair, Margaret; Whiteley, Walter J.
2011-01-01
Proportional reasoning pops up in math class in a variety of places, such as while making scaled drawings; finding equivalent fractions; converting units of measurement; comparing speeds, prices, and rates; and comparing lengths, areas, and volume. Students need to be exposed to a variety of representations to develop a sound understanding of this…
Characterizing the danger of in-channel river hazards using LIDAR and a 2D hydrodynamic model
NASA Astrophysics Data System (ADS)
Strom, M. A.; Pasternack, G. B.
2014-12-01
Despite many injuries and deaths each year worldwide, no analytically rigorous attempt exists to characterize and quantify the dangers to boaters, swimmers, fishermen, and other river enthusiasts. While designed by expert boaters, the International Scale of River Difficulty provides a whitewater classification that uses qualitative descriptions and subjective scoring. The purpose of this study was to develop an objective characterization of in-channel hazard dangers across spatial scales from a single boulder to an entire river segment for application over a wide range of discharges and use in natural hazard assessment and mitigation, recreational boating safety, and river science. A process-based conceptualization of river hazards was developed, and algorithms were programmed in R to quantify the associated dangers. Danger indicators included the passage proximity and reaction time posed to boats and swimmers in a river by three hazards: emergent rocks, submerged rocks, and hydraulic jumps or holes. The testbed river was a 12.2 km mixed bedrock-alluvial section of the upper South Yuba River between Lake Spaulding and Washington, CA in the Sierra Mountains. The segment has a mean slope of 1.63%, with 8 reaches varying from 1.07% to 3.30% slope and several waterfalls. Data inputs to the hazard analysis included sub-decimeter aerial color imagery, airborne LIDAR of the river corridor, bathymetric data, flow inputs, and a stage-discharge relation for the end of the river segment. A key derived data product was the location and configuration of boulders and boulder clusters as these were potential hazards. Two-dimensional hydrodynamic modeling was used to obtain the meter-scale spatial pattern of depth and velocity at discharges ranging from baseflow to modest flood stages. Results were produced for four discharges and included the meter-scale spatial pattern of the passage proximity and reaction time dangers for each of the three hazards investigated. These results
NASA Astrophysics Data System (ADS)
Claessens, L.; Knapen, A.; Kitutu, M. G.; Poesen, J.; Deckers, J. A.
2007-10-01
In this study, the LAPSUS-LS landslide model, together with a digital terrain analysis of topographic attributes, is used as a spatially explicit tool to simulate recent shallow landslides in Manjiya County on the Ugandan slopes of Mount Elgon. Manjiya County is a densely populated mountainous area where landslides have been reported since the beginning of the twentieth century. To better understand the causal factors of landsliding, 81 recent landslides have been mapped and investigated. Statistical analysis showed that steep concave slopes, high rainfall, soil properties and layering, as well as human interference, were the main factors responsible for landslides in the study area. LAPSUS-LS is used to construct a landslide hazard map and to confirm or reject the main factors for landsliding in the area. The model is specifically designed for the analysis of shallow landslide hazard by combining a steady-state hydrologic model with a deterministic infinite-slope stability model. In addition, soil redistribution algorithms can be applied, whereby erosion and sedimentation by landsliding can be visualized and quantified for a threshold critical rainfall scenario. The model is tested in the Manjiya study area for its ability to delineate zones that are prone to shallow landsliding in general and to group the recent landslides into a specific landslide hazard category. The digital terrain analysis confirms most of the causal topographic factors for shallow landsliding in the study area. In general, shallow landslides occur at a relatively large distance from the water divide, on the transition between steep concave and more gentle convex slope positions, which points to concentration of (sub)surface flow as the main hydrological triggering mechanism. In addition, LAPSUS-LS is capable of grouping the recent shallow landslides into a specific landslide hazard class (critical rainfall values of 0.03-0.05 m day⁻¹). By constructing a landslide hazard
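The deterministic infinite-slope stability calculation at the core of models such as LAPSUS-LS can be sketched as follows. This is a minimal illustration with hypothetical parameter values, not the model's actual implementation:

```python
import math

def factor_of_safety(slope_deg, soil_depth, cohesion, friction_angle_deg,
                     wetness, unit_weight=18.0, water_unit_weight=9.81):
    """Infinite-slope factor of safety (FS < 1 suggests instability).

    slope_deg          : local slope angle (degrees)
    soil_depth         : vertical soil depth z (m)
    cohesion           : effective cohesion c' (kPa)
    friction_angle_deg : effective friction angle phi' (degrees)
    wetness            : relative saturation m = h_w / z, in [0, 1]
    unit_weight        : soil unit weight gamma (kN/m^3)
    water_unit_weight  : unit weight of water gamma_w (kN/m^3)
    """
    theta = math.radians(slope_deg)
    phi = math.radians(friction_angle_deg)
    # Driving shear stress on the failure plane
    shear_stress = unit_weight * soil_depth * math.sin(theta) * math.cos(theta)
    # Effective normal stress, reduced by pore pressure from saturation
    effective_normal = (unit_weight - wetness * water_unit_weight) \
        * soil_depth * math.cos(theta) ** 2
    return (cohesion + effective_normal * math.tan(phi)) / shear_stress

print(factor_of_safety(25, 1.5, 5.0, 30.0, wetness=0.0))  # dry, gentler slope
print(factor_of_safety(40, 1.5, 5.0, 30.0, wetness=1.0))  # saturated, steep slope
```

With these illustrative values, the dry 25° slope comes out stable (FS > 1) while the saturated 40° slope does not, mirroring the role of subsurface flow concentration as a trigger.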
NASA Astrophysics Data System (ADS)
Kourgialas, N. N.; Karatzas, G. P.
2013-10-01
A modelling system for the estimation of flash flood flow characteristics and sediment transport is developed in this study. The system comprises three components: (a) a modelling framework based on the hydrological model HSPF, (b) the hydrodynamic module of the hydraulic model MIKE 11 (quasi-2-D), and (c) the advection-dispersion module of MIKE 11 as a sediment transport model. An important parameter in hydraulic modelling is the Manning's coefficient, an indicator of the channel resistance which is directly dependent on riparian vegetation changes. The effect of riparian vegetation on flood propagation parameters such as water depth (inundation), discharge, flow velocity, and sediment transport load is investigated in this study. Based on the obtained results, when the weed-cutting percentage is increased, the flood wave depth decreases while flow discharge, velocity and sediment transport load increase. The proposed modelling system is used to evaluate and illustrate the flood hazard for different riparian vegetation cutting scenarios. For the estimation of flood hazard, a combination of the flood propagation characteristics of water depth, flow velocity and sediment load was used. Next, an optimal selection of the most appropriate agricultural cutting practices of riparian vegetation was performed. Ultimately, the model results obtained for different agricultural cutting practice scenarios can be employed to create flood protection measures for flood-prone areas. The proposed methodology was applied to the downstream part of a small Mediterranean river basin in Crete, Greece.
NASA Astrophysics Data System (ADS)
Kourgialas, N. N.; Karatzas, G. P.
2014-03-01
A modeling system for the estimation of flash flood flow velocity and sediment transport is developed in this study. The system comprises three components: (a) a modeling framework based on the hydrological model HSPF, (b) the hydrodynamic module of the hydraulic model MIKE 11 (quasi-2-D), and (c) the advection-dispersion module of MIKE 11 as a sediment transport model. An important parameter in hydraulic modeling is the Manning's coefficient, an indicator of the channel resistance which is directly dependent on riparian vegetation changes. Riparian vegetation's effect on flood propagation parameters such as water depth (inundation), discharge, flow velocity, and sediment transport load is investigated in this study. Based on the obtained results, when the weed-cutting percentage is increased, the flood wave depth decreases while flow discharge, velocity and sediment transport load increase. The proposed modeling system is used to evaluate and illustrate the flood hazard for different riparian vegetation cutting scenarios. For the estimation of flood hazard, a combination of the flood propagation characteristics of water depth, flow velocity and sediment load was used. Next, a well-balanced selection of the most appropriate agricultural cutting practices of riparian vegetation was performed. Ultimately, the model results obtained for different agricultural cutting practice scenarios can be employed to create flood protection measures for flood-prone areas. The proposed methodology was applied to the downstream part of a small Mediterranean river basin in Crete, Greece.
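The role of the Manning coefficient described above can be illustrated with a minimal sketch: lowering n (e.g., by cutting riparian vegetation) raises velocity and discharge capacity for the same channel geometry. The channel dimensions and n values below are hypothetical, not taken from the study:

```python
def manning_velocity(n, hydraulic_radius, slope):
    """Mean flow velocity (m/s) from Manning's equation in SI units:
    V = (1/n) * R^(2/3) * S^(1/2)."""
    return (1.0 / n) * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

# Hypothetical rectangular channel: width 8 m, depth 1.2 m, bed slope 0.5 %.
width, depth, slope = 8.0, 1.2, 0.005
area = width * depth
radius = area / (width + 2 * depth)      # hydraulic radius A / P
for n in (0.10, 0.05, 0.035):            # dense -> partially cut -> cut vegetation
    v = manning_velocity(n, radius, slope)
    print(f"n = {n:.3f}: V = {v:.2f} m/s, Q = {area * v:.1f} m^3/s")
```

The output shows velocity and discharge rising as n falls, which is the mechanism behind the depth decrease and discharge increase reported for higher weed-cutting percentages.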
Artigas, Francisco; Bosits, Stephanie; Kojak, Saleh; Elefante, Dominador; Pechmann, Ildiko
2016-10-01
The accurate forecast of the Hurricane Sandy sea surge was the result of integrating the most sophisticated environmental monitoring technology available. This stands in contrast to the limited information and technology that exists at the community level to translate these forecasts into flood hazard levels on the ground, at scales that are meaningful to property owners. Appropriately scaled maps with high levels of certainty can be effectively used to convey exposure to flood hazard at the community level. This paper explores the most basic analysis and data required to generate a relatively accurate flood hazard map to convey inundation risk due to sea surge. A Boolean overlay analysis of four input layers (elevation and slope derived from LiDAR data, and distances from streams and catch basins derived from aerial photography and field reconnaissance) was used to create a spatial model that explained 55% of the extent and depth of the flood during Hurricane Sandy. When a ponding layer was added to the previous model to account for depressions that would fill and spill over to nearby areas, the new model explained almost 70% of the extent and depth of the flood. The study concludes that fairly accurate maps can be created with readily available information and that it is possible to infer a great deal about risk of inundation at the property level from flood hazard maps. The study goes on to conclude that local communities are encouraged to prepare for disasters but, because of the existing Federal emergency management framework, in reality have very little incentive to do so.
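A Boolean overlay of the kind described can be sketched as follows; the layers are reduced to toy one-dimensional lists and the thresholds are hypothetical, not the study's calibrated values:

```python
# Toy 1-D "rasters"; real use would be per-cell grids from LiDAR/GIS layers.
elevation   = [1.0, 2.5, 0.8, 4.0, 2.2]   # m above datum (hypothetical)
slope_pct   = [0.5, 3.0, 0.2, 6.0, 1.0]   # percent slope
dist_stream = [10,  80,  5,  200, 40]     # m to nearest stream
dist_basin  = [15,  60,  8,  150, 30]     # m to nearest catch basin

def flood_prone(i, max_elev=2.0, max_slope=1.5, max_dstream=50, max_dbasin=50):
    """Boolean AND overlay: a cell is flagged only if every criterion holds."""
    return (elevation[i] <= max_elev and slope_pct[i] <= max_slope
            and dist_stream[i] <= max_dstream and dist_basin[i] <= max_dbasin)

hazard_mask = [flood_prone(i) for i in range(len(elevation))]
print(hazard_mask)   # only the low, flat cells near both water features are flagged
```

The study's ponding refinement would add a fifth layer marking depressions that fill and spill over, OR-ed into the mask.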
NASA Astrophysics Data System (ADS)
Keith, A. M.; Weigel, A. M.; Rivas, J.
2014-12-01
Copahue is a stratovolcano located along the rim of the Caviahue Caldera near the Chile-Argentina border in the Andes Mountain Range. Several small towns are located in proximity to the volcano, the two largest being Baños Copahue and Caviahue. During its eruptive history, it has produced numerous lava flows, pyroclastic flows, ash deposits, and lahars. This isolated region has steep topography and little vegetation and is poorly monitored. The need to model volcanic hazard risk has been reinforced by recent volcanic activity that intermittently released several ash plumes from December 2012 through May 2013. Exposure to volcanic ash is currently the main threat to the surrounding populations as the volcano becomes more active. The goal of this project was to study Copahue and determine the areas with the highest potential of being affected in the event of an eruption. Remote sensing techniques were used to examine and identify volcanic activity and areas vulnerable to volcanic hazards, including volcanic ash, SO2 gas, lava flow, pyroclastic density currents and lahars. Landsat 7 Enhanced Thematic Mapper Plus (ETM+), Landsat 8 Operational Land Imager (OLI), EO-1 Advanced Land Imager (ALI), Terra Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Shuttle Radar Topography Mission (SRTM), ISS ISERV Pathfinder, and Aura Ozone Monitoring Instrument (OMI) products were used to analyze volcanic hazards. These datasets were used to create a historic lava flow map of the Copahue volcano by identifying historic lava flows, tephra, and lahars both visually and spectrally. Additionally, a volcanic risk and hazard map for the surrounding area was created by modeling the possible extent of ash fallout, lahars, lava flow, and pyroclastic density currents (PDC) for future eruptions. These model results were then used to identify areas that should be prioritized for disaster relief and evacuation orders.
A Model (Formula) for Deriving A Hazard Index of Rail-Highway Grade Crossings.
ERIC Educational Resources Information Center
Coburn, James Minton
The purpose of this research was to compile data for use as related information in the education of drivers, and to derive a formula for computing a hazard index for rail-highway intersections. Data for the study were compiled from: (1) all crossings on which field data were collected, (2) reports of 642 accidents, and (3) data collected from…
Hazard Zonation at Mount Adams, Washington based on Edifice and Flank Stability Modeling
NASA Astrophysics Data System (ADS)
Bowman, S. D.; Watters, R. J.
2002-12-01
Collapse of the edifice [summit] and flanks of volcanoes is common worldwide, including the Cascade Range. Many of these failures have transformed into devastating debris flows that may travel hundreds of miles from their source area and have killed or injured hundreds of thousands of people. Despite the danger posed by these failures and the incipient debris flows, limited geotechnical data exists to quantify hazards from edifice and flank failure. Recent field work and investigation at Mount Adams, Washington focused on developing and refining a methodology for characterizing volcanic stability for geologic hazard analysis. This methodology may be applied at other volcanoes worldwide. Geotechnical data, including discontinuity and strength characteristics, Rock Mass Rating (RMR), point load index, direct shear, unconfined compression, and triaxial data were used to identify sectors based upon common geotechnical and geologic characteristics. The geotechnical information collected at Mount Adams adds to the limited data available worldwide and provides general strength ranges for use in initial stability studies at other volcanoes. In addition, a new point load index device was developed for use at high elevation and remote locations. Stability of each identified sector was analyzed using limit equilibrium methods, based upon collected geotechnical and geologic data. Three previous failures were backanalysed to determine strength characteristics at the time of failure. Areas of immediate instability include The Castle and the Avalanche Glacier Headwall. Backanalysis of the Trout Lake Mudflow, which formed the Avalanche Glacier Headwall, suggests a seismic or eruption triggering mechanism. Stability analysis resulted in a failure hazard map quantifying the hazard in each sector from slope failure. This hazard map in combination with other data may be used by agencies and organizations involved in land-use planning in the Mount Adams area to protect lives and
ERIC Educational Resources Information Center
De Bock, Dirk; Van Dooren, Wim; Verschaffel, Lieven
2015-01-01
We investigated students' understanding of proportional, inverse proportional, and affine functions and the way this understanding is affected by various external representations. In a first study, we focus on students' ability to model textual descriptions of situations with different kinds of representations of proportional, inverse…
NASA Astrophysics Data System (ADS)
Zarekarizi, M.; Moradkhani, H.
2015-12-01
Extreme events are known to be affected by climate change, which influences hydrologic simulations for which stationarity is usually a core assumption. Studies have argued that this assumption can lead to large bias in model estimates and, consequently, to higher flood hazard. Motivated by the importance of nonstationarity, we determined how the exceedance probabilities have changed over time in Johnson Creek River, Oregon. This could help estimate the probability of failure of a structure that was primarily designed, according to common practice, to resist less likely floods. We therefore built a climate-informed Bayesian hierarchical model in which nonstationarity was considered explicitly. Principal component analysis shows that the North Atlantic Oscillation (NAO), Western Pacific Index (WPI) and Eastern Asia (EA) indices most strongly affect streamflow in this river. We modeled flood extremes using the peaks over threshold (POT) method rather than the conventional annual maximum flood (AMF) approach, mainly because POT allows the model to draw on more of the data. We used available threshold selection methods to select a suitable threshold for the study area. To account for nonstationarity, model parameters vary through time with the climate indices. We developed several model scenarios and chose the one that best explained the variation in the data according to performance measures. We also estimated return periods under nonstationary conditions. Results show that ignoring nonstationarity could understate the flood hazard by up to a factor of four, increasing the probability of an in-stream structure being overtopped.
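A minimal sketch of the peaks-over-threshold extraction that underlies this kind of analysis, using a naive run-based declustering rule (the paper's actual threshold selection methods are more involved):

```python
def peaks_over_threshold(series, threshold, min_separation=3):
    """Extract approximately independent exceedances: keep the largest value
    in each run of consecutive values above the threshold, and require at
    least `min_separation` steps between retained peaks.

    Returns a list of (index, value) pairs."""
    peaks = []
    cluster_max, cluster_idx = None, None
    for i, v in enumerate(series):
        if v > threshold:
            if cluster_max is None or v > cluster_max:
                cluster_max, cluster_idx = v, i
        elif cluster_max is not None:
            # Cluster ended: keep its peak if far enough from the last one.
            if not peaks or cluster_idx - peaks[-1][0] >= min_separation:
                peaks.append((cluster_idx, cluster_max))
            cluster_max, cluster_idx = None, None
    if cluster_max is not None:          # series ended inside a cluster
        peaks.append((cluster_idx, cluster_max))
    return peaks

# Hypothetical daily flows; threshold 20 yields three independent peaks.
flows = [5, 12, 30, 28, 6, 4, 25, 40, 22, 3, 2, 26]
print(peaks_over_threshold(flows, threshold=20))
```

The retained excesses (peak minus threshold) are what would then be fitted with a generalized Pareto or similar extreme-value model.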
Johnson, Branden B; Hallman, William K; Cuite, Cara L
2015-03-01
Perceptions of institutions that manage hazards are important because they can affect how the public responds to hazard events. Antecedents of trust judgments have received far more attention than antecedents of attributions of responsibility for hazard events. We build upon a model of retrospective attribution of responsibility to individuals to examine these relationships for five classes of institutions that bear responsibility for food safety: producers (e.g., farmers), processors (e.g., packaging firms), watchdogs (e.g., government agencies), sellers (e.g., supermarkets), and preparers (e.g., restaurants). A nationally representative sample of 1,200 American adults completed an Internet-based survey in which a hypothetical scenario involving contamination of diverse foods with Salmonella served as the stimulus event. Perceived competence and good intentions of an institution moderately decreased attributions of responsibility. A stronger factor was whether an institution was deemed (potentially) aware of the contamination and free to act to prevent or mitigate it: responsibility was rated higher the more aware and free the institution was judged to be. This initial model for attributions of responsibility to impersonal institutions (as opposed to individual responsibility) merits further development.
Hazard function theory for nonstationary natural hazards
NASA Astrophysics Data System (ADS)
Read, Laura K.; Vogel, Richard M.
2016-04-01
Impact from natural hazards is a shared global problem that causes tremendous loss of life and property, economic cost, and damage to the environment. Increasingly, many natural processes show evidence of nonstationary behavior including wind speeds, landslides, wildfires, precipitation, streamflow, sea levels, and earthquakes. Traditional probabilistic analysis of natural hazards based on peaks over threshold (POT) generally assumes stationarity in the magnitudes and arrivals of events, i.e., that the probability of exceedance of some critical event is constant through time. Given increasing evidence of trends in natural hazards, new methods are needed to characterize their probabilistic behavior. The well-developed field of hazard function analysis (HFA) is ideally suited to this problem because its primary goal is to describe changes in the exceedance probability of an event over time. HFA is widely used in medicine, manufacturing, actuarial statistics, reliability engineering, economics, and elsewhere. HFA provides a rich theory to relate the natural hazard event series (X) with its failure time series (T), enabling computation of corresponding average return periods, risk, and reliabilities associated with nonstationary event series. This work investigates the suitability of HFA to characterize nonstationary natural hazards whose POT magnitudes are assumed to follow the widely applied generalized Pareto model. We derive the hazard function for this case and demonstrate how metrics such as reliability and average return period are impacted by nonstationarity and discuss the implications for planning and design. Our theoretical analysis linking hazard random variable X with corresponding failure time series T should have application to a wide class of natural hazards with opportunities for future extensions.
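For the generalized Pareto POT model named above, the survival and hazard functions have closed forms, S(x) = (1 + ξ(x − u)/σ)^(−1/ξ) and h(x) = 1/(σ + ξ(x − u)), which make exceedance-probability and return-period calculations straightforward. A sketch with hypothetical flood parameters (not values from the paper):

```python
import math

def gp_survival(x, threshold, scale, shape):
    """Survival function S(x) of the generalized Pareto model for POT data."""
    z = (x - threshold) / scale
    if shape == 0.0:
        return math.exp(-z)                      # exponential limit
    return max(0.0, 1.0 + shape * z) ** (-1.0 / shape)

def gp_hazard(x, threshold, scale, shape):
    """Hazard function h(x) = f(x) / S(x); for the GP this reduces to
    1 / (scale + shape * (x - threshold))."""
    return 1.0 / (scale + shape * (x - threshold))

# Hypothetical POT flood model: threshold 100 m^3/s, scale 20, shape 0.1,
# with on average 3 threshold exceedances per year (rate lam).
lam, u, sigma, xi = 3.0, 100.0, 20.0, 0.1
x = 180.0
p_exceed = gp_survival(x, u, sigma, xi)
print(f"annual return period of {x} m^3/s: {1.0 / (lam * p_exceed):.1f} years")
```

Under nonstationarity, u, sigma, or lam would themselves be functions of time, so the return period of a fixed design level drifts from year to year, which is the behavior the paper's hazard-function framework is built to describe.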
Hazard function theory for nonstationary natural hazards
NASA Astrophysics Data System (ADS)
Read, L. K.; Vogel, R. M.
2015-11-01
Impact from natural hazards is a shared global problem that causes tremendous loss of life and property, economic cost, and damage to the environment. Increasingly, many natural processes show evidence of nonstationary behavior including wind speeds, landslides, wildfires, precipitation, streamflow, sea levels, and earthquakes. Traditional probabilistic analysis of natural hazards based on peaks over threshold (POT) generally assumes stationarity in the magnitudes and arrivals of events, i.e. that the probability of exceedance of some critical event is constant through time. Given increasing evidence of trends in natural hazards, new methods are needed to characterize their probabilistic behavior. The well-developed field of hazard function analysis (HFA) is ideally suited to this problem because its primary goal is to describe changes in the exceedance probability of an event over time. HFA is widely used in medicine, manufacturing, actuarial statistics, reliability engineering, economics, and elsewhere. HFA provides a rich theory to relate the natural hazard event series (X) with its failure time series (T), enabling computation of corresponding average return periods, risk and reliabilities associated with nonstationary event series. This work investigates the suitability of HFA to characterize nonstationary natural hazards whose POT magnitudes are assumed to follow the widely applied Generalized Pareto (GP) model. We derive the hazard function for this case and demonstrate how metrics such as reliability and average return period are impacted by nonstationarity and discuss the implications for planning and design. Our theoretical analysis linking hazard event series X, with corresponding failure time series T, should have application to a wide class of natural hazards with rich opportunities for future extensions.
Hazard function theory for nonstationary natural hazards
Read, Laura K.; Vogel, Richard M.
2016-04-11
Impact from natural hazards is a shared global problem that causes tremendous loss of life and property, economic cost, and damage to the environment. Increasingly, many natural processes show evidence of nonstationary behavior including wind speeds, landslides, wildfires, precipitation, streamflow, sea levels, and earthquakes. Traditional probabilistic analysis of natural hazards based on peaks over threshold (POT) generally assumes stationarity in the magnitudes and arrivals of events, i.e., that the probability of exceedance of some critical event is constant through time. Given increasing evidence of trends in natural hazards, new methods are needed to characterize their probabilistic behavior. The well-developed field of hazard function analysis (HFA) is ideally suited to this problem because its primary goal is to describe changes in the exceedance probability of an event over time. HFA is widely used in medicine, manufacturing, actuarial statistics, reliability engineering, economics, and elsewhere. HFA provides a rich theory to relate the natural hazard event series (X) with its failure time series (T), enabling computation of corresponding average return periods, risk, and reliabilities associated with nonstationary event series. This work investigates the suitability of HFA to characterize nonstationary natural hazards whose POT magnitudes are assumed to follow the widely applied generalized Pareto model. We derive the hazard function for this case and demonstrate how metrics such as reliability and average return period are impacted by nonstationarity and discuss the implications for planning and design. As a result, our theoretical analysis linking hazard random variable X with corresponding failure time series T should have application to a wide class of natural hazards with opportunities for future extensions.
Diver, Richard B., Jr.; Ghanbari, Cheryl M.; Ho, Clifford Kuofei
2010-04-01
With growing numbers of concentrating solar power systems being designed and developed, glint and glare from concentrating solar collectors and receivers is receiving increased attention as a potential hazard or distraction for motorists, pilots, and pedestrians. This paper provides analytical methods to evaluate the irradiance originating from specularly and diffusely reflecting sources as a function of distance and characteristics of the source. Sample problems are provided for both specular and diffuse sources, and validation of the models is performed via testing. In addition, a summary of safety metrics is compiled from the literature to evaluate the potential hazards of calculated irradiances from glint and glare. Previous safety metrics have focused on prevention of permanent eye damage (e.g., retinal burn). New metrics used in this paper account for temporary flash blindness, which can occur at irradiance values several orders of magnitude lower than the irradiance values required for irreversible eye damage.
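The inverse-square falloff of irradiance from a small diffuse (Lambertian) reflecting source can be sketched as follows; the formula and parameter values are a simplified illustration, not the paper's full analytical treatment of specular and diffuse sources:

```python
import math

def diffuse_irradiance(dni, reflectivity, area, distance):
    """Irradiance (W/m^2) at distance d from a small Lambertian reflector,
    viewed normal to the surface: E = DNI * rho * A / (pi * d^2).
    A sketch only; glare safety metrics also depend on the subtended
    source size and exposure time, which are not modeled here."""
    return dni * reflectivity * area / (math.pi * distance ** 2)

# Hypothetical mirror facet: 1000 W/m^2 sun, 90 % reflective, 10 m^2 area.
for d in (10.0, 50.0, 200.0):
    e = diffuse_irradiance(1000.0, 0.9, 10.0, d)
    print(f"d = {d:5.0f} m: E = {e:.3f} W/m^2")
```

Doubling the distance quarters the irradiance, which is why flash-blindness thresholds, reached at irradiances far below those causing retinal burn, dominate the hazard footprint at long range.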
Pelletier, J.D.; Mayer, L.; Pearthree, P.A.; House, P.K.; Demsey, K.A.; Klawon, J.K.; Vincent, K.R.
2005-01-01
Millions of people in the western United States live near the dynamic, distributary channel networks of alluvial fans where flood behavior is complex and poorly constrained. Here we test a new comprehensive approach to alluvial-fan flood hazard assessment that uses four complementary methods: two-dimensional raster-based hydraulic modeling, satellite-image change detection, field-based mapping of recent flood inundation, and surficial geologic mapping. Each of these methods provides spatial detail lacking in the standard method, and each provides critical information for a comprehensive assessment. Our numerical model simultaneously solves the continuity equation and Manning's equation (Chow, 1959) using an implicit numerical method. It provides a robust numerical tool for predicting flood flows using the large, high-resolution Digital Elevation Models (DEMs) necessary to resolve the numerous small channels on the typical alluvial fan. Inundation extents and flow depths of historic floods can be reconstructed with the numerical model and validated against field- and satellite-based flood maps. A probabilistic flood hazard map can also be constructed by modeling multiple flood events with a range of specified discharges. This map can be used in conjunction with a surficial geologic map to further refine floodplain delineation on fans. To test the accuracy of the numerical model, we compared model predictions of flood inundation and flow depths against field- and satellite-based flood maps for two recent extreme events on the southern Tortolita and Harquahala piedmonts in Arizona. Model predictions match the field- and satellite-based maps closely. Probabilistic flood hazard maps based on the 10 yr, 100 yr, and maximum floods were also constructed for the study areas using stream gage records and paleoflood deposits. The resulting maps predict spatially complex flood hazards that strongly reflect small-scale topography and are consistent with surficial geology. In
NASA Astrophysics Data System (ADS)
Fitzgerald, R. H.; Tsunematsu, K.; Kennedy, B. M.; Breard, E. C. P.; Lube, G.; Wilson, T. M.; Jolly, A. D.; Pawson, J.; Rosenberg, M. D.; Cronin, S. J.
2014-10-01
On 6 August, 2012, Upper Te Maari Crater, Tongariro volcano, New Zealand, erupted for the first time in over one hundred years. Multiple vents were activated during the hydrothermal eruption, ejecting blocks up to 2.3 km and impacting ~ 2.6 km of the Tongariro Alpine Crossing (TAC) hiking track. Ballistic impact craters were mapped to calibrate a 3D ballistic trajectory model for the eruption. This was further used to inform future ballistic hazard. Orthophoto mapping revealed 3587 impact craters with a mean diameter of 2.4 m. However, field mapping of accessible regions indicated an average of at least four times more observable impact craters and a smaller mean crater diameter of 1.2 m. By combining the orthophoto and ground-truthed impact frequency and size distribution data, we estimate that approximately 13,200 ballistic projectiles were generated during the eruption. The 3D ballistic trajectory model and a series of inverse models were used to constrain the eruption directions, angles and velocities. When combined with eruption observations and geophysical observations, the model indicates that the blocks were ejected in five variously directed eruption pulses, in total lasting 19 s. The model successfully reproduced the mapped impact distribution using a mean initial particle velocity of 200 m/s with an accompanying average gas flow velocity over a 400 m radius of 150 m/s. We apply the calibrated model to assess ballistic hazard from the August eruption along the TAC. By taking the field mapped spatial density of impacts and an assumption that an average ballistic impact will cause serious injury or death (casualty) over an 8 m2 area, we estimate that the probability of casualty ranges from 1% to 16% along the affected track (assuming an eruption during the time of exposure). Future ballistic hazard and probabilities of casualty along the TAC are also assessed through application of the calibrated model. We model a magnitude larger eruption and illustrate
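A ballistic trajectory model of the general kind described, gravity plus quadratic air drag integrated with a simple explicit Euler scheme, can be sketched as follows. All parameter values are illustrative; the paper's calibrated 3D model, including its gas-flow coupling, is far more detailed:

```python
import math

def ballistic_range(v0, angle_deg, mass=5.0, diameter=0.2, cd=1.0,
                    rho_air=1.0, dt=0.01):
    """Horizontal travel (m) of a block launched at speed v0 (m/s) at the
    given elevation angle, under gravity and quadratic air drag.
    Parameter values are hypothetical, not from the paper."""
    g = 9.81
    area = math.pi * (diameter / 2.0) ** 2
    k = 0.5 * rho_air * cd * area / mass        # drag factor (1/m)
    x, y = 0.0, 0.0
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    while y >= 0.0:                             # integrate until impact
        v = math.hypot(vx, vy)
        vx -= k * v * vx * dt                   # drag opposes motion
        vy -= (g + k * v * vy) * dt             # gravity plus drag
        x += vx * dt
        y += vy * dt
    return x

# Drag markedly shortens range relative to the vacuum estimate v0^2*sin(2a)/g.
print(ballistic_range(200.0, 45.0))
```

Inverse modeling, as used for Te Maari, runs many such forward simulations over candidate ejection velocities and angles and keeps the combinations that reproduce the mapped impact-crater distribution.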
Models of magma-aquifer interactions and their implications for hazard assessment
NASA Astrophysics Data System (ADS)
Strehlow, Karen; Gottsmann, Jo; Tumi Gudmundsson, Magnús
2014-05-01
Interactions of magmatic and hydrological systems are manifold, complex and poorly understood. On the one hand they carry a significant hazard potential, in the form of phreatic explosions or by causing "dry" effusive eruptions to turn into explosive phreatomagmatic events. On the other hand, they can equally serve to reduce volcanic risk, as the resulting geophysical signals can help to forecast eruptions. It is therefore necessary to put effort towards answering some outstanding questions regarding magma-aquifer interactions. Our research addresses these problems from two sides. Firstly, aquifers respond to magmatic activity and can also become agents of unrest themselves. Monitoring the hydrology can therefore provide a valuable window into subsurface processes in volcanic areas. Changes in temperature and strain conditions, seismic excitation, and the injection of magmatic fluids into hydrothermal systems are just a few of the proposed processes by which magmatic activity affects the local hydrology. Interpretations of unrest signals as groundwater responses have been described for many volcanoes and include changes in water table levels, changes in the temperature or composition of hydrothermal waters, and pore pressure-induced ground deformation. Volcano observatories can track these hydrological effects, for example with potential field investigations or the monitoring of wells. To fully utilise these indicators as monitoring and forecasting tools, however, it is necessary to improve our understanding of the underlying mechanisms. Our hydrogeophysical study uses finite element analysis to quantitatively test proposed mechanisms of aquifer excitation and the resulting geophysical signals. Secondly, volcanic activity is influenced by the presence of groundwater, which can drive phreatomagmatic and phreatic eruptions. We focus here on phreatic explosions at hydrothermal systems. At least two of these impulsive events occurred in 2013: in August at the Icelandic volcano
NASA Astrophysics Data System (ADS)
Stancanelli, L. M.; Peres, D. J.; Cavallaro, L.; Cancelliere, A.; Foti, E.
2014-12-01
During the last decades an increase in catastrophic debris flow events has been recorded across the Italian territory, mainly due to the growth of settlements and human activities in mountain areas. Considering the large extent of debris flow prone areas, non-structural protection strategies should preferably be implemented because of the economic constraints associated with structural mitigation measures. In such a framework, hazard assessment methodologies play a key role as useful tools for the development of emergency management policies. The aim of the present study is to apply an integrated debris flow hazard assessment methodology in which rainfall probabilistic analysis and physically-based landslide triggering and propagation models are combined. In particular, the probabilistic rainfall analysis provides the forcing scenarios for different return periods, which are then used as input to a model based on a combination of the USGS TRIGRS and FLO-2D codes. The TRIGRS model (Baum et al., 2008; 2010), developed for analyzing shallow landslide triggering, is based on an analytical solution of linearized forms of the Richards infiltration equation and an infinite-slope stability calculation to estimate the timing and locations of slope failures, while FLO-2D (O'Brien 1986) is a two-dimensional finite difference model that simulates debris flow propagation following a mono-phase approach, based on the empirical quadratic rheological relation developed by O'Brien and Julien (1985). Various aspects of the combination of the models are analyzed, with a particular focus on the possible variations of triggered amounts compatible with a given return period. The methodology is applied to the case study area of the Messina Province in Italy, which has recently been struck by severe events, such as that of 1 October 2009, which hit the village of Giampilieri causing 37 fatalities. Results are analyzed to assess the potential hazard that may affect the densely
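The infinite-slope stability calculation that TRIGRS couples to its infiltration solution can be sketched as a factor-of-safety computation. The function below is a generic textbook form with purely illustrative parameter values, not the TRIGRS implementation:

```python
import math

def infinite_slope_fs(c, phi_deg, gamma, z, beta_deg, psi, gamma_w=9.81):
    """Factor of safety for an infinite slope with pore-water pressure.

    c        effective cohesion [kPa]
    phi_deg  effective friction angle [deg]
    gamma    soil unit weight [kN/m^3]
    z        depth of the potential slip surface [m]
    beta_deg slope angle [deg]
    psi      pressure head at the slip surface [m]
    """
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    driving = gamma * z * math.sin(beta) * math.cos(beta)   # shear stress on slip plane
    normal = gamma * z * math.cos(beta) ** 2                # total normal stress
    pore = gamma_w * psi                                    # pore-water pressure
    return (c + (normal - pore) * math.tan(phi)) / driving

# A rainfall-driven rise in pressure head pushes the factor of safety below 1
print(infinite_slope_fs(c=5.0, phi_deg=30.0, gamma=19.0, z=2.0, beta_deg=35.0, psi=0.0))
print(infinite_slope_fs(c=5.0, phi_deg=30.0, gamma=19.0, z=2.0, beta_deg=35.0, psi=1.5))
```

Failure is predicted wherever and whenever the simulated infiltration raises the pressure head enough to drop the factor of safety below unity.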
[Invariants of the anthropometrical proportions].
Smolianinov, V V
2012-01-01
In this work, a general interpretation of the modulor as scales of segment proportions of anthropometrical modules (the extremities and the body) is given. The objectives of this study were: 1) to substantiate the idea of the growth modulor; 2) using modern empirical data, to demonstrate the validity of a principle of linear similarity for anthropometrical segments; 3) to specify the system of invariants for constitutional anthropometrics.
Metacarpal proportions in Australopithecus africanus.
Green, David J; Gordon, Adam D
2008-05-01
Recent work has shown that, despite being craniodentally more derived, Australopithecus africanus had more apelike limb-size proportions than A. afarensis. Here, we test whether the A. africanus hand, as judged by metacarpal shaft and articular proportions, was similarly apelike. More specifically, did A. africanus have a short and narrow first metacarpal (MC1) relative to the other metacarpals? Proportions of both MC breadth and length were considered: the geometric mean (GM) of articular and midshaft measurements of MC1 breadth was compared to those of MC2-4, and MC1 length was compared to MC3 length individually and also to the GM of MC2 and 3 lengths. To compare the extant hominoid sample with an incomplete A. africanus fossil record (11 attributed metacarpals), a resampling procedure imposed sampling constraints on the comparative groups that produced composite intrahand ratios. Resampled ratios in the extant sample are not significantly different from actual ratios based on associated elements, demonstrating the methodological appropriateness of this technique. Australopithecus africanus metacarpals do not differ significantly from the great apes in the comparison of breadth ratios but are significantly greater than chimpanzees and orangutans in both measures of relative length. Conversely, A. africanus has a significantly smaller breadth ratio than modern humans, but does not significantly differ from this group in either measure of relative length. We conclude that the first metacarpals of A. africanus are more apelike in relative breadth while also being more humanlike in relative length, a finding consistent with previous work on A. afarensis hand proportions. This configuration would have likely promoted a high degree of manipulative dexterity, but the relatively slender, apelike first metacarpal suggests that A. africanus did not place the same mechanical demands on the thumb as more recent, stone-tool-producing hominins.
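The resampling procedure described above can be sketched roughly as follows. The length pools, sample sizes and iteration count are hypothetical; this is only a schematic of the composite-ratio idea, not the authors' code:

```python
import random

def resampled_ratio(mc1_pool, mc3_pool, n_iter=10000, rng=None):
    """Composite MC1/MC3 length ratios from unassociated elements.

    Each iteration draws one MC1 and one MC3 at random, mimicking an
    incomplete fossil record in which elements are not from the same hand.
    """
    rng = rng or random.Random(0)
    return [rng.choice(mc1_pool) / rng.choice(mc3_pool) for _ in range(n_iter)]

# Hypothetical length pools [mm]; the values are illustrative only
human_mc1, human_mc3 = [45, 47, 44], [64, 66, 63]
chimp_mc1, chimp_mc3 = [38, 40, 39], [82, 85, 80]

human_ratios = resampled_ratio(human_mc1, human_mc3)
chimp_ratios = resampled_ratio(chimp_mc1, chimp_mc3)
print(sum(human_ratios) / len(human_ratios))  # humans: relatively long MC1
print(sum(chimp_ratios) / len(chimp_ratios))  # chimps: relatively short MC1
```

Comparing the fossil ratio to the resampled distributions of each extant group is what lets an 11-element, unassociated sample be placed relative to taxa known from associated hands.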
NASA Astrophysics Data System (ADS)
Taheri Andani, Masood; Elahinia, Mohammad
2014-01-01
In this work, a modified 3D model is presented to capture the multi-axial behavior of superelastic shape memory alloys (SMAs) under quasi-static isothermal or dynamic loading conditions. General experimentally-based equivalent stress and strain terms are introduced, and improved flow rules and transformation surfaces are presented. The 3D constitutive equations are derived for both isothermal and dynamic loading states. An extensive experimental study is conducted on NiTi thin-walled tubes to investigate the performance of the model. The proposed approach is shown to capture the SMA response better than the original model under tension-torsion loading conditions.
Antoun, T; Harris, D; Lay, T; Myers, S C; Pasyanos, M E; Richards, P; Rodgers, A J; Walter, W R; Zucca, J J
2008-02-11
The last ten years have brought rapid growth in the development and use of three-dimensional (3D) seismic models of earth structure at crustal, regional and global scales. In order to explore the potential for 3D seismic models to contribute to important societal applications, Lawrence Livermore National Laboratory (LLNL) hosted a 'Workshop on Multi-Resolution 3D Earth Models to Predict Key Observables in Seismic Monitoring and Related Fields' on June 6 and 7, 2007 in Berkeley, California. The workshop brought together academic, government and industry leaders in the research programs developing 3D seismic models and methods for the nuclear explosion monitoring and seismic ground motion hazard communities. The workshop was designed to assess the current state of work in 3D seismology and to discuss a path forward for determining if and how 3D earth models and techniques can be used to achieve measurable increases in our capabilities for monitoring underground nuclear explosions and characterizing seismic ground motion hazards. This paper highlights some of the presentations, issues, and discussions at the workshop and proposes a path by which to begin quantifying the potential contribution of progressively refined 3D seismic models in critical applied arenas.
Kang, Sangwook; Cai, Jianwen
2010-01-01
In stratified case-cohort designs, the case-cohort sample is selected via stratified random sampling based on covariate information available for the entire cohort. In this paper, we extend the work of Kang & Cai (2009) to a generalized stratified case-cohort study design for failure time data with multiple disease outcomes. Under this design, we develop weighted estimating procedures for the model parameters in marginal multiplicative intensity models and for the cumulative baseline hazard function. The asymptotic properties of the estimators are studied using martingales, modern empirical process theory, and results for finite population sampling. PMID:22442642
2000-01-01
Landslide hazards occur in many places around the world and include fast-moving debris flows, slow-moving landslides, and a variety of flows and slides initiating from volcanoes. Each year, these hazards cost billions of dollars and cause numerous fatalities and injuries. Awareness and education about these hazards is a first step toward reducing their damaging effects. The U.S. Geological Survey conducts research and distributes information about geologic hazards. This Fact Sheet is published in English and Spanish and can be reproduced in any form for further distribution.
NASA Astrophysics Data System (ADS)
Beauval, Céline; Scotti, Oona; Bonilla, Fabian
2006-05-01
Seismic hazard estimates are compared using two approaches based on different seismicity models: one that models earthquake recurrence with the truncated Gutenberg-Richter law, and one that smoothes the epicentre locations of past events according to the fractal distribution of earthquakes in space (Woo 1996). The first method requires the definition of homogeneous source zones and the determination of maximum possible magnitudes, whereas the second requires the definition of a smoothing function. Our results show that the two approaches lead to similar hazard estimates in low-seismicity regions. In regions of increased seismic activity, on the other hand, the smoothing approach yields systematically lower estimates than the zoning method. The epicentre-smoothing approach can thus be considered a lower-bound estimator for seismic hazard and can help in decision making in moderate-seismicity regions, where source zone definition and the estimation of maximum possible magnitudes can lead to a wide variety of estimates due to lack of knowledge. The two approaches lead, however, to very different earthquake scenarios. Disaggregation studies at a representative number of sites show that, while the distributions of contributions with source-site distance are comparable between the two approaches, the distributions of contributions with magnitude differ, reflecting the very different seismicity models used. The epicentre-smoothing method leads to scenarios dominated by intermediate-magnitude events (5 ≤ M ≤ 5.5), while the zoning method leads to scenarios whose magnitudes increase with the return period, from the minimum to the maximum magnitudes considered. These trends demonstrate that the seismicity model used plays a fundamental role in determining the controlling scenarios, and finding ways to discriminate between the most appropriate models remains an important issue.
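For reference, the truncated Gutenberg-Richter recurrence used by the zoning approach can be written down in a few lines; the parameter values here are illustrative, not taken from the study:

```python
def truncated_gr_rate(m, a, b, m_min, m_max):
    """Annual rate of events with magnitude >= m under a truncated
    Gutenberg-Richter law, log10 N(>=m) = a - b*m, truncated at m_max."""
    if m >= m_max:
        return 0.0
    n_min = 10 ** (a - b * m_min)  # total rate of events >= m_min
    scale = 10 ** (-b * (m - m_min)) - 10 ** (-b * (m_max - m_min))
    scale /= 1 - 10 ** (-b * (m_max - m_min))
    return n_min * scale

# Rates decrease exponentially with magnitude and vanish at m_max
print(truncated_gr_rate(5.0, a=3.0, b=1.0, m_min=4.0, m_max=7.0))
print(truncated_gr_rate(6.5, a=3.0, b=1.0, m_min=4.0, m_max=7.0))
print(truncated_gr_rate(7.0, a=3.0, b=1.0, m_min=4.0, m_max=7.0))
```

The sensitivity of hazard to the choice of m_max in each source zone is precisely what makes the smoothing approach attractive as an independent lower-bound check.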
NASA Astrophysics Data System (ADS)
Mohammed, F.; Li, S.; Jalali Farahani, R.; Williams, C. R.; Astill, S.; Wilson, P. S.; B, S.; Lee, R.
2014-12-01
The past decade has been witness to two mega-tsunami events, the 2004 Indian Ocean tsunami and the 2011 Japan tsunami, and multiple major tsunami events: 2006 Java, Kuril Islands, 2007 Solomon Islands, 2009 Samoa and 2010 Chile, to name a few. These events generated both local and far-field tsunami inundations, with runup ranging from a few meters to around 40 m in the coastal impact regions. With a majority of the coastal population at risk, there is a need for a sophisticated approach to catastrophe risk estimation and quick mitigation response, together with tools and information to aid advanced tsunami hazard prediction. There is an increasing need for insurers, reinsurers and Federal hazard management agencies to quantify coastal inundations and the vulnerability of coastal habitat to tsunami inundation. A novel tool is developed to model local and far-field tsunami generation, propagation and inundation to estimate tsunami hazards. The tool combines the NOAA MOST propagation database with an efficient and fast GPU (Graphics Processing Unit)-based non-linear shallow water wave solver. The tsunamigenic seismic sources are mapped onto the NOAA unit source distribution along subduction zones in the ocean basin. Slip models are defined for tsunamigenic seismic sources through a slip distribution on the unit sources while maintaining the limits of fault areas. A GPU-based finite volume solver is used to simulate non-linear shallow water wave propagation, inundation and runup. Deformation on the unit sources provides initial conditions for modeling local impacts, while the wave history from the propagation database provides boundary conditions for far-field impacts. The modeling suite shows good agreement for basin-wide tsunami propagation, validating both local and far-field tsunami inundations.
Systematic method for determining a hazardous air pollutant release scenario and model input
Eltgroth, M.W.; Touma, J.S.; Yeh, D.
1995-12-31
Title III of the 1990 Clean Air Act Amendments lists many chemicals as hazardous air pollutants and requires establishing regulations to prevent accidental releases and to minimize the consequences of any such releases. A method is described which aids in determining the release type and characteristics of contained chemicals. The method refers to existing EPA documents, each of which partially performs the necessary calculations. An example calculation is included to determine the phase state prior to and at release.
NASA Astrophysics Data System (ADS)
Gusyev, M. A.; Kwak, Y.; Khairul, M. I.; Arifuzzaman, M. B.; Magome, J.; Sawano, H.; Takeuchi, K.
2015-06-01
This study introduces the flood hazard assessment part of the global flood risk assessment (Part 2), conducted with a distributed hydrological Block-wise TOP (BTOP) model and a GIS-based Flood Inundation Depth (FID) model. In this study, the 20 km grid BTOP model was developed with globally available data and applied to the Ganges, Brahmaputra and Meghna (GBM) river basin. The BTOP model was calibrated with observed river discharges in Bangladesh and applied for climate change impact assessment to produce flood discharges at each BTOP cell under present and future climates. For Bangladesh, cumulative flood inundation maps were produced using the FID model with the BTOP-simulated flood discharges, allowing us to consider levee effectiveness in reducing flood inundation. Under climate change, the flood hazard increased in both flood discharge and inundation area for the 50- and 100-year floods. From these preliminary results, the proposed methodology can partly overcome the limitation of data unavailability and produces flood maps that can be used for the nationwide flood risk assessment presented in Part 2 of this study.
NASA Astrophysics Data System (ADS)
Poisel, R.; Preh, A.; Hofmann, R.; Schiffer, M.; Sausgruber, Th.
2009-04-01
A rock slide onto the clayey-silty-sandy-pebbly masses in the Gschliefgraben (Upper Austria, Lake Traunsee), which occurred in 2006, together with the humid autumn of 2007, triggered an earth flow comprising a volume of up to 5 million m³ and moving with a maximum displacement velocity of 5 m/day during the winter of 2007-2008. The possible damage was estimated at up to €60 million, due to the possible destruction of houses and of a road to a settlement with heavy tourism. Exploratory drillings revealed that the moving mass consists of an alternating sequence of thicker, less permeable clayey-silty layers and thinner, more permeable silty-sandy-pebbly layers. The movement front ran ahead in the creek bed. It was therefore assumed that water played an important role and that the earth flow moved due to water soaking into the ground from the area of the rock slide downslope. Inclinometer measurements showed that the uppermost, less permeable layer was sliding on a thin, more permeable layer. The movement process was analysed with numerical models (FLAC) and conventional calculations in order to assess the hazard. The coupled flow and mechanical models showed that sections of the less permeable layer soaked with water were sliding on the thin, more permeable layer due to excessive water discharge from the more permeable layer. These sections were thrust over the downslope-lying, less soaked areas, which therefore had higher strength. The material thrust over these areas, together with the moving front of pore water pressures, caused the downslope material to fail and to be thrust over the material lying some 50 m further downslope. Thus a cyclic process was created, without any indication of a sudden sliding of the complete less permeable layer. Nevertheless, the inhabitants of 15 houses had to be evacuated for safety reasons. They could return to their homes after displacement velocities had decreased. Displacement monitoring by GPS showed that
NASA Astrophysics Data System (ADS)
Roulleau, Louise; Bétard, François; Carlier, Benoît; Lissak, Candide; Fort, Monique
2016-04-01
Landslides are common natural hazards in the Southern French Alps, where they may affect human lives and cause severe damage to infrastructure. As part of the SAMCO research project dedicated to risk evaluation in mountain areas, this study focuses on the Guil river catchment (317 km2), Queyras, to assess landslide hazard, poorly studied there until now. In that area, landslides are mainly occasional, low-amplitude phenomena with limited direct impacts compared to other hazards such as floods or snow avalanches. However, when interacting with floods during extreme rainfall events, landslides may have indirect consequences of greater importance because of the strong hillslope-channel connectivity along the Guil River and its tributaries (i.e. positive feedbacks). This specific morphodynamic functioning reinforces the need for a better understanding of landslide hazards and their spatial distribution at the catchment scale, to protect the local population from disasters of multi-hazard origin. The aim of this study is to produce a landslide susceptibility map at 1:50,000 scale as a first step towards a global estimation of landslide hazard and risk. The three main methodologies used for assessing landslide susceptibility are qualitative (i.e. expert opinion), deterministic (i.e. physics-based models) and statistical (i.e. probabilistic models). Owing to the rapid development of geographical information systems (GIS) during the last two decades, statistical methods are today widely used because they offer greater objectivity and reproducibility at large scales. Among them, multivariate analyses are considered the most robust techniques, especially the logistic regression method commonly used in landslide susceptibility mapping. However, this method, like others, is strongly dependent on the accuracy of the input data if significant errors in the final results are to be avoided. In particular, a complete and accurate landslide inventory is required before the modelling
Photodetectors for Scintillator Proportionality Measurement
Moses, William W.; Choong, Woon-Seng; Hull, Giulia; Payne, Steve; Cherepy, Nerine; Valentine, J.D.
2010-10-18
We evaluate photodetectors for use in a Compton coincidence apparatus designed to measure scintillator proportionality. Many requirements are placed on the photodetector in these systems, including active area, linearity, and the ability to accurately measure low light levels (which implies high quantum efficiency and high signal-to-noise ratio). Through a combination of measurement and Monte Carlo simulation, we evaluate a number of potential photodetectors, especially photomultiplier tubes and hybrid photodetectors. Of these, we find that the most promising devices available are photomultiplier tubes with high (~50%) quantum efficiency, although hybrid photodetectors with high quantum efficiency would be preferable.
NASA Astrophysics Data System (ADS)
Li, L.; Switzer, A.; Chan, C. H.; Wang, Y.; Weiss, R.; Qiu, Q.
2015-12-01
It has long been recognized that rupture complexity, typically in the form of a heterogeneous slip distribution, has a significant effect on the tsunami wave field. However, the effect of heterogeneous slip distributions is not commonly considered in probabilistic tsunami hazard assessment (PTHA), primarily because of its computational expense. To investigate this effect, we incorporate a stochastic source model into a Monte Carlo-type method for PTHA. Using a hybrid kinematic k-squared source model, we generate a broad range of slip distribution patterns for large numbers of synthetic earthquake events and assess tsunami hazard, as an example, for the South China Sea (SCS). Our results suggest that, for a relatively small and confined region like the SCS, the commonly used approach based on uniform-slip fault models could significantly underestimate tsunami hazard, especially for longer return periods. For a 500-year return period, the expected wave height along the coasts of west Luzon, Taiwan, southeast China and east Vietnam is generally underestimated by 20-50%. The underestimation is more pronounced (exceeding 50% at some locations) for the expected tsunami wave height with a 1000-year return period. Also of note, the probability of experiencing a 1 m tsunami wave in the next 100 years is underestimated by more than 40% at many coastal sites in southeast China and east Vietnam. As the results of PTHA commonly serve as the foundation for further risk assessments, this case study emphasizes how crucial it is to take the effect of rupture complexity into account.
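The Monte Carlo logic behind such a PTHA can be sketched in a few lines. The catalog, rates and wave heights below are entirely synthetic stand-ins for the authors' k-squared source simulations; only the Poissonian exceedance calculation is standard:

```python
import math
import random

def exceedance_probability(catalog, threshold, t_years):
    """Poisson probability of at least one exceedance of `threshold` in
    `t_years`, from a catalog of (annual_rate, wave_height_m) events."""
    lam = sum(rate for rate, h in catalog if h >= threshold)
    return 1.0 - math.exp(-lam * t_years)

rng = random.Random(1)
# Uniform-slip catalog: every event produces the same 2 m coastal wave.
uniform = [(1e-3, 2.0) for _ in range(100)]
# Heterogeneous-slip catalog: same mean height, but a spread of outcomes.
hetero = [(1e-3, max(0.0, rng.gauss(2.0, 1.0))) for _ in range(100)]

p_uniform = exceedance_probability(uniform, threshold=3.0, t_years=100)
p_hetero = exceedance_probability(hetero, threshold=3.0, t_years=100)
print(p_uniform, p_hetero)  # the uniform-slip catalog misses the >3 m tail
```

The toy example reproduces the qualitative finding: spreading the slip (and hence the wave heights) fattens the tail, so uniform-slip catalogs underestimate hazard at rare, high thresholds.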
NASA Astrophysics Data System (ADS)
Zahran, Hani M.; Sokolov, Vladimir; Roobol, M. John; Stewart, Ian C. F.; El-Hadidy Youssef, Salah; El-Hadidy, Mahmoud
2016-07-01
A new seismic source model has been developed for the western part of the Arabian Peninsula, which has experienced considerable earthquake activity in the historical past and in recent times. The data used for the model include an up-to-date seismic catalog, results of recent studies of Cenozoic faulting in the area, aeromagnetic anomaly and gravity maps, geological maps, and miscellaneous information on volcanic activity. The model includes 18 zones ranging along the Red Sea and the Arabian Peninsula from the Gulf of Aqaba and the Dead Sea in the north to the Gulf of Aden in the south. The seismic source model developed in this study may be considered as one of the basic branches in a logic tree approach for seismic hazard assessment in Saudi Arabia and adjacent territories.
Using the CABLES model to assess and minimize risk in research: control group hazards.
Koocher, G P
2002-01-01
CABLES is both an acronym and metaphor for conceptualizing research participation risk by considering 6 distinct domains in which risks of harm to research participants may exist: cognitive, affective, biological, legal, economic, and social/cultural. These domains are described and illustrated, along with suggestions for minimizing or eliminating the potential hazards to human participants in biomedical and behavioral science research. Adoption of a thoughtful ethical analysis addressing all 6 CABLES strands in designing research provides a strong protective step toward safeguarding and promoting the well-being of study participants.
Friedel, M.J.
2011-01-01
Few studies attempt to model the range of possible post-fire hydrologic and geomorphic hazards because of the sparseness of data and the coupled, nonlinear, spatial, and temporal relationships among landscape variables. In this study, a type of unsupervised artificial neural network, called a self-organizing map (SOM), is trained using data from 540 burned basins in the western United States. The sparsely populated data set includes variables from independent numerical landscape categories (climate, land surface form, geologic texture, and post-fire condition), independent landscape classes (bedrock geology and state), and dependent initiation processes (runoff, landslide, and runoff-landslide combination) and responses (debris flows, floods, and no events). Pattern analysis of the SOM-based component planes is used to identify and interpret relations among the variables. Application of the Davies-Bouldin criterion following k-means clustering of the SOM neurons identified eight conceptual regional models for focusing future research and empirical model development. A split-sample validation on 60 independent basins (not included in the training) indicates that simultaneous predictions of initiation process and response type are at least 78% accurate. As climate shifts from wet to dry conditions, forecasts across the burned landscape reveal a decreasing trend in the total number of debris flow, flood, and runoff events, with considerable variability among individual basins. These findings suggest the SOM may be useful in forecasting real-time post-fire hazards, long-term post-recovery processes, and the effects of climate change scenarios.
Plesko, Catherine S; Clement, R Ryan; Weaver, Robert P; Bradley, Paul A; Huebner, Walter F
2009-01-01
The mitigation of impact hazards resulting from Earth-approaching asteroids and comets has received much attention in the popular press. However, many questions remain about the near-term and long-term, feasibility and appropriate application of all proposed methods. Recent and ongoing ground- and space-based observations of small solar-system body composition and dynamics have revolutionized our understanding of these bodies (e.g., Ryan (2000), Fujiwara et al. (2006), and Jedicke et al. (2006)). Ongoing increases in computing power and algorithm sophistication make it possible to calculate the response of these inhomogeneous objects to proposed mitigation techniques. Here we present the first phase of a comprehensive hazard mitigation planning effort undertaken by Southwest Research Institute and Los Alamos National Laboratory. We begin by reviewing the parameter space of the object's physical and chemical composition and trajectory. We then use the radiation hydrocode RAGE (Gittings et al. 2008), Monte Carlo N-Particle (MCNP) radiation transport (see Clement et al., this conference), and N-body dynamics codes to explore the effects these variations in object properties have on the coupling of energy into the object from a variety of mitigation techniques, including deflection and disruption by nuclear and conventional munitions, and a kinetic impactor.
NASA Astrophysics Data System (ADS)
Hammond, W. C.; Kreemer, C.; Blewitt, G.
2007-05-01
In the United States, seismic hazard is evaluated officially by the U.S. Geological Survey and published in the National Seismic Hazard Maps (NSHM), which depict peak ground shaking at a specific level of likelihood. In the western Great Basin, the 2002 NSHM is based on a combination of seismic, geologic and geodetic data. However, a discrepancy between the deformation rate inferred from geodetic data (e.g. GPS) and geologic data (e.g. slip rates from fault studies) led to the introduction of an ad hoc zone of crustal shear strain in the western Basin and Range. Only then was the shaking risk portrayed in the NSHM consistent with the relative geodetic velocity of the Sierra Nevada microplate with respect to the central Great Basin. Since the creation of the 2002 NSHM there has been a rapid increase in the quantity, quality and spatial coverage of GPS data in the western Great Basin, providing a vastly improved constraint on the pattern of crustal deformation. Geodesy is thus poised to make a substantial contribution to the spatial localization of seismic hazard in support of the next-generation NSHM. In the Walker Lane, ~10 mm/yr of relative motion is accommodated as shear and extension along a ~200 km wide and ~1000 km long zone of intracontinental deformation associated with the Pacific/North American plate boundary. We integrate GPS velocities from sites in the continuous BARGEN, PBO and BARD networks and the semi-continuous MAGNET network, plus campaign results from numerous published studies, to constrain block models of crustal deformation. In so doing we estimate slip rates on block-bounding faults that have regional kinematic self-consistency and can easily be incorporated into the USGS algorithms that compute seismic hazard estimates. Because of the large number and high density of candidate faults and the length of this zone, we divide the region into three parts covering the Northern, Central and Southern Walker Lane. We have completed
NASA Astrophysics Data System (ADS)
Enzenhoefer, R.; Binning, P. J.; Nowak, W.
2015-09-01
Risk is often defined as the product of probability, vulnerability and value. Drinking water supply from groundwater abstraction is often at risk due to multiple hazardous land-use activities in the well catchment. Each hazard might or might not introduce contaminants into the subsurface at any point in time, which then affects the pumped water quality upon transport through the aquifer. In such situations, estimating the overall risk is not trivial, and three key questions emerge: (1) How should the impacts from different contaminants and spill locations be aggregated to an overall, cumulative impact on the value at risk? (2) How should the stochastic nature of spill events be accounted for when converting the aggregated impact to a risk estimate? (3) How will the overall risk and subsequent decision making depend on stakeholder objectives, i.e. the values at risk, risk attitudes and risk metrics, which can vary between stakeholders? In this study, we provide a STakeholder-Objective Risk Model (STORM) for assessing the total aggregated risk. Our concept is a quantitative, probabilistic and modular framework for simulation-based risk estimation. It rests on the source-pathway-receptor concept and mass-discharge-based aggregation of stochastically occurring spill events, accounts for uncertainties in the involved flow and transport models through Monte Carlo simulation, and can address different stakeholder objectives. We illustrate the application of STORM in a numerical test case inspired by a German drinking water catchment. As one may expect, the results depend strongly on the chosen stakeholder objectives, but they are equally sensitive to the different approaches for aggregating risk across hazards, contaminant types, and time.
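The probability × vulnerability × value definition from the opening sentence can be illustrated with a toy aggregation over independent hazards. The numbers and the independence assumption are hypothetical; STORM's actual mass-discharge-based, Monte Carlo aggregation is far richer:

```python
def aggregated_risk(hazards, value_at_risk):
    """Expected annual loss summed over independent hazards.

    hazards: list of (annual_spill_probability, impact_fraction) pairs,
    where impact_fraction is the share of the value lost if the spill
    occurs. This is a sketch of the idea, not the STORM implementation.
    """
    return sum(p * v * value_at_risk for p, v in hazards)

# Three hypothetical land-use hazards in a well catchment
hazards = [(0.02, 0.5), (0.10, 0.1), (0.01, 1.0)]
print(aggregated_risk(hazards, value_at_risk=1_000_000))  # expected loss per year
```

Even this toy shows why stakeholder objectives matter: a risk-averse stakeholder would weight the rare, total-loss hazard differently than this expectation-based metric does.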
Proportional Reasoning: A Review of the Literature.
ERIC Educational Resources Information Center
Tourniaire, Francoise; Pulos, Steven
1985-01-01
The literature on proportional reasoning is reviewed. After methodology is discussed, strategies used to solve proportion problems, variables that influence performance, and training studies are each considered. (MNS)
Baruffi, F; Cisotto, A; Cimolino, A; Ferri, M; Monego, M; Norbiato, D; Cappelletto, M; Bisaglia, M; Pretner, A; Galli, A; Scarinci, A; Marsala, V; Panelli, C; Gualdi, S; Bucchignani, E; Torresan, S; Pasini, S; Critto, A; Marcomini, A
2012-12-01
Climate change impacts on water resources, particularly groundwater, are a highly debated topic worldwide, attracting international attention from both researchers and policy makers owing to their relevant links with European water policy directives (e.g. 2000/60/EC and 2006/118/EC) and related environmental objectives. Understanding the long-term impacts of climate variability and change is therefore a key challenge for designing effective protection measures and implementing sustainable management of water resources. This paper presents the modeling approach adopted within the Life+ project TRUST (Tool for Regional-scale assessment of groUndwater Storage improvement in adaptation to climaTe change) to provide climate change hazard scenarios for the shallow groundwater of the upper Veneto and Friuli Plain, Northern Italy. Given the aim of evaluating potential impacts on water quantity and quality (e.g. groundwater level variation, decrease of water availability for irrigation, variations in nitrate infiltration processes), the modeling approach integrated an ensemble of climate, hydrologic and hydrogeologic models running from the global to the regional scale. Global and regional climate models and downscaling techniques were used to perform climate simulations for the reference period 1961-1990 and the projection period 2010-2100. The simulation of the recent climate used observed radiative forcings, whereas the projections prescribed radiative forcings according to the IPCC A1B emission scenario. The climate simulations and downscaling then provided the precipitation, temperature and evapotranspiration fields used for the impact analysis. Based on the downscaled climate projections, three reference scenarios for the period 2071-2100 (the driest, the wettest and a mild year) were selected and used to run a regional geomorphoclimatic and hydrogeological model. The final output of the model ensemble produced
PHAZE. Parametric Hazard Function Estimation
Atwood, C.L.
1990-09-01
PHAZE performs statistical inference calculations on a hazard function (also called a failure rate or intensity function) based on reported failure times of components that are repaired and restored to service. Three parametric models are allowed: the exponential, linear, and Weibull hazard models. The inference includes estimation (maximum likelihood estimators and confidence regions) of the parameters and of the hazard function itself, testing of hypotheses such as increasing failure rate, and checking of the model assumptions.
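The estimation PHAZE performs has a simple closed form in the Weibull (power-law) case. The sketch below, with invented failure times, shows the maximum likelihood estimators for a power-law hazard under time truncation; `powerlaw_mle` and the data are illustrative assumptions, not part of the PHAZE code.

```python
import math

def powerlaw_mle(failure_times, t_end):
    """Closed-form MLEs for a power-law (Weibull) hazard
    lambda(t) = a * b * t**(b - 1), fit to the failure times of a
    repairable component observed over (0, t_end] (time truncation)."""
    n = len(failure_times)
    b_hat = n / sum(math.log(t_end / t) for t in failure_times)
    a_hat = n / t_end ** b_hat
    return a_hat, b_hat

def hazard(a, b, t):
    """Estimated failure intensity at time t."""
    return a * b * t ** (b - 1)

# Hypothetical failure log (hours); not taken from the PHAZE report.
times = [10.0, 25.0, 60.0, 100.0]
a_hat, b_hat = powerlaw_mle(times, t_end=100.0)
```

A shape estimate below 1 indicates a decreasing failure rate (reliability growth); above 1, an increasing rate, which is one of the hypotheses PHAZE tests.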
CalTOX, a multimedia total exposure model for hazardous-waste sites; Part 1, Executive summary
McKone, T.E.
1993-06-01
CalTOX has been developed as a spreadsheet model to assist in health-risk assessments that address contaminated soils and the contamination of adjacent air, surface water, sediments, and ground water. The modeling effort includes a multimedia transport and transformation model, exposure scenario models, and efforts to quantify and reduce uncertainty in multimedia, multiple-pathway exposure models. This report provides an overview of the CalTOX model components, lists the objectives of the model, describes the philosophy under which the model was developed, identifies the chemical classes for which the model can be used, and describes critical sensitivities and uncertainties. The multimedia transport and transformation model is a dynamic model that can be used to assess time-varying concentrations of contaminants introduced initially to soil layers or for contaminants released continuously to air or water. This model assists the user in examining how chemical and landscape properties impact both the ultimate route and quantity of human contact. Multimedia, multiple-pathway exposure models are used in the CalTOX model to estimate average daily potential doses within a human population in the vicinity of a hazardous substances release site. The exposure models encompass twenty-three exposure pathways. The exposure assessment process consists of relating contaminant concentrations in the multimedia model compartments to contaminant concentrations in the media with which a human population has contact (personal air, tap water, foods, household dusts, soils, etc.). The average daily dose is the product of the exposure concentrations in these contact media and an intake or uptake factor that relates the concentrations to the distributions of potential dose within the population.
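The dose calculation described above, exposure concentration in each contact medium times an intake factor, summed over pathways, can be sketched as follows. The pathway names and numbers are invented placeholders, not CalTOX inputs (the real model spans twenty-three pathways with site-specific values).

```python
# Hypothetical contact-medium concentrations and intake factors.
pathways = {
    #                concentration        intake factor
    "tap_water":    {"conc": 2.0e-3, "intake": 3.0e-2},   # mg/L  x L/(kg d)
    "personal_air": {"conc": 1.0e-4, "intake": 2.7e-1},   # mg/m3 x m3/(kg d)
    "soil":         {"conc": 5.0,    "intake": 1.4e-6},   # mg/kg x kg/(kg d)
}

# Average daily potential dose: sum over pathways of concentration x intake.
add_dose = sum(p["conc"] * p["intake"] for p in pathways.values())  # mg/(kg d)
```

In the full model each input carries a mean and coefficient of variation so the same sum can be propagated through a Monte Carlo uncertainty analysis.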
Flood hazard and risk analysis in the southwest region of Bangladesh
NASA Astrophysics Data System (ADS)
Tingsanchali, Tawatchai; Fazlul Karim, Mohammed
2005-06-01
Flood hazard and risk assessment was conducted to identify the priority areas in the southwest region of Bangladesh for flood mitigation. Simulation of flood flow through the Gorai and Arial Khan river system and its floodplains was done by using a hydrodynamic model. After model calibration and verification, the model was used to simulate the flood flow of 100-year return period for a duration of four months. The maximum flooding depths at different locations in the rivers and floodplains were determined. The process of determining long flooding durations at every grid point in the hydrodynamic model is laborious and time-consuming. Therefore the flood durations were determined by using satellite images of the observed flood in 1988, which has a return period close to 100 years. Flood hazard assessment was done considering flooding depth and duration. By dividing the study area into smaller land units for hazard assessment, the hazard index and the hazard factor for each land unit for depth and duration of flooding were determined. From the hazard factors of the land units, a flood hazard map, which indicates the locations of different categories of hazard zones, was developed. It was found that 54% of the study area was in the medium hazard zone, 26% in the higher hazard zone and 20% in the lower hazard zone. Due to the lack of sufficient flood damage data, flood damage vulnerability is simply considered proportional to population density. The flood risk factor of each land unit was determined as the product of the flood hazard factor and the vulnerability factor. Knowing the flood risk factors for the land units, a flood risk map was developed based on the risk factors. These maps are very useful for the inhabitants and floodplain management authorities to minimize flood damage and loss of human lives.
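The hazard and risk aggregation described above can be sketched as follows; the weights, indices, and land units are hypothetical, not the study's calibrated values.

```python
def hazard_factor(depth_index, duration_index, w_depth=0.5, w_duration=0.5):
    """Combine depth and duration hazard indices for one land unit.
    The equal weights are illustrative, not the study's calibration."""
    return w_depth * depth_index + w_duration * duration_index

def risk_factor(hazard, pop_density, max_pop_density):
    """Risk = hazard x vulnerability, with vulnerability proxied by
    population density (as in the study, for lack of damage data)."""
    vulnerability = pop_density / max_pop_density
    return hazard * vulnerability

# Hypothetical land units: (depth index, duration index, people/km2).
units = {"A": (0.8, 0.6, 1200), "B": (0.3, 0.9, 400), "C": (0.5, 0.2, 2000)}
max_density = max(dens for _, _, dens in units.values())
risks = {name: risk_factor(hazard_factor(d, u), dens, max_density)
         for name, (d, u, dens) in units.items()}
```

Binning the resulting risk factors (e.g. into lower, medium, and higher zones) yields the kind of risk map the study produced.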
Advances in Landslide Hazard Forecasting: Evaluation of Global and Regional Modeling Approach
NASA Technical Reports Server (NTRS)
Kirschbaum, Dalia B.; Adler, Robert; Hong, Yang; Kumar, Sujay; Peters-Lidard, Christa; Lerner-Lam, Arthur
2010-01-01
A prototype global satellite-based landslide hazard algorithm has been developed to identify areas that exhibit a high potential for landslide activity by combining a calculation of landslide susceptibility with satellite-derived rainfall estimates. A recent evaluation of this algorithm framework found that while this tool represents an important first step in larger-scale landslide forecasting efforts, it requires several modifications before it can be fully realized as an operational tool. The evaluation finds that landslide forecasting may be more feasible at a regional scale. This study draws upon a prior work's recommendations to develop a new approach for considering landslide susceptibility and forecasting at the regional scale. This case study uses a database of landslides triggered by Hurricane Mitch in 1998 over four countries in Central America: Guatemala, Honduras, El Salvador and Nicaragua. A regional susceptibility map is calculated from satellite and surface datasets using a statistical methodology. The susceptibility map is tested with a regional rainfall intensity-duration triggering relationship and results are compared to the global algorithm framework for the Hurricane Mitch event. The statistical results suggest that this regional investigation provides one plausible way to approach some of the data and resolution issues identified in the global assessment, providing more realistic landslide forecasts for this case study. Evaluation of landslide hazards for this extreme event helps to identify several potential improvements of the algorithm framework, but also highlights several remaining challenges for the algorithm assessment, transferability and performance accuracy. Evaluation challenges include representation errors from comparing susceptibility maps of different spatial resolutions, biases in event-based landslide inventory data, and limited nonlandslide event data for more comprehensive evaluation. Additional factors that may improve
McKone, T.E.
1994-01-01
Risk assessment is a quantitative evaluation of information on potential health hazards of environmental contaminants and the extent of human exposure to these contaminants. As applied to toxic chemical emissions to air, risk assessment involves four interrelated steps. These are (1) determination of source concentrations or emission characteristics, (2) exposure assessment, (3) toxicity assessment, and (4) risk characterization. These steps can be carried out with assistance from analytical models in order to estimate the potential risk associated with existing and future releases. CAirTOX has been developed as a spreadsheet model to assist in making these types of calculations. CAirTOX follows an approach that has been incorporated into the CalTOX model, which was developed for the California Department of Toxic Substances Control. With CAirTOX, we can address how contaminants released to an air basin can lead to contamination of soil, food, surface water, and sediments. The modeling effort includes a multimedia transport and transformation model, exposure scenario models, and efforts to quantify uncertainty in multimedia, multiple-pathway exposure assessments. The capacity to explicitly address uncertainty has been incorporated into the model in two ways. First, the spreadsheet form of the model makes it compatible with Monte-Carlo add-on programs that are available for uncertainty analysis. Second, all model inputs are specified in terms of an arithmetic mean and coefficient of variation so that uncertainty analyses can be carried out.
Implementation of NGA-West2 ground motion models in the 2014 U.S. National Seismic Hazard Maps
Rezaeian, Sanaz; Petersen, Mark D.; Moschetti, Morgan P.; Powers, Peter; Harmsen, Stephen C.; Frankel, Arthur D.
2014-01-01
The U.S. National Seismic Hazard Maps (NSHMs) have been an important component of seismic design regulations in the United States for the past several decades. These maps present earthquake ground shaking intensities at specified probabilities of being exceeded over a 50-year time period. The previous version of the NSHMs was developed in 2008; during 2012 and 2013, scientists at the U.S. Geological Survey have been updating the maps based on their assessment of the “best available science,” resulting in the 2014 NSHMs. The update includes modifications to the seismic source models and the ground motion models (GMMs) for sites across the conterminous United States. This paper focuses on updates in the Western United States (WUS) due to the use of new GMMs for shallow crustal earthquakes in active tectonic regions developed by the Next Generation Attenuation (NGA-West2) project. Individual GMMs, their weighted combination, and their impact on the hazard maps relative to 2008 are discussed. In general, the combined effects of lower medians and increased standard deviations in the new GMMs have caused only small changes, within 5–20%, in the probabilistic ground motions for most sites across the WUS compared to the 2008 NSHMs.
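The map probabilities quoted above convert to mean return periods under the usual Poisson (time-independent) occurrence assumption; for example, 10% probability of exceedance in 50 years corresponds to a return period of about 475 years. A minimal sketch of that conversion:

```python
import math

def return_period(p_exceed, horizon_years):
    """Mean return period implied by an exceedance probability over a
    time horizon, assuming Poisson (time-independent) occurrences:
    P = 1 - exp(-t/T), so T = -t / ln(1 - P)."""
    return -horizon_years / math.log(1.0 - p_exceed)

def exceedance_prob(return_period_years, horizon_years):
    """Inverse relation: P = 1 - exp(-t/T)."""
    return 1.0 - math.exp(-horizon_years / return_period_years)
```

The same relation gives roughly a 2475-year return period for the 2%-in-50-years ground motions also used in design maps.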
NASA Astrophysics Data System (ADS)
Hébert, H.; Schindelé, F.; Heinrich, P.; Piatanesi, A.; Okal, E. A.
In French Polynesia, the Marquesas Islands are particularly prone to amplification of tsunamis generated at the Pacific Rim, due to relatively mild submarine slopes and to large open bays not protected by any coral reef. These islands are also threatened by local tsunamis, as shown by the recent 1999 event on Fatu Hiva. On September 13, 1999, Omoa Bay was struck by 2 to 5 m high water waves: several buildings, among them the school, were flooded and destroyed but no lives were lost. Observations gathered during a post-event survey revealed the recent collapse into the sea of a 300x300 m, at least 20-m thick, cliff located 5 km southeast of Omoa. This cliff failure most certainly triggered the tsunami waves since the cliff was reported intact 45 min earlier. We simulate the tsunami generation due to a subaerial landslide, using a finite-difference model assimilating the landslide to a flow of granular material. Numerical modeling shows that a 0.0024-km3 landslide located in the presumed source area accounts well for the tsunami waves reported in Omoa Bay. We show that the striking amplification observed in Omoa Bay is related to the trapping of waves due to the shallow submarine shelf surrounding the island. These results stress the local tsunami hazard that should be taken into account in the natural hazard assessment and mitigation of the area, where historical cliff collapses can be observed and could happen again.
Powers, J.
1991-01-01
A number of terms (e.g., "hazardous chemicals," "hazardous materials," "hazardous waste," and similar nomenclature) refer to substances that are subject to regulation under one or more federal environmental laws. State laws and regulations also provide additional, similar, or identical terminology that may be confused with the federally defined terms. Many of these terms appear synonymous, and it is easy to use them interchangeably. However, in a regulatory context, inappropriate use of narrowly defined terms can lead to confusion about the substances referred to, the statutory provisions that apply, and the regulatory requirements for compliance under the applicable federal statutes. This Information Brief provides regulatory definitions, a brief discussion of compliance requirements, and references for the precise terminology that should be used when referring to "hazardous" substances regulated under federal environmental laws. A companion CERCLA Information Brief (EH-231-004/0191) addresses "toxic" nomenclature.
The federal government has established a system of labeling hazardous materials to help identify the type of material and threat posed. Summaries of information on over 300 chemicals are maintained in the Envirofacts Master Chemical Integrator.
ERIC Educational Resources Information Center
Vandas, Steve
1998-01-01
Focuses on hurricanes and tsunamis and uses these topics to address other parts of the science curriculum. In addition to a discussion on beach erosion, a poster is provided that depicts these natural hazards that threaten coastlines. (DDR)
Atmospheric electrical modeling in support of the NASA F106 Storm Hazards Project
NASA Technical Reports Server (NTRS)
Helsdon, J. H.
1986-01-01
With the use of composite (non-metallic) materials and microelectronics becoming more prevalent in the construction of both military and commercial aircraft, control systems have become more susceptible to damage or failure from electromagnetic transients. One source of such transients is the lightning discharge. In order to study the effects of the lightning discharge on the vital components of an aircraft, NASA Langley Research Center has undertaken a Storm Hazards Program in which a specially instrumented F106B jet aircraft is flown into active thunderstorms with the intention of being struck by lightning. One of the specific purposes of the program is to quantify the environmental conditions which are conducive to aircraft lightning strikes.
Sun, Yan; Lang, Maoxiang; Wang, Danzhu
2016-07-28
The transportation of hazardous materials is always accompanied by considerable risk that will impact public and environmental security. As an efficient and reliable transportation organization, a multimodal service should participate in the transportation of hazardous materials. In this study, we focus on transporting hazardous materials through the multimodal service network and explore the hazardous materials multimodal routing problem from the operational level of network planning. To formulate this problem more practicably, minimizing the total generalized costs of transporting the hazardous materials and the social risk along the planned routes are set as the optimization objectives. Meanwhile, the following formulation characteristics are comprehensively modelled: (1) specific customer demands; (2) multiple hazardous material flows; (3) capacitated schedule-based rail service and uncapacitated time-flexible road service; and (4) an environmental risk constraint. A bi-objective mixed integer nonlinear programming model is first built to formulate the routing problem that combines the formulation characteristics above. Then linear reformulations are developed to linearize and improve the initial model so that it can be solved effectively by exact algorithms in standard mathematical programming software. By utilizing the normalized weighted sum method, we can generate the Pareto solutions to the bi-objective optimization problem for a specific case. Finally, a large-scale empirical case study from the Beijing-Tianjin-Hebei Region in China is presented to demonstrate the feasibility of the proposed methods in dealing with the practical problem. Various scenarios are also discussed in the case study.
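The normalized weighted sum method mentioned above can be illustrated on a toy instance. The real problem is a bi-objective MINLP over a rail/road network; here three hypothetical routes with precomputed (cost, risk) pairs stand in for the network model:

```python
# Toy candidate routes with (generalized cost, social risk); invented values.
routes = {
    "road_only":  (90.0, 15.0),
    "rail_road":  (120.0, 8.0),
    "rail_heavy": (150.0, 5.0),
}

def normalize(values):
    """Rescale each objective to [0, 1] so the weights are comparable."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def weighted_sum_pareto(routes, steps=11):
    """Sweep the weight on normalized cost vs. risk and collect the
    scalarized optima (a subset of the Pareto-optimal routes)."""
    names = list(routes)
    cost_n = normalize([routes[n][0] for n in names])
    risk_n = normalize([routes[n][1] for n in names])
    front = set()
    for k in range(steps):
        w = k / (steps - 1)
        scores = [w * c + (1.0 - w) * r for c, r in zip(cost_n, risk_n)]
        front.add(names[scores.index(min(scores))])
    return front

pareto = weighted_sum_pareto(routes)
```

Each weight setting reduces the bi-objective problem to a single-objective one that an exact solver can handle, which is exactly why the scalarization pairs well with standard mathematical programming software.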
NASA Astrophysics Data System (ADS)
Miller, Craig A.; Williams-Jones, Glyn
2016-06-01
A new 3D geophysical model of the Mt Tongariro Volcanic Massif (TgVM), New Zealand, provides a high resolution view of the volcano's internal structure and hydrothermal system, from which we derive implications for volcanic hazards. Geologically constrained 3D inversions of potential field data provide a greater level of insight into the volcanic structure than is possible from unconstrained models. A complex region of gravity highs and lows (± 6 mGal) is set within a broader, ~ 20 mGal gravity low. A magnetic high (1300 nT) is associated with Mt Ngauruhoe, while a substantial, thick, demagnetised area occurs to the north, coincident with a gravity low and interpreted as representing the hydrothermal system. The hydrothermal system is constrained to the west by major faults, interpreted as an impermeable barrier to fluid migration, and extends to basement depth. These faults are considered low probability areas for future eruption sites, as there is little to indicate they have acted as magmatic pathways. Where the hydrothermal system coincides with steep topographic slopes, an increased likelihood of landslides is present, and the newly delineated hydrothermal system maps the area most likely to have phreatic eruptions. Such eruptions, while small on a global scale, are important hazards at the TgVM as it is a popular hiking area with hundreds of visitors per day in close proximity to eruption sites. The model shows that the volume of volcanic material erupted over the lifespan of the TgVM is five to six times greater than previous estimates, suggesting a higher rate of magma supply, in line with global rates of andesite production. We suggest that our model of physical property distribution can be used to provide constraints for other models of dynamic geophysical processes occurring at the TgVM.
McNamara, Daniel E.; Yeck, William; Barnhart, William D.; Schulte-Pelkum, V.; Bergman, E.; Adhikari, L. B.; Dixit, Amod; Hough, S.E.; Benz, Harley M.; Earle, Paul
2016-01-01
The Gorkha earthquake on April 25th, 2015 was a long anticipated, low-angle thrust-faulting event on the shallow décollement between the India and Eurasia plates. We present a detailed multiple-event hypocenter relocation analysis of the Mw 7.8 Gorkha Nepal earthquake sequence, constrained by local seismic stations, and a geodetic rupture model based on InSAR and GPS data. We integrate these observations to place the Gorkha earthquake sequence into a seismotectonic context and evaluate potential earthquake hazard. Major results from this study include (1) a comprehensive catalog of calibrated hypocenters for the Gorkha earthquake sequence; (2) the Gorkha earthquake ruptured a ~ 150 × 60 km patch of the Main Himalayan Thrust (MHT), the décollement defining the plate boundary at depth, over an area surrounding but predominantly north of the capital city of Kathmandu; (3) the distribution of aftershock seismicity surrounds the mainshock maximum slip patch; (4) aftershocks occur at or below the mainshock rupture plane with depths generally increasing to the north beneath the higher Himalaya, possibly outlining a 10–15 km thick subduction channel between the overriding Eurasian and subducting Indian plates; (5) the largest Mw 7.3 aftershock and the highest concentration of aftershocks occurred to the southeast of the mainshock rupture, on a segment of the MHT décollement that was positively stressed towards failure; (6) the near surface portion of the MHT south of Kathmandu shows no aftershocks or slip during the mainshock. Results from this study characterize the details of the Gorkha earthquake sequence and provide constraints on where earthquake hazard remains high, and thus where future, damaging earthquakes may occur in this densely populated region. Up-dip segments of the MHT should be considered high hazard for future damaging earthquakes.
Rusyn, Ivan; Sedykh, Alexander; Guyton, Kathryn Z.; Tropsha, Alexander
2012-01-01
Quantitative structure-activity relationship (QSAR) models are widely used for in silico prediction of in vivo toxicity of drug candidates or environmental chemicals, adding value to candidate selection in drug development or in a search for less hazardous and more sustainable alternatives for chemicals in commerce. The development of traditional QSAR models is enabled by numerical descriptors representing the inherent chemical properties that can be easily defined for any number of molecules; however, traditional QSAR models often have limited predictive power due to the lack of data and complexity of in vivo endpoints. Although it has been indeed difficult to obtain experimentally derived toxicity data on a large number of chemicals in the past, the results of quantitative in vitro screening of thousands of environmental chemicals in hundreds of experimental systems are now available and continue to accumulate. In addition, publicly accessible toxicogenomics data collected on hundreds of chemicals provide another dimension of molecular information that is potentially useful for predictive toxicity modeling. These new characteristics of molecular bioactivity arising from short-term biological assays, i.e., in vitro screening and/or in vivo toxicogenomics data can now be exploited in combination with chemical structural information to generate hybrid QSAR-like quantitative models to predict human toxicity and carcinogenicity. Using several case studies, we illustrate the benefits of a hybrid modeling approach, namely improvements in the accuracy of models, enhanced interpretation of the most predictive features, and expanded applicability domain for wider chemical space coverage. PMID:22387746
NASA Astrophysics Data System (ADS)
Murru, M.; Akinci, A.; Console, R.; Falcone, G.; Pucci, S.
2014-12-01
We show the effect of time-independent and time-dependent occurrence models on the seismic hazard estimations. The time-dependence is introduced by (1) the Brownian Passage Time (BPT) probability model, which is based on a simple physical model of the earthquake cycle, and (2) the fusion of the BPT renewal model with a physical model that considers the earthquake probability perturbation for interacting faults by static Coulomb stress changes. We treat the uncertainties in the fault parameters (e.g. slip rate, characteristic magnitude and aperiodicity) of the statistical distribution associated with each examined fault source by a Monte Carlo technique. To compare the results obtained from the three different models, we give the probabilities of occurrence of earthquakes Mw > 6.5 for individual fault sources in the Marmara region, over the future 5, 10, 30 and 50 years, starting on January 1, 2013, considering the 10th, 50th and 90th percentiles of the Monte Carlo distribution. In order to evaluate the impact of the earthquake probability models on ground motion hazard, we calculate fault-based probabilistic seismic hazard (PSHA) maps of mean Peak Ground Acceleration (PGA) having 10% probability of exceedance in 50 years on rock site conditions. We adopted a single Ground Motion Prediction Equation (GMPE) for active shallow crustal regions for assessing the ground shaking hazard in the Marmara region. We observed that the impact of the different occurrence models on the seismic hazard estimate of selected sites is quite high: the hazard may increase by more than 70% or decrease by as much as 70%, depending on the applied model at the selected sites. This difference mostly depends on the time elapsed since the latest major earthquake on a specific fault. We demonstrate that the estimated average recurrence time and the associated magnitude, together with the elapsed time, are crucial parameters in the earthquake probability calculations.
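The BPT conditional probabilities described above can be sketched directly, since the BPT distribution is an inverse Gaussian with mean recurrence time mu and aperiodicity alpha. The fault parameters below are hypothetical, not the Marmara values:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bpt_cdf(t, mu, alpha):
    """CDF of the Brownian Passage Time (inverse Gaussian) distribution
    with mean recurrence time mu and aperiodicity alpha."""
    u = math.sqrt(mu / t) / alpha
    return (norm_cdf(u * (t / mu - 1.0))
            + math.exp(2.0 / alpha**2) * norm_cdf(-u * (t / mu + 1.0)))

def conditional_prob(elapsed, window, mu, alpha):
    """P(event in (elapsed, elapsed + window] | no event since the last one)."""
    f0 = bpt_cdf(elapsed, mu, alpha)
    return (bpt_cdf(elapsed + window, mu, alpha) - f0) / (1.0 - f0)

# Hypothetical fault: mean recurrence 250 yr, aperiodicity 0.5, 30-yr window.
p_young = conditional_prob(100.0, 30.0, 250.0, 0.5)
p_old = conditional_prob(200.0, 30.0, 250.0, 0.5)
```

The growth of the conditional probability with elapsed time is exactly why the time elapsed since the latest major earthquake dominates the differences between the occurrence models.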
NASA Astrophysics Data System (ADS)
Matiella Novak, M. Alexandra
Volcanic ash clouds in the upper atmosphere (>10km) present a significant hazard to the aviation community and in some cases cause near-disastrous situations for aircraft that inadvertently encounter them. The two most commonly used techniques for mitigating hazards to aircraft from drifting volcanic clouds are (1) using data from satellite observations and (2) the forecasting of dispersion and trajectories with numerical models. This dissertation aims to aid in the mitigation of this hazard by using Moderate Resolution Imaging Spectroradiometer (MODIS) and Advanced Very High Resolution Radiometer (AVHRR) infrared (IR) satellite data to quantitatively analyze and constrain the uncertainties in the PUFF volcanic ash transport model. Furthermore, this dissertation has experimented with the viability of combining IR data with the PUFF model to increase the model's reliability. Comparing IR satellite data with forward transport models provides valuable information concerning the uncertainty and sensitivity of the transport models. A study analyzing the viability of combining satellite-based information with the PUFF model was also done. Factors controlling the cloud-shape evolution, such as the horizontal dispersion coefficient, vertical distribution of particles, the height of the cloud, and the location of the cloud were all updated based on observations from satellite data in an attempt to increase the reliability of the simulations. Comparing center of mass locations, calculated from satellite data, to HYSPLIT trajectory simulations provides insight into the vertical distribution of the cloud. A case study of the May 10, 2003 Anatahan Volcano eruption was undertaken to assess methods of calculating errors in PUFF simulations with respect to the transport and dispersion of the erupted cloud. An analysis of the factors controlling the cloud-shape evolution of the cloud in the model was also completed and compared to the shape evolution of the cloud observed in the
NASA Astrophysics Data System (ADS)
Mamy Rakotoarisoa, Mahefa; Fleurant, Cyril; Taibi, Nuscia; Razakamanana, Théodore
2016-04-01
Hydrological risks, especially floods, are recurrent on the Fiherenana watershed in southwest Madagascar. The city of Toliara, which is located at the outlet of the river basin, is subjected each year to hurricane hazards and floods. The stakes are of major importance in this part of the island. This study begins with the analysis of hazard by collecting all existing hydro-climatic data on the catchment. It then seeks to determine trends, despite the significant lack of data, using simple statistical models (decomposition of time series). Then, two approaches are used to assess the vulnerability of the city of Toliara and the surrounding villages. The first is a static approach based on field surveys and GIS. The second is a multi-agent-based simulation model. The first step is the mapping of a vulnerability index that combines several static criteria. This is a microscale indicator (the scale used is the individual house). For each house, there are several vulnerability criteria, such as the potential water depth, the flow rate, and the architectural typology of the building. For the second part, agent-based simulations are used to evaluate the degree of vulnerability of homes to flooding. Agents are individual entities to which behaviours can be assigned in order to simulate a given phenomenon. The aim is not to assign a criterion to the house as a physical building, such as its architectural typology or its strength, but to estimate the chances of the occupants of the house escaping a catastrophic flood. For this purpose, we compare various settings and scenarios. Some scenarios take into account the effect of certain decisions made by the responsible entities (information and awareness campaigns for the villagers, for example). The simulation consists of two essential parts taking place simultaneously in time: simulation of the rise of water and the flow using
Holland, B
1987-01-01
A hazards model was used to estimate the relative risks of infant mortality at various points during the 1st year of life among Malaysian infants who were breastfed for various durations. Data on infant mortality, breastfeeding, and social variables were derived from the retrospective Malaysian Family Life Survey. To provide adequate samples in subperiods of the 1st year of life, analysis intervals were constructed starting at ages 0, 2, 4, and 7 months, and including up to 13 months of exposure. The preferred models for the 1st 3 analysis intervals included breastfeeding as a predictor of infant mortality. It is a particularly significant determinant in the 1st and 3rd intervals. The relative risk of death among those who received food other than human milk was 6.26 compared to those who did not, and the infant who was never breastfed was 12 times more likely to die than the infant who was breastfed at some time. Infants breastfed for intermediate durations had intermediate effect estimates. In each analysis interval, the regression coefficient for unsupplemented breastfeeding was of larger magnitude than that for supplemented breastfeeding. Overall, this study shows that breastfeeding is an important determinant of infant mortality in Malaysia. Studies with larger samples are urged to confirm the preliminary finding of a monotonic relationship between breastfeeding duration and lower infant mortality risks. However, this analysis demonstrates the utility of hazard model methodology as a powerful tool for calculating relative risk estimates when the sample size is relatively small and there are numerous covariates.
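Under a proportional hazards model, a regression coefficient converts to a relative risk as exp(beta); the 12-fold risk quoted above corresponds to a coefficient of ln(12) ≈ 2.48. A minimal sketch (the covariate profiles and coefficients are illustrative, not the study's fitted values):

```python
import math

def hazard_ratio(beta, x_a, x_b):
    """Relative risk of covariate profile x_a versus x_b under a
    proportional hazards model h(t|x) = h0(t) * exp(beta . x)."""
    return math.exp(sum(b * (a - c) for b, a, c in zip(beta, x_a, x_b)))

# A coefficient of ln(12) on a "never breastfed" indicator reproduces
# the 12-fold risk relative to an infant breastfed at some time.
rr = hazard_ratio([math.log(12.0)], [1.0], [0.0])
```

Because the baseline hazard h0(t) cancels in the ratio, this conversion holds within each analysis interval regardless of how the baseline varies over the 1st year of life.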
Pallister, J.S.; Hoblitt, R.P.; Crandell, D.R.; Mullineaux, D.R.
1992-01-01
Available geophysical and geologic data provide a simplified model of the current magmatic plumbing system of Mount St. Helens (MSH). This model and new geochemical data are the basis for the revised hazards assessment presented here. The assessment is weighted by the style of eruptions and the chemistry of magmas erupted during the past 500 years, the interval for which the most detailed stratigraphic and geochemical data are available. This interval includes the Kalama (A.D. 1480-1770s?), Goat Rocks (A.D. 1800-1857), and current eruptive periods. In each of these periods, silica content decreased, then increased. The Kalama is a large amplitude chemical cycle (SiO2: 57%-67%), produced by mixing of arc dacite, which is depleted in high field-strength and incompatible elements, with enriched (OIB-like) basalt. The Goat Rocks and current cycles are of small amplitude (SiO2: 61%-64% and 62%-65%) and are related to the fluid dynamics of magma withdrawal from a zoned reservoir. The cyclic behavior is used to forecast future activity. The 1980-1986 chemical cycle, and consequently the current eruptive period, appears to be virtually complete. This inference is supported by the progressively decreasing volumes and volatile contents of magma erupted since 1980, both changes that suggest a decreasing potential for a major explosive eruption in the near future. However, recent changes in seismicity and a series of small gas-release explosions (beginning in late 1989 and accompanied by eruption of a minor fraction of relatively low-silica tephra on 6 January and 5 November 1990) suggest that the current eruptive period may continue to produce small explosions and that a small amount of magma may still be present within the conduit. The gas-release explosions occur without warning and pose a continuing hazard, especially in the crater area. An eruption as large or larger than that of 18 May 1980 (~0.5 km3 dense-rock equivalent) probably will occur only if magma rises from
Zucca, J J; Walter, W R; Rodgers, A J; Richards, P; Pasyanos, M E; Myers, S C; Lay, T; Harris, D; Antoun, T
2008-11-19
The last ten years have brought rapid growth in the development and use of three-dimensional (3D) seismic models of Earth structure at crustal, regional and global scales. In order to explore the potential for 3D seismic models to contribute to important societal applications, Lawrence Livermore National Laboratory (LLNL) hosted a 'Workshop on Multi-Resolution 3D Earth Models to Predict Key Observables in Seismic Monitoring and Related Fields' on June 6 and 7, 2007 in Berkeley, California. The workshop brought together academic, government and industry leaders in the research programs developing 3D seismic models and methods for the nuclear explosion monitoring and seismic ground motion hazard communities. The workshop was designed to assess the current state of work in 3D seismology and to discuss a path forward for determining if and how 3D Earth models and techniques can be used to achieve measurable increases in our capabilities for monitoring underground nuclear explosions and characterizing seismic ground motion hazards. This paper highlights some of the presentations, issues, and discussions at the workshop and proposes two specific paths by which to begin quantifying the potential contribution of progressively refined 3D seismic models in critical applied arenas. Seismic monitoring agencies are tasked with detection, location, and characterization of seismic activity in near real time. In the case of nuclear explosion monitoring or seismic hazard, decisions to further investigate a suspect event or to launch disaster relief efforts may rely heavily on real-time analysis and results. Because these are weighty decisions, monitoring agencies are regularly called upon to meticulously document and justify every aspect of their monitoring system. In order to meet this level of scrutiny and maintain operational robustness requirements, only mature technologies are considered for operational monitoring systems, and operational technology necessarily lags
Computer Models Used to Support Cleanup Decision Making at Hazardous and Radioactive Waste Sites
This report is a product of the Interagency Environmental Pathway Modeling Workgroup. This report will help bring a uniform approach to solving environmental modeling problems common to site remediation and restoration efforts.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-26
... Availability. SUMMARY: The Notice of Availability (NOA) for adoption of TSTF-475, Revision 1, using the... (72 FR 63935). The prior NOA followed the CLIIP and contained a model safety evaluation, a model... (NSHCD). The purpose of this NOA is to revise the model proposed NSHCD. Technical Specifications...
NASA Astrophysics Data System (ADS)
Tappin, David R.
2015-04-01
the resolution necessary to identify the hazard from landslides, particularly along convergent margins where this hazard is the greatest. Multibeam mapping of the deep seabed requires low-frequency sound sources that, because of their correspondingly low resolution, cannot produce the detail required to identify the finest-scale features. In addition, outside of most countries there are no repeat surveys that would allow seabed changes to be identified; perhaps only Japan has such data. In the near future, as research budgets shrink and ship time becomes ever more expensive, new strategies will have to be used to make the best use of the vessels available. Remote AUV technology is almost certainly the answer, and should be increasingly utilised to map the seabed while the mother ship is better used to carry out other duties, such as sampling or seismic data acquisition. In the deep ocean this will have the advantage of acquiring higher-resolution data from high-frequency multibeams. This talk presents a number of projects that show the evolution of the use of MBES in mapping submarine landslides since the PNG tsunami. Data from PNG are presented, together with data from Japan, Hawaii and the NE Atlantic. New multibeam acquisition methodologies are also discussed.
CoSMoS (Coastal Storm Modeling System) Southern California v3.0 Phase 2 storm-hazard projections
Barnard, Patrick; Erikson, Li; O'Neill, Andrea; Foxgrover, Amy; Herdman, Liv
2017-01-01
The Coastal Storm Modeling System (CoSMoS) makes detailed predictions (meter-scale) over large geographic scales (100s of kilometers) of storm-induced coastal flooding and erosion for both current and future SLR scenarios, as well as long-term shoreline change and cliff retreat. Resulting projections for future climate scenarios (sea-level rise and storms) provide emergency responders and coastal planners with critical storm-hazards information that can be used to increase public safety, mitigate physical damages, and more effectively manage and allocate resources within complex coastal settings. Several versions of CoSMoS have been implemented for areas of the California coast, including Southern California, Central California, and San Francisco Bay, and further versions will be incorporated as additional regions and improvements are developed.
Chen, Ling; Sun, Jianguo; Xiong, Chengjie
2016-01-01
Clustered interval-censored failure time data can occur when the failure time of interest is collected from several clusters and known only within certain time intervals. Regression analysis of clustered interval-censored failure time data is discussed assuming that the data arise from the semiparametric additive hazards model. A multiple imputation approach is proposed for inference. A major advantage of the approach is its simplicity because it avoids estimating the correlation within clusters by implementing a resampling-based method. The presented approach can be easily implemented by using the existing software packages for right-censored failure time data. Extensive simulation studies are conducted, indicating that the proposed imputation approach performs well for practical situations. The proposed approach also performs well compared to the existing methods and can be more conveniently applied to various types of data representation. The proposed methodology is further demonstrated by applying it to a lymphatic filariasis study. PMID:27773956
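The core of the imputation strategy described above can be sketched in a few lines. The sketch below is a simplified illustration only, not the authors' exact procedure: it uses synthetic exponential failure times, imputes an exact event time uniformly within each censoring interval (rather than from the fitted additive hazards model), ignores the within-cluster resampling step, and pools the repeated estimates with Rubin's rules.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic interval-censored data: true exponential failure times,
# observed only as (left, right) inspection intervals of width 0.5.
true_rate = 2.0
t = rng.exponential(1.0 / true_rate, size=500)
left = np.floor(t / 0.5) * 0.5
right = left + 0.5

# Multiple imputation: draw an exact time uniformly within each interval,
# estimate the exponential rate from the completed data, repeat M times.
M = 20
estimates, variances = [], []
for _ in range(M):
    t_imp = rng.uniform(left, right)
    rate_hat = len(t_imp) / t_imp.sum()          # exponential MLE
    estimates.append(rate_hat)
    variances.append(rate_hat**2 / len(t_imp))   # asymptotic variance of the MLE

# Rubin's rules: pool the point estimates and combine within- and
# between-imputation variance components.
est = np.mean(estimates)
within = np.mean(variances)
between = np.var(estimates, ddof=1)
total_var = within + (1 + 1 / M) * between

print(f"pooled rate estimate: {est:.3f} (true {true_rate})")
print(f"pooled std. error:    {np.sqrt(total_var):.3f}")
```

Because each completed data set looks like ordinary right-censored (here, fully observed) data, the per-imputation fit can be handed to any standard survival routine, which is the simplicity advantage the abstract emphasizes. Note that naive uniform imputation is slightly biased for coarse intervals; drawing from the conditional failure distribution, as in model-based imputation, removes this.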
NASA Astrophysics Data System (ADS)
Meletti, C.
2013-05-01
In 2003, a large national project for updating the seismic hazard map and the seismic zoning in Italy started, according to the rules fixed by an Ordinance of the Italian Prime Minister. New input elements for probabilistic seismic hazard assessment were compiled: the earthquake catalogue, the seismogenic zonation, the catalogue completeness, and a set of new attenuation relationships. The map of expected PGA on rock soil conditions with 10% probability of exceedance is the new reference seismic hazard map for Italy (http://zonesismiche.mi.ingv.it). Subsequently, nine further probabilities of exceedance, the uniform hazard spectra up to 2 seconds, and the disaggregation of the PGA were also released. A comprehensive seismic hazard model that fully describes the seismic hazard in Italy then became available, accessible through a webGIS application (http://esse1-gis.mi.ingv.it/en.php). This detailed information made it possible to change the approach to evaluating the proper seismic action for design: from a zone-dependent approach (Italy previously had 4 seismic zones, each with a single design spectrum) to a site-dependent one, in which the design spectrum is defined at each site of a grid of about 11,000 points covering the whole national territory. The new building code became mandatory only after the 6 April 2009 L'Aquila earthquake, the first strong event in Italy after the release of the seismic hazard map. The large number of recordings and the values of the experienced accelerations prompted comparisons between the recorded spectra and the spectra defined in the seismic codes. Even though such comparisons would be robust only after several consecutive 50-year periods of observation, and in a probabilistic approach no single observation can validate or invalidate the hazard estimate, some of the comparisons that can be undertaken between the observed ground motions and the hazard model used for the seismic code have been performed and have shown that the
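The "probability of exceedance" convention used for maps like this one relates to a return period through the standard Poisson occurrence assumption (this conversion is background, not part of the abstract):

```python
import math

def return_period(p_exceed: float, exposure_years: float) -> float:
    """Return period T_R such that the probability of at least one
    exceedance in `exposure_years` equals `p_exceed`, under a Poisson
    occurrence model: p = 1 - exp(-exposure / T_R)."""
    return -exposure_years / math.log(1.0 - p_exceed)

# The common design case: 10% in 50 years corresponds to a
# return period of about 475 years.
print(round(return_period(0.10, 50)))   # 475

# A rarer level, often used for critical facilities: 2% in 50 years.
print(round(return_period(0.02, 50)))   # 2475
```

Inverting the same relation for the other probability levels released with the Italian model yields the corresponding return periods for each hazard map.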
Fthenakis, V.M.; Blewitt, D.N.; Hague, W.J.
1995-05-01
OSHA Process Safety Management guidelines suggest that a facility operator investigate and document a plan for installing systems to detect, contain, or mitigate accidental releases if such systems are not already in place. In addition, proposed EPA 112(r) regulations would require such analysis. This paper illustrates how mathematical modelling can aid such an evaluation and describes some recent enhancements of the HGSPRAY model: (1) Adding algorithms for modeling NH{sub 3} and LNG mitigation; (2) Modeling spraying of releases with fire water monitors encircling the point of release; (3) Combining wind tunnel modeling with mathematical modeling; and (4) Linking HGSPRAY and BEGADAS. Case studies are presented as examples of how HGSPRAY can aid the design of water spray systems for mitigation of toxic gases (e.g., HF, NH{sub 3}) or dilution/dispersion of flammable vapors (e.g., LNG).
Scaling and universality in proportional elections.
Fortunato, Santo; Castellano, Claudio
2007-09-28
A most debated t