Proportional Hazards Models of Graduation
ERIC Educational Resources Information Center
Chimka, Justin R.; Reed-Rhoads, Teri; Barker, Kash
2008-01-01
Survival analysis is a statistical tool used to describe the duration between events. Many processes in medical research, engineering, and economics can be described using survival analysis techniques. This research involves studying engineering college student graduation using Cox proportional hazards models. Among male students with American…
Sample size calculation for the proportional hazards cure model.
Wang, Songfeng; Zhang, Jiajia; Lu, Wenbin
2012-12-20
In clinical trials with time-to-event endpoints, it is not uncommon to see a significant proportion of patients being cured (or long-term survivors), such as in trials for non-Hodgkin's lymphoma. The popularly used sample size formula derived under the proportional hazards (PH) model may not be appropriate for designing a survival trial with a cure fraction, because the PH model assumption may be violated. To account for a cure fraction, the PH cure model is widely used in practice, where a PH model is used for survival times of uncured patients and a logistic distribution is used for the probability of patients being cured. In this paper, we develop a sample size formula on the basis of the PH cure model by investigating the asymptotic distributions of the standard weighted log-rank statistics under the null and local alternative hypotheses. The derived sample size formula under the PH cure model is more flexible because it can be used to test differences in the short-term survival and/or the cure fraction. Furthermore, we investigate, through numerical examples, the impacts of accrual methods and durations of accrual and follow-up periods on sample size calculation. The results show that ignoring the cure rate in sample size calculation can lead to either underpowered or overpowered studies. We evaluate the performance of the proposed formula by simulation studies and provide an example to illustrate its application with the use of data from a melanoma trial. PMID:22786805
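For contrast with the cure-model formula developed in this paper, the standard required-events count under the plain PH assumption (Schoenfeld's formula, which the abstract notes may be inappropriate when a cure fraction is present) can be sketched in a few lines; the function name and defaults below are illustrative:

```python
from math import ceil, log
from statistics import NormalDist

def schoenfeld_events(hr, alpha=0.05, power=0.80, p_alloc=0.5):
    """Number of events needed to detect hazard ratio `hr` with a
    two-sided log-rank test under proportional hazards (Schoenfeld's
    formula); `p_alloc` is the fraction allocated to one arm."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_b = z.inv_cdf(power)          # power quantile
    return ceil((z_a + z_b) ** 2 / (p_alloc * (1 - p_alloc) * log(hr) ** 2))

# Detecting HR = 0.7 with 80% power at alpha = 0.05, 1:1 allocation:
print(schoenfeld_events(0.7))  # 247 events
```

Note the formula counts events, not patients; the total sample size then depends on the accrual and follow-up assumptions the paper examines.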
A Mixture Proportional Hazards Model with Random Effects for Response Times in Tests
ERIC Educational Resources Information Center
Ranger, Jochen; Kuhn, Jörg-Tobias
2016-01-01
In this article, a new model for test response times is proposed that combines latent class analysis and the proportional hazards model with random effects in a similar vein as the mixture factor model. The model assumes the existence of different latent classes. In each latent class, the response times are distributed according to a…
Joeng, Hee-Koung; Chen, Ming-Hui; Kang, Sangwook
2015-01-01
Discrete survival data are routinely encountered in many fields of study including behavioral science, economics, epidemiology, medicine, and social science. In this paper, we develop a class of proportional exponentiated link transformed hazards (ELTH) models. We carry out a detailed examination of the role of links in fitting discrete survival data and estimating regression coefficients. Several interesting results are established regarding the choice of links and baseline hazards. We also characterize the conditions for improper survival functions and the conditions for existence of the maximum likelihood estimates under the proposed ELTH models. An extensive simulation study is conducted to examine the empirical performance of the parameter estimates under the Cox proportional hazards model by treating discrete survival times as continuous survival times, and the model comparison criteria, AIC and BIC, in determining links and baseline hazards. A SEER breast cancer dataset is analyzed in detail to further demonstrate the proposed methodology. PMID:25772374
ERIC Educational Resources Information Center
Rasmussen, Andrew
2004-01-01
This study extends literature on recidivism after teen court to add system-level variables to demographic and sentence content as relevant covariates. Interviews with referral agents and survival analysis with proportional hazards regression supplement quantitative models that include demographic, sentencing, and case-processing variables in a…
ELASTIC NET FOR COX'S PROPORTIONAL HAZARDS MODEL WITH A SOLUTION PATH ALGORITHM.
Wu, Yichao
2012-01-01
For least squares regression, Efron et al. (2004) proposed an efficient solution path algorithm, the least angle regression (LAR). They showed that a slight modification of the LAR leads to the whole LASSO solution path. Both the LAR and LASSO solution paths are piecewise linear. Recently Wu (2011) extended the LAR to generalized linear models and the quasi-likelihood method. In this work we extend the LAR further to handle Cox's proportional hazards model. The goal is to develop a solution path algorithm for the elastic net penalty (Zou and Hastie (2005)) in Cox's proportional hazards model. This goal is achieved in two steps. First we extend the LAR to optimizing the log partial likelihood plus a fixed small ridge term. Then we define a path modification, which leads to the solution path of the elastic net regularized log partial likelihood. Our solution path is exact and piecewise determined by ordinary differential equation systems. PMID:23226932
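The objective whose solution path the algorithm traces, the log partial likelihood plus an elastic net penalty, can be written out directly. The toy sketch below (single covariate, Breslow-style risk sets, no tied times, all names hypothetical) only evaluates that objective; it does not implement the path algorithm itself:

```python
from math import exp, log

def neg_log_partial_likelihood(beta, times, events, x):
    """Negative log partial likelihood for a single covariate,
    assuming no tied event times."""
    nll = 0.0
    for i, d_i in enumerate(events):
        if not d_i:
            continue  # censored subjects contribute only through risk sets
        risk = [exp(beta * x_j) for t_j, x_j in zip(times, x) if t_j >= times[i]]
        nll -= beta * x[i] - log(sum(risk))
    return nll

def elastic_net_objective(beta, times, events, x, lam, alpha):
    """Penalized objective: -log partial likelihood plus the elastic net
    penalty lam * (alpha*|beta| + (1-alpha)/2 * beta**2)."""
    penalty = lam * (alpha * abs(beta) + 0.5 * (1 - alpha) * beta ** 2)
    return neg_log_partial_likelihood(beta, times, events, x) + penalty

times  = [2.0, 3.0, 5.0, 8.0]  # toy data, all times distinct
events = [1, 1, 1, 0]          # 1 = event observed, 0 = censored
x      = [0.5, -1.0, 0.3, 1.2]

# At beta = 0 the penalty vanishes and every risk-set member has weight 1,
# so the objective reduces to log(4) + log(3) + log(2):
print(elastic_net_objective(0.0, times, events, x, lam=1.0, alpha=0.5))  # ≈ 3.178
```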
Devarajan, Karthik; Ebrahimi, Nader
2010-01-01
The assumption of proportional hazards (PH) fundamental to the Cox PH model sometimes may not hold in practice. In this paper, we propose a generalization of the Cox PH model in terms of the cumulative hazard function taking a form similar to the Cox PH model, with the extension that the baseline cumulative hazard function is raised to a power function. Our model allows for interaction between covariates and the baseline hazard and it also includes, for the two-sample problem, the case of two Weibull distributions and two extreme value distributions differing in both scale and shape parameters. The partial likelihood approach cannot be applied here to estimate the model parameters. We use the full likelihood approach via a cubic B-spline approximation for the baseline hazard to estimate the model parameters. A semi-automatic procedure for knot selection based on Akaike's Information Criterion is developed. We illustrate the applicability of our approach using real-life data. PMID:21076652
On Estimation of Covariate-Specific Residual Time Quantiles under the Proportional Hazards Model
Crouch, Luis Alexander; May, Susanne; Chen, Ying Qing
2015-01-01
Estimation and inference in time-to-event analysis typically focus on hazard functions and their ratios under the Cox proportional hazards model. These hazard functions, while popular in the statistical literature, are not always easily or intuitively communicated in clinical practice, such as in the settings of patient counseling or resource planning. Expressing and comparing quantiles of event times may allow for easier understanding. In this article we focus on residual time, i.e., the remaining time-to-event at an arbitrary time t given that the event has yet to occur by t. In particular, we develop estimation and inference procedures for covariate-specific quantiles of the residual time under the Cox model. Our methods and theory are assessed by simulations, and demonstrated in analysis of two real data sets. PMID:26058825
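As a hedged illustration of the quantity the authors estimate, here is the covariate-specific residual-time quantile in the special case of an exponential baseline hazard, where a closed form exists (the paper's method is far more general and does not assume this baseline; all names below are illustrative):

```python
from math import exp, log

def residual_time_quantile(p, t, lam, lin_pred):
    """p-th quantile of the remaining time-to-event at time t under a
    Cox model with exponential baseline hazard `lam` and linear
    predictor `lin_pred`: solves S(t + q | x) / S(t | x) = 1 - p
    with S(u | x) = exp(-lam * exp(lin_pred) * u)."""
    # The exponential baseline is memoryless, so q does not depend on t.
    return -log(1 - p) / (lam * exp(lin_pred))

# Median residual life for a baseline subject (lin_pred = 0), lam = 0.1:
print(round(residual_time_quantile(0.5, t=3.0, lam=0.1, lin_pred=0.0), 3))  # 6.931
```

For a general baseline, the same equation must be solved numerically using an estimated baseline survival function, which is where the paper's estimation and inference machinery comes in.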
NASA Technical Reports Server (NTRS)
Thompson, Laura A.; Chhikara, Raj S.; Conkin, Johnny
2003-01-01
In this paper we fit Cox proportional hazards models to a subset of data from the Hypobaric Decompression Sickness Databank. The data bank contains records on the time to decompression sickness (DCS) and venous gas emboli (VGE) for over 130,000 person-exposures to high altitude in chamber tests. The subset we use contains 1,321 records, with 87% censoring, and has the most recent experimental tests on DCS made available from Johnson Space Center. We build on previous analyses of this data set by considering more expanded models and more detailed model assessments specific to the Cox model. Our model - which is stratified on the quartiles of the final ambient pressure at altitude - includes the final ambient pressure at altitude as a nonlinear continuous predictor, the computed tissue partial pressure of nitrogen at altitude, and whether exercise was done at altitude. We conduct various assessments of our model, many of which were recently developed in the statistical literature, and conclude where the model needs improvement. We consider the addition of frailties to the stratified Cox model, but find that no significant gain is attained over a model without frailties. Finally, we validate some of the models that we fit.
Proportional-hazards models for improving the analysis of light-water-reactor-component failure data
Booker, J.B.; Johnson, M.E.; Easterling, R.G.
1981-01-01
The reliability of a power plant component may depend on a variety of factors (or covariates). If a single regression model can be specified to relate these factors to the failure rate, then all available data can be used to estimate and test for the effects of these covariates. One such model is a proportional hazards function that is specified as a product of two terms: a nominal hazard rate that is a function of time and a second term that is a function of the covariates. The purpose of this paper is to adapt two such models to LWR valve failure rate analysis, to compare the results, and to discuss the strengths and weaknesses of these applications.
Measures to assess the prognostic ability of the stratified Cox proportional hazards model.
2009-02-01
Many measures have been proposed to summarize the prognostic ability of the Cox proportional hazards (CPH) survival model, although none is universally accepted for general use. By contrast, little work has been done to summarize the prognostic ability of the stratified CPH model; such measures would be useful in analyses of individual participant data from multiple studies, data from multi-centre studies, and in single study analysis where stratification is used to avoid making assumptions of proportional hazards. We have chosen three measures developed for the unstratified CPH model (Schemper and Henderson's V, Harrell's C-index and Royston and Sauerbrei's D), adapted them for use with the stratified CPH model and demonstrated how their values can be represented over time. Although each of these measures is promising in principle, we found the measure of explained variation V very difficult to apply when data are combined from several studies with differing durations of participant follow-up. The two other measures considered, D and the C-index, were more applicable under such circumstances. We illustrate the methods using individual participant data from several prospective epidemiological studies of chronic disease outcomes. PMID:18833567
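One of the three measures, Harrell's C-index, is simple enough to sketch for the unstratified case (the stratified adaptation studied in the paper would additionally restrict comparisons to pairs within the same stratum). This O(n²) toy version assumes no tied event times:

```python
def harrell_c_index(times, events, risk_scores):
    """Harrell's C: among comparable pairs (the earlier of the two times
    is an observed event), the fraction in which the subject with the
    higher predicted risk fails first; risk-score ties count 1/2."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # usable pair: i has an observed event strictly before j's time
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

times  = [1, 2, 3, 4]
events = [1, 1, 0, 1]           # third subject censored
scores = [4.0, 3.0, 2.0, 1.0]   # higher score = higher predicted risk
print(harrell_c_index(times, events, scores))  # 1.0 (perfectly concordant)
```

A value of 0.5 corresponds to a model with no discrimination and 1.0 to perfect ranking of failure order by predicted risk.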
Jackson, Dan; White, Ian R; Seaman, Shaun; Evans, Hannah; Baisley, Kathy; Carpenter, James
2014-11-30
The Cox proportional hazards model is frequently used in medical statistics. The standard methods for fitting this model rely on the assumption of independent censoring. Although this is sometimes plausible, we often wish to explore how robust our inferences are as this untestable assumption is relaxed. We describe how this can be carried out in a way that makes the assumptions accessible to all those involved in a research project. Estimation proceeds via multiple imputation, where censored failure times are imputed under user-specified departures from independent censoring. A novel aspect of our method is the use of bootstrapping to generate proper imputations from the Cox model. We illustrate our approach using data from an HIV-prevention trial and discuss how it can be readily adapted and applied in other settings. PMID:25060703
On penalized likelihood estimation for a non-proportional hazards regression model.
Devarajan, Karthik; Ebrahimi, Nader
2013-07-01
In this paper, a semi-parametric generalization of the Cox model that permits crossing hazard curves is described. A theoretical framework for estimation in this model is developed based on penalized likelihood methods. It is shown that the optimal solution to the baseline hazard, baseline cumulative hazard and their ratio are hyperbolic splines with knots at the distinct failure times. PMID:24791034
REGULARIZATION FOR COX’S PROPORTIONAL HAZARDS MODEL WITH NP-DIMENSIONALITY*
Fan, Jianqing; Jiang, Jiancheng
2011-01-01
High-throughput genetic sequencing arrays with thousands of measurements per sample, together with a great amount of related censored clinical data, have created a demanding need for better measurement-specific model selection. In this paper we establish strong oracle properties of non-concave penalized methods for non-polynomial (NP) dimensional data with censoring in the framework of Cox's proportional hazards model. A class of folded-concave penalties is employed, and both the LASSO and SCAD are discussed specifically. We address the question of under which dimensionality and correlation restrictions an oracle estimator can be constructed. It is demonstrated that non-concave penalties lead to significant reduction of the "irrepresentable condition" needed for LASSO model selection consistency. A large deviation result for martingales, of interest in its own right, is developed for characterizing the strong oracle property. Moreover, the non-concave regularized estimator is shown to achieve asymptotically the information bound of the oracle estimator. A coordinate-wise algorithm is developed for finding the grid of solution paths for penalized hazard regression problems, and its performance is evaluated on simulated and gene association study examples. PMID:23066171
Sparse estimation of Cox proportional hazards models via approximated information criteria.
Su, Xiaogang; Wijayasinghe, Chalani S; Fan, Juanjuan; Zhang, Ying
2016-09-01
We propose a new sparse estimation method for Cox (1972) proportional hazards models by optimizing an approximated information criterion. The main idea involves approximation of the ℓ0 norm with a continuous or smooth unit dent function. The proposed method bridges the best subset selection and regularization by borrowing strength from both. It mimics the best subset selection using a penalized likelihood approach yet with no need of a tuning parameter. We further reformulate the problem with a reparameterization step so that it reduces to one unconstrained nonconvex yet smooth programming problem, which can be solved efficiently as in computing the maximum partial likelihood estimator (MPLE). Furthermore, the reparameterization tactic yields an additional advantage in terms of circumventing postselection inference. The oracle property of the proposed method is established. Both simulated experiments and empirical examples are provided for assessment and illustration. PMID:26873398
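The abstract's key idea, replacing the discontinuous ℓ0 penalty by a smooth "unit dent" function, can be illustrated with one common smooth surrogate. The exact dent function used by the authors is not given in the abstract, so the Gaussian-type bump below is purely illustrative:

```python
from math import exp

def smooth_l0(beta, tau=0.1):
    """Smooth surrogate for the l0 'norm' of one coefficient: near 0 for
    beta close to 0, near 1 for |beta| much larger than sqrt(tau).
    (Illustrative choice only; the paper's unit dent may differ.)"""
    return 1.0 - exp(-(beta ** 2) / tau)

def approx_l0_penalty(betas, tau=0.1):
    """Approximate count of nonzero coefficients, usable as a smooth
    stand-in for the model-size term of an information criterion."""
    return sum(smooth_l0(b, tau) for b in betas)

betas = [0.0, 0.001, 2.5, -1.7]
print(round(approx_l0_penalty(betas), 3))  # 2.0: two coefficients are 'large'
```

Because the surrogate is smooth, the penalized partial likelihood can be optimized with standard gradient-based routines, which is the computational advantage the abstract emphasizes.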
A Bayesian proportional hazards regression model with non-ignorably missing time-varying covariates
Bradshaw, Patrick T.; Ibrahim, Joseph G.; Gammon, Marilie D.
2010-01-01
Missing covariate data is common in observational studies of time to an event, especially when covariates are repeatedly measured over time. Failure to account for the missing data can lead to bias or loss of efficiency, especially when the data are non-ignorably missing. Previous work has focused on the case of fixed covariates rather than those that are repeatedly measured over the follow-up period, so here we present a selection model that allows for proportional hazards regression with time-varying covariates when some covariates may be non-ignorably missing. We develop a fully Bayesian model and obtain posterior estimates of the parameters via the Gibbs sampler in WinBUGS. We illustrate our model with an analysis of post-diagnosis weight change and survival after breast cancer diagnosis in the Long Island Breast Cancer Study Project (LIBCSP) follow-up study. Our results indicate that post-diagnosis weight gain is associated with lower all-cause and breast cancer specific survival among women diagnosed with new primary breast cancer. Our sensitivity analysis showed only slight differences between models with different assumptions on the missing data mechanism yet the complete case analysis yielded markedly different results. PMID:20960582
Mediation Analysis with Survival Outcomes: Accelerated Failure Time vs. Proportional Hazards Models
Gelfand, Lois A.; MacKinnon, David P.; DeRubeis, Robert J.; Baraldi, Amanda N.
2016-01-01
Objective: Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. Method: We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. Results: AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome—underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. Conclusions: When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results. PMID:27065906
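The PH/AFT comparison is cleanest under the Weibull assumption the simulations use, since a Weibull model is simultaneously PH and AFT. A sketch of the standard coefficient conversion (function and variable names are illustrative):

```python
import math

def weibull_ph_to_aft(beta_ph, shape):
    """Under a Weibull model the PH and AFT parameterizations describe
    the same distribution: a log-hazard-ratio coefficient beta_ph maps
    to the log-time (acceleration) coefficient -beta_ph / shape."""
    return -beta_ph / shape

# A treatment halving the hazard (log HR = log 0.5) with Weibull shape 2
# multiplies survival times by 0.5 ** (-1/2) = sqrt(2):
time_ratio = math.exp(weibull_ph_to_aft(math.log(0.5), shape=2.0))
print(round(time_ratio, 4))  # 1.4142
```

This equivalence is what makes the Weibull setting a fair testbed for comparing PHREG- and LIFEREG-based mediation analyses: both procedures are fitting correctly specified models of the same data-generating process.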
Mortality and socio-economic differences in Denmark: a competing risks proportional hazard model.
Munch, Jakob Roland; Svarer, Michael
2005-03-01
This paper explores how mortality is related to such socio-economic factors as education, occupation, skill level and income for the years 1992-1997 using an extensive sample of the Danish population. We employ a competing risks proportional hazard model to allow for different causes of death. This method is important as some factors have unequal (and sometimes opposite) influence on the cause-specific mortality rates. We find that the often-found inverse correlation between socio-economic status and mortality is to a large degree absent among Danish women who die of cancer. In addition, for men the negative correlation between socio-economic status and mortality prevails for some diseases, but for women we find that factors such as being married, income, wealth and education are not significantly associated with higher life expectancy. Marriage increases the likelihood of dying from cancer for women, early retirement prolongs survival for men, and homeownership increases life expectancy in general. PMID:15722260
NASA Astrophysics Data System (ADS)
Zhang, Chao; Wang, Shaoping; Bai, Guanghan
2014-02-01
Solid lubricated bearings are important mechanical components in space applications, and accelerated life tests (ALTs) of them are widely conducted. An ALT model is needed to estimate the lifetime of solid lubricated bearings from ALT data. Previous accelerated life test models for solid lubricated bearings have been mainly statistical, whereas physical models imply an understanding of the failure mechanism and are preferred whenever possible. This paper proposes a physical model, called the copula-dependent proportional hazards model. A solid lubricated bearing is considered as a system consisting of several dependent items, and the Clayton copula function is used to describe the dependence. A proportional hazards effect is also incorporated into the model. An ALT of a solid lubricated bearing was carried out, and the results show that the model is effective.
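The Clayton copula used to couple the dependent items has a simple closed form. A minimal sketch, assuming the marginal survival probabilities are already known (the values below are illustrative, not from the paper's ALT data):

```python
def clayton_copula(u, v, theta):
    """Clayton copula C(u, v) = (u**-theta + v**-theta - 1)**(-1/theta),
    theta > 0; larger theta gives stronger positive dependence."""
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def joint_survival(s1, s2, theta):
    """Joint survival probability of two dependent items with marginal
    survival probabilities s1 and s2, coupled by a Clayton copula."""
    return clayton_copula(s1, s2, theta)

# Two bearing elements, each surviving a mission with probability 0.9:
print(round(joint_survival(0.9, 0.9, theta=2.0), 4))  # 0.825 (dependent)
print(round(0.9 * 0.9, 4))                            # 0.81 (independent)
```

Positive dependence raises the joint survival probability above the independence value, which is exactly the effect the copula term contributes to the system model.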
Kim, Haesook Teresa; Gray, Robert
2013-01-01
BACKGROUND Cure rate models have been extensively studied and widely used for time-to-event data in cancer clinical trials. PURPOSE Although cure rate models based on the generalized exponential distribution have been developed, they have not been used in the design of randomized cancer clinical trials, which instead have relied exclusively on the two-component exponential cure rate model with a proportional hazards alternative. In some studies, the efficacy of the experimental treatment is expected to emerge some time after randomization. Since this does not conform to a proportional hazards alternative, such studies require a more flexible model to describe the alternative hypothesis. METHODS In this article, we report the study design of a phase III clinical trial of acute myeloid leukemia using a three-component exponential cure rate model to reflect the alternative hypothesis. A newly developed power calculation program that does not require the proportional hazards assumption was used. RESULTS Using a custom-made three-component cure rate model as the alternative hypothesis, the proposed sample size was 409, compared with a sample size of 209 under the assumption of an exponential distribution, and 228 under the proportional hazards alternative. A simulation study was performed to show the degree of power loss when the alternative hypothesis is not appropriately specified. LIMITATIONS The power calculation program used in this study is for a single analysis and does not account for group sequential tests in phase III trials. However, the loss in power is small and was handled by inflating the sample size by 5%. CONCLUSION Misspecification of the alternative hypothesis can result in a seriously underpowered study. We report examples of clinical trials that required a custom-made alternative hypothesis to reflect a later indication of experimental treatment efficacy. The proposed three-component cure rate model could be very useful for specifying non-proportional
Testing Goodness-of-Fit for the Proportional Hazards Model based on Nested Case-Control Data
Lu, Wenbin; Liu, Mengling; Chen, Yi-Hau
2014-01-01
Nested case-control sampling is a popular design for large epidemiological cohort studies due to its cost effectiveness. A number of methods have been developed for the estimation of the proportional hazards model with nested case-control data; however, the evaluation of modeling assumptions has received less attention. In this paper, we propose a class of goodness-of-fit test statistics for testing the proportional hazards assumption based on nested case-control data. The test statistics are constructed based on asymptotically mean-zero processes derived from Samuelsen's maximum pseudo-likelihood estimation method. In addition, we develop an innovative resampling scheme to approximate the asymptotic distribution of the test statistics while accounting for the dependent sampling scheme of nested case-control design. Numerical studies are conducted to evaluate the performance of our proposed approach, and an application to the Wilms' Tumor Study is given to illustrate the methodology. PMID:25298193
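A common building block for proportional-hazards diagnostics of this kind, in the simpler full-cohort setting rather than the nested case-control setting addressed by the paper, is the Schoenfeld residual. A minimal single-covariate sketch:

```python
from math import exp

def schoenfeld_residuals(times, events, x, beta):
    """Schoenfeld residual at each event time: the failing subject's
    covariate minus the risk-set-weighted average covariate. A trend of
    these residuals over time is evidence against proportional hazards."""
    out = []
    for i, d_i in enumerate(events):
        if not d_i:
            continue  # residuals are defined only at observed event times
        risk = [(x_j, exp(beta * x_j))
                for t_j, x_j in zip(times, x) if t_j >= times[i]]
        total = sum(w for _, w in risk)
        xbar = sum(x_j * w for x_j, w in risk) / total
        out.append((times[i], x[i] - xbar))
    return out

times  = [1.0, 2.0, 3.0]
events = [1, 1, 1]
x      = [1.0, 0.0, 1.0]
for t, r in schoenfeld_residuals(times, events, x, beta=0.0):
    print(t, round(r, 4))  # residuals 1/3, -1/2, 0 at times 1, 2, 3
```

The nested case-control analogue replaces the full risk set with the sampled risk set, weighted as in Samuelsen's pseudo-likelihood, which is what the paper's test statistics build on.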
Li, Shuli; Gray, Robert J
2016-09-01
We consider methods for estimating the treatment effect and/or the covariate by treatment interaction effect in a randomized clinical trial under noncompliance with time-to-event outcome. As in Cuzick et al. (2007), assuming that the patient population consists of three (possibly latent) subgroups based on treatment preference: the ambivalent group, the insisters, and the refusers, we estimate the effects among the ambivalent group. The parameters have causal interpretations under standard assumptions. The article contains two main contributions. First, we propose a weighted per-protocol (Wtd PP) estimator through incorporating time-varying weights in a proportional hazards model. In the second part of the article, under the model considered in Cuzick et al. (2007), we propose an EM algorithm to maximize a full likelihood (FL) as well as the pseudo likelihood (PL) considered in Cuzick et al. (2007). The E step of the algorithm involves computing the conditional expectation of a linear function of the latent membership, and the main advantage of the EM algorithm is that the risk parameters can be updated by fitting a weighted Cox model using standard software and the baseline hazard can be updated using closed-form solutions. Simulations show that the EM algorithm is computationally much more efficient than directly maximizing the observed likelihood. The main advantage of the Wtd PP approach is that it is more robust to model misspecifications among the insisters and refusers since the outcome model does not impose distributional assumptions among these two groups. PMID:26799700
Zavadilová, L; Němcová, E; Stípková, M
2011-08-01
Relationships between conformation traits and functional longevity in Holstein cows were evaluated using survival analysis. Functional longevity was defined as the number of days between the first calving and culling; that is, length of productive life. The data set consisted of 116,369 Holstein cows that first calved from 2003 to 2008. All cows used in the analysis were scored for conformation between d 30 and d 210 of their first lactation. The data included 48% censored records. Analyses were done separately for 20 linear descriptive type traits, 6 composite traits, and height at sacrum measured in centimeters. Cox proportional hazards models were fitted to analyze the data. The hazard function was described as the product of a baseline hazard function and the time-independent effects of age at first calving and sire (random), and the time-dependent effects of stage of lactation and lactation number, herd, year and season, herd size, and 305-d milk production. The strongest relationship between a composite trait and functional longevity was for dairy form, followed by udder and final score. Among the descriptive type traits, the strongest relationships with longevity were found for body condition score, angularity, traits related to udder attachment, and udder depth. Foot and leg traits showed a substantially lower effect on functional longevity, and the effect of foot angle was minimal. Functional longevity declined with decreased body condition score of cows. Cows with deep udders had significantly lower functional survival compared with cows with shallow udders. In addition, weak central ligament was associated with significant reduction of cow longevity. For dairy form and angularity, cows classified as very good were the worst with respect to longevity, whereas cows classified as poor were the best. An intermediate optimum was evident for rear legs rear view and rear legs set (side view), whereas cows with sickled legs had lower longevity than cows with straighter
Casellas, J
2016-03-01
Age at first lambing (AFL) plays a key role in the reproductive performance of sheep flocks, although there are no genetic selection programs accounting for this trait in the sheep industry. This could be due to the non-Gaussian distribution pattern of AFL data, which must be properly accounted for by the analytical model. In this manuscript, two different parameterizations were implemented to analyze AFL in the Ripollesa sheep breed, that is, the skew-Gaussian mixed linear model (sGML) and the piecewise Weibull proportional hazards model (PWPH). Data were available from 10 235 ewes born between 1972 and 2013 in 14 purebred Ripollesa flocks located in the north-east region of Spain. On average, ewes gave their first lambing shortly after their first year and a half of life (590.9 days), and within-flock averages ranged between 523.4 days and 696.6 days. Model fit was compared using the deviance information criterion (DIC; the smaller the DIC statistic, the better the model fit). Model sGML was clearly penalized (DIC=200 059), whereas model PWPH provided smaller DIC estimates and reached the minimum DIC when one cut point was added to the initial Weibull model (DIC=132 545). The pure Weibull baseline and parameterizations with two or more cut points were discarded due to larger DIC estimates (>134 200). The only systematic effect influencing AFL was the season of birth, where summer- and fall-born ewes showed a remarkable shortening of their AFL, whereas neither birth type nor birth weight had a relevant impact on this reproductive trait. On the other hand, heritability on the original scale derived from model PWPH was high, with a model estimate placed at 0.114 and its highest posterior density region ranging from 0.079 to 0.143. In conclusion, Gaussian-related mixed linear models should be avoided when analyzing AFL, whereas model PWPH must be viewed as a better alternative with superior goodness of fit; moreover, the additive genetic background underlying this
Gilbert, Peter B.; Sun, Yanqing
2014-01-01
This article develops hypothesis testing procedures for the stratified mark-specific proportional hazards model in the presence of missing marks. The motivating application is preventive HIV vaccine efficacy trials, where the mark is the genetic distance of an infecting HIV sequence to an HIV sequence represented inside the vaccine. The test statistics are constructed based on two-stage efficient estimators, which utilize auxiliary predictors of the missing marks. The asymptotic properties and finite-sample performances of the testing procedures are investigated, demonstrating double-robustness and effectiveness of the predictive auxiliaries to recover efficiency. The methods are applied to the RV144 vaccine trial. PMID:25641990
Meadows, Cheyney; Rajala-Schultz, Päivi J; Frazer, Grant S; Meiring, Richard W; Hoblet, Kent H
2006-12-18
An observational study was conducted to assess the impact of a contract breeding program on the reproductive performance in a selected group of Ohio dairies using event-time analysis. The contract breeding program was offered by a breeding co-operative and featured tail chalking and daily evaluation of cows for insemination by co-operative technicians. Dairy employees no longer handled estrus detection activities. Between early 2002 and mid-2004, test-day records related to production and reproduction were obtained for 16,453 lactations representing 11,398 cows in a non-random sample of 31 dairies identified as well-managed client herds of the breeding co-operative. Of the 31 herds, 15 were using the contract breeding at the start of the data acquisition period, having started in the previous 2 years. The remaining 16 herds managed their own breeding program and used the co-operative for semen purchase. Cox proportional hazards modeling techniques were used to estimate the association of the contract breeding, as well as the effect of other significant predictors, with the hazard of pregnancy. Two separate Cox models were developed and compared: one that only considered fixed covariates and a second that included both fixed and time-varying covariates. Estimates of effects were expressed as the hazard ratio (HR) for pregnancy. Results of the fixed covariates model indicated that, controlling for breed, herd size, use of ovulation synchronization protocols in the herd, whether somatic cell score exceeded 4.5 prior to pregnancy or censoring, parity, calving season, and maximum test-day milk prior to pregnancy or censoring, the contract breeding program was associated with an increased hazard of pregnancy (HR=1.315; 95% CI 1.261-1.371). The results of the time-varying covariates model, which controlled for breed, herd size, use of ovulation synchronization protocols, somatic cell score above 4.5, parity, calving season, and testing season also found that the…
Chi, Peter; Aras, Radha; Martin, Katie; Favero, Carlita
2016-05-15
Fetal Alcohol Spectrum Disorders (FASD) collectively describes the constellation of effects resulting from human alcohol consumption during pregnancy. Even with public awareness, the incidence of FASD is estimated to be upwards of 5% in the general population and is becoming a global health problem. The physical, cognitive, and behavioral impairments of FASD are recapitulated in animal models. Recently rodent models utilizing voluntary drinking paradigms have been developed that accurately reflect moderate consumption, which makes up the majority of FASD cases. The range in severity of FASD characteristics reflects the frequency, dose, developmental timing, and individual susceptibility to alcohol exposure. As most rodent models of FASD use C57BL/6 mice, there is a need to expand the stocks of mice studied in order to more fully understand the complex neurobiology of this disorder. To that end, we allowed pregnant Swiss Webster mice to voluntarily drink ethanol via the drinking in the dark (DID) paradigm throughout their gestation period. Ethanol exposure did not alter gestational outcomes as determined by no significant differences in maternal weight gain, maternal liquid consumption, litter size, or pup weight at birth or weaning. Despite seemingly normal gestation, ethanol-exposed offspring exhibit significantly altered timing to achieve developmental milestones (surface righting, cliff aversion, and open field traversal), as analyzed through mixed-effects Cox proportional hazards models. These results confirm Swiss Webster mice as a viable option to study the incidence and causes of ethanol-induced neurobehavioral alterations during development. Future studies in our laboratory will investigate the brain regions and molecules responsible for these behavioral changes. PMID:26765502
Vatcheva, KP; Lee, M; McCormick, JB; Rahbar, MH
2016-01-01
Objective: To demonstrate the adverse impact of ignoring statistical interactions in regression models used in epidemiologic studies. Study design and setting: Based on different scenarios that involved known values for the coefficient of the interaction term in Cox regression models, we generated 1000 samples of size 600 each. The simulated samples and a real-life data set from the Cameron County Hispanic Cohort were used to evaluate the effect of ignoring statistical interactions in these models. Results: Compared to correctly specified Cox regression models with interaction terms, misspecified models without interaction terms resulted in up to 8.95-fold bias in estimated regression coefficients. In contrast, when data were generated from a perfectly additive Cox proportional hazards regression model, the inclusion of the interaction between the two covariates resulted in only 2% estimated bias in the main-effect regression coefficients and did not alter the main finding of no significant interaction. Conclusions: When the effects are synergistic, the failure to account for an interaction effect could lead to bias and misinterpretation of the results, and in some instances to incorrect policy decisions. Best practices in regression analysis must include identification of interactions, including for analysis of data from epidemiologic studies.
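The bias mechanism reported above can be reproduced in miniature. The sketch below is not the authors' simulation: it uses a fully parametric exponential proportional hazards model with no censoring (so the fit is a few lines of scipy) rather than Cox partial likelihood, and all coefficient values are invented. Omitting a true positive interaction inflates the main-effect estimate:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
n = 4000
x1 = rng.integers(0, 2, n)
x2 = rng.integers(0, 2, n)
b0, b1, b2, b3 = -1.0, 0.5, 0.3, 1.0            # b3 is the true interaction
eta = b0 + b1 * x1 + b2 * x2 + b3 * x1 * x2
t = rng.exponential(1.0 / np.exp(eta))          # exponential survival times

def neg_loglik(beta, X):
    # exponential PH log-likelihood with all events observed (no censoring)
    lin = X @ beta
    return -(lin - np.exp(lin) * t).sum()

X_full = np.column_stack([np.ones(n), x1, x2, x1 * x2])
X_red = X_full[:, :3]                           # interaction omitted
fit_full = minimize(neg_loglik, np.zeros(4), args=(X_full,), method="BFGS")
fit_red = minimize(neg_loglik, np.zeros(3), args=(X_red,), method="BFGS")
bias_x1 = fit_red.x[1] - b1                     # roughly 0.5 in this balanced design
```

The full model recovers b1 and b3, while the reduced model's x1 coefficient absorbs much of the omitted interaction, mirroring the multi-fold bias the simulation study reports.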
Tsai, Chen-An; Lee, Kuan-Ting; Liu, Jen-pei
2016-01-01
A key feature of precision medicine is that it takes individual variability at the genetic or molecular level into account in determining the best treatment for patients diagnosed with diseases detected by recently developed novel biotechnologies. The enrichment design is an efficient design that enrolls only the patients testing positive for specific molecular targets and randomly assigns them to the targeted treatment or the concurrent control. However, no diagnostic device detects molecular targets with perfect accuracy and precision. In particular, the positive predictive value (PPV) can be quite low for rare diseases with low prevalence. Under the enrichment design, some patients testing positive for specific molecular targets may not actually have them, so the efficacy of the targeted therapy may be underestimated in the patients that truly do have the molecular targets. To address the loss of efficiency due to misclassification error, we apply the discrete mixture modeling for time-to-event data proposed by Eng and Hanlon [8] to develop an inferential procedure, based on the Cox proportional hazards model, for the effect of the targeted treatment in the true-positive patients with the molecular targets. Our proposed procedure incorporates both the inaccuracy of diagnostic devices and the uncertainty of estimated accuracy measures. We employed the expectation-maximization algorithm in conjunction with the bootstrap technique for estimation of the hazard ratio and its estimated variance. We report the results of simulation studies which empirically investigated the performance of the proposed method. Our proposed method is illustrated by a numerical example. PMID:27120450
On graphical tests for proportionality of hazards in two samples.
Sahoo, Shyamsundar; Sengupta, Debasis
2016-03-15
In this paper, we present a class of graphical tests of the proportional hazards hypothesis for two-sample censored survival data. The proposed tests are improvements over some existing tests based on asymptotic confidence bands of certain functions of the estimated cumulative hazard functions. The new methods are based on the comparison of unrestricted estimates of the said functions and their restricted versions under the hypothesis. They combine the rigour of analytical tests with the descriptive value of plots. Monte Carlo simulations suggest that the proposed asymptotic procedures have reasonable small-sample properties. Their power is much higher than that of existing graphical tests and comparable with that of existing analytical tests. The method is then illustrated through the analysis of a data set on bone marrow transplantation for leukemia patients. PMID:26522814
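A minimal version of the graphical diagnostic these tests formalize: under proportional hazards, the two log cumulative hazard curves differ by a constant vertical shift. The Nelson-Aalen sketch below ignores ties and censoring and uses simulated exponential data with a true hazard ratio of 2 (all values illustrative):

```python
import numpy as np

def nelson_aalen(times):
    # Nelson-Aalen cumulative hazard for fully observed (uncensored) data;
    # ties are ignored in this sketch
    times = np.sort(np.asarray(times))
    n = len(times)
    increments = 1.0 / (n - np.arange(n))          # 1 / number at risk
    return times, np.cumsum(increments)

rng = np.random.default_rng(0)
t1, H1 = nelson_aalen(rng.exponential(1.0, 500))   # hazard 1
t2, H2 = nelson_aalen(rng.exponential(0.5, 500))   # hazard 2 (true HR = 2)

# Under proportional hazards, log H2(t) - log H1(t) is roughly constant and
# equals the log hazard ratio; check the cumulative-hazard ratio at t = 0.5.
ratio = np.interp(0.5, t2, H2) / np.interp(0.5, t1, H1)
```

Plotting log H1 and log H2 against log t and looking for a constant gap is the classic graphical check; the paper's contribution is to attach formal tests to such comparisons.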
Charvat, Hadrien; Remontet, Laurent; Bossard, Nadine; Roche, Laurent; Dejardin, Olivier; Rachet, Bernard; Launoy, Guy; Belot, Aurélien
2016-08-15
The excess hazard regression model is an approach developed for the analysis of cancer registry data to estimate net survival, that is, the survival of cancer patients that would be observed if cancer was the only cause of death. Cancer registry data typically possess a hierarchical structure: individuals from the same geographical unit share common characteristics such as proximity to a large hospital that may influence access to and quality of health care, so that their survival times might be correlated. As a consequence, correct statistical inference regarding the estimation of net survival and the effect of covariates should take this hierarchical structure into account. It becomes particularly important as many studies in cancer epidemiology aim at studying the effect on the excess mortality hazard of variables, such as deprivation indexes, often available only at the ecological level rather than at the individual level. We developed here an approach to fit a flexible excess hazard model including a random effect to describe the unobserved heterogeneity existing between different clusters of individuals, and with the possibility to estimate non-linear and time-dependent effects of covariates. We demonstrated the overall good performance of the proposed approach in a simulation study that assessed the impact on parameter estimates of the number of clusters, their size and their level of unbalance. We then used this multilevel model to describe the effect of a deprivation index defined at the geographical level on the excess mortality hazard of patients diagnosed with cancer of the oral cavity. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26924122
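The decomposition behind excess hazard models can be illustrated numerically: the observed hazard is the expected population hazard (from life tables) plus the excess hazard attributable to cancer, and net survival is built from the excess part alone. All rates below are invented:

```python
import numpy as np

t = np.linspace(0.0, 10.0, 1001)       # years since diagnosis
dt = t[1] - t[0]
h_pop = np.full_like(t, 0.01)          # expected mortality hazard (life tables)
h_exc = 0.05 * np.exp(-0.3 * t)        # excess (cancer-related) hazard, declining

H_exc = np.cumsum(h_exc) * dt          # crude Riemann cumulative hazards
H_obs = np.cumsum(h_pop + h_exc) * dt
net_surv = np.exp(-H_exc)              # survival if cancer were the only cause
overall_surv = np.exp(-H_obs)
```

Net survival always sits above overall survival because it strips out expected background mortality; the paper's model additionally puts covariates and a cluster-level random effect on h_exc.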
Estimating proportions of materials using mixture models
NASA Technical Reports Server (NTRS)
Heydorn, R. P.; Basu, R.
1983-01-01
This report addresses an approach to proportion estimation based on the notion of a mixture model; appropriate parametric forms for a mixture model that appears to fit observed remotely sensed data; methods for estimating the parameters in these models; methods for labelling and proportion determination from the mixture model; and methods which use the mixture model estimates as auxiliary variable values in some proportion estimation schemes.
Cologne, John; Hsu, Wan-Ling; Abbott, Robert D; Ohishi, Waka; Grant, Eric J; Fujiwara, Saeko; Cullings, Harry M
2012-07-01
In epidemiologic cohort studies of chronic diseases, such as heart disease or cancer, confounding by age can bias the estimated effects of risk factors under study. With Cox proportional-hazards regression modeling in such studies, it would generally be recommended that chronological age be handled nonparametrically as the primary time scale. However, studies involving baseline measurements of biomarkers or other factors frequently use follow-up time since measurement as the primary time scale, with no explicit justification. The effects of age are adjusted for by modeling age at entry as a parametric covariate. Parametric adjustment raises the question of model adequacy, in that it assumes a known functional relationship between age and disease, whereas using age as the primary time scale does not. We illustrate this graphically and show intuitively why the parametric approach to age adjustment using follow-up time as the primary time scale provides a poor approximation to age-specific incidence. Adequate parametric adjustment for age could require extensive modeling, which is wasteful, given the simplicity of using age as the primary time scale. Furthermore, the underlying hazard with follow-up time based on arbitrary timing of study initiation may have no inherent meaning in terms of risk. Given the potential for biased risk estimates, age should be considered as the preferred time scale for proportional-hazards regression with epidemiologic follow-up data when confounding by age is a concern. PMID:22517300
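The recommendation above amounts to using age as the time axis with left truncation: a subject contributes to the risk set only between age at entry and age at exit, so age is adjusted for nonparametrically by the risk-set construction itself. A minimal sketch with made-up ages:

```python
# Each subject is at risk on the age scale from entry age to exit age
# (left truncation); ages and outcomes below are invented.
entry_age = [50.0, 55.0, 60.0, 48.0]
exit_age = [62.0, 58.0, 70.0, 53.0]
event = [True, False, True, True]   # whether exit age is an event (else censored)

def risk_set_at(age):
    # subjects who have entered but not yet exited at the given age
    return [i for i, (a, b) in enumerate(zip(entry_age, exit_age)) if a <= age < b]

rs = risk_set_at(56.0)
```

Each event's partial-likelihood contribution then compares the subject failing at that age against exactly these age-matched risk sets, which is why no parametric age adjustment is needed.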
Agogo, George O; van der Voet, Hilko; Van't Veer, Pieter; van Eeuwijk, Fred A; Boshuizen, Hendriek C
2016-07-01
Dietary questionnaires are prone to measurement error, which biases the perceived association between dietary intake and risk of disease. Short-term measurements are required to adjust for this bias in the association. For foods that are not consumed daily, the short-term measurements are often characterized by excess zeroes. Via a simulation study, the performance of a two-part calibration model that was developed for a single-replicate study design was assessed by mimicking leafy vegetable intake reports from the multicenter European Prospective Investigation into Cancer and Nutrition (EPIC) study. In part I of the fitted two-part calibration model, a logistic distribution was assumed; in part II, a gamma distribution was assumed. The model was assessed with respect to the magnitude of the correlation between the consumption probability and the consumed amount (hereafter, the cross-part correlation), the number and form of covariates in the calibration model, the percentage of zero response values, and the magnitude of the measurement error in the dietary intake. From the simulation study results, transforming the dietary variable in the regression calibration to an appropriate scale was found to be the most important factor for the model performance. Reducing the number of covariates in the model could be beneficial, but was not critical in large-sample studies. The performance was remarkably robust when fitting a one-part rather than a two-part model. The model performance was minimally affected by the cross-part correlation. PMID:27003183
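The two-part structure (a logistic part for whether a food is consumed at all, a gamma part for how much, given consumption) can be sketched by simulating zero-inflated intake data; every parameter value below is invented:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10000
p_consume = 0.3                  # part I: probability of any intake (logistic part)
shape, scale = 2.0, 50.0         # part II: gamma intake amount, given consumption

consumed = rng.random(n) < p_consume
amount = np.where(consumed, rng.gamma(shape, scale, n), 0.0)

frac_zero = float(np.mean(amount == 0.0))   # the excess zeroes, about 70% here
mean_intake = float(amount.mean())          # about p_consume * shape * scale = 30
```

A two-part calibration model fits the zero/non-zero indicator and the positive amounts separately (possibly with correlated random effects, the cross-part correlation studied in the paper), rather than forcing one distribution to cover both.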
Lachin, John M.
2013-01-01
General expressions are described for the evaluation of sample size and power for the K group Mantel-logrank test or the Cox PH model score test. Under an exponential model, the method of Lachin and Foulkes [1] for the 2 group case is extended to the K ≥ 2 group case using the non-centrality parameter of the K – 1 df chi-square test. Similar results are also shown to apply to the K group score test in a Cox PH model. Lachin and Foulkes [1] employed a truncated exponential distribution to provide for a non-linear rate of enrollment. Expressions for the mean time of enrollment and the expected follow-up time in the presence of exponential losses to follow-up are presented. When used with the expression for the non-centrality parameter for the test, equations are derived for the evaluation of sample size and power under specific designs with R years of recruitment and T years total duration. Sample size and power are also described for a stratified-adjusted K group test and for the assessment of a group by stratum interaction. Similar computations are described for a stratified-adjusted analysis of a quantitative covariate and a test of a stratum by covariate interaction in the Cox PH model. PMID:23670965
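For orientation, the familiar two-group special case of such calculations is Schoenfeld's approximation for the number of events a logrank test needs; this is a standard textbook formula, not the K-group extension derived in the paper:

```python
import numpy as np
from scipy.stats import norm

def required_events(hr, alpha=0.05, power=0.80, p=0.5):
    # Schoenfeld's approximation: events needed by a two-sided two-group
    # logrank test to detect hazard ratio hr; p is the allocation fraction
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return (za + zb) ** 2 / (p * (1 - p) * np.log(hr) ** 2)

d = required_events(0.7)   # roughly 247 events for a hazard ratio of 0.7
```

Converting required events into required subjects then needs assumptions about accrual and follow-up, exactly the enrollment and loss-to-follow-up expressions the abstract describes.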
Crager, Michael R.; Tang, Gong
2015-01-01
We propose a method for assessing an individual patient’s risk of a future clinical event using clinical trial or cohort data and Cox proportional hazards regression, combining the information from several studies using meta-analysis techniques. The method combines patient-specific estimates of the log cumulative hazard across studies, weighting by the relative precision of the estimates, using either fixed- or random-effects meta-analysis calculations. Risk assessment can be done for any future patient using a few key summary statistics determined once and for all from each study. Generalizations of the method to logistic regression and linear models are immediate. We evaluate the methods using simulation studies and illustrate their application using real data. PMID:26664111
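The fixed-effect version of the pooling step can be sketched with inverse-variance weighting of per-study estimates; the numbers below are invented:

```python
import numpy as np

# Illustrative fixed-effect inverse-variance pooling of per-study estimates of
# a patient-specific log cumulative hazard (all values made up).
est = np.array([-1.2, -0.9, -1.5])   # per-study estimates for one patient profile
se = np.array([0.30, 0.25, 0.40])    # their standard errors

w = 1.0 / se ** 2                    # inverse-variance weights
pooled = float(np.sum(w * est) / np.sum(w))
pooled_se = float(np.sqrt(1.0 / np.sum(w)))
```

The pooled estimate is more precise than any single study's; a random-effects version would widen pooled_se by adding a between-study variance component to each weight.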
Progress in studying scintillator proportionality: Phenomenological model
Bizarri, Gregory; Cherepy, Nerine; Choong, Woon-Seng; Hull, Giulia; Moses, William; Payne, Stephen; Singh, Jai; Valentine, John; Vasilev, Andrey; Williams, Richard
2009-04-30
We present a model to describe the origin of the non-proportional dependence of scintillator light yield on the energy of an ionizing particle. The non-proportionality is discussed in terms of energy relaxation channels and their linear and non-linear dependences on the deposited energy. In this approach, the scintillation response is described as a function of the deposited energy and the kinetic rates of each relaxation channel. This mathematical framework allows both a qualitative interpretation and a quantitative fitting representation of the scintillation non-proportionality response as a function of kinetic rates. The method was successfully applied to thallium-doped sodium iodide measured with SLYNCI, a new facility using the Compton coincidence technique. Finally, attention is given to the physical meaning of the dominant relaxation channels, and to the potential causes responsible for the scintillation non-proportionality. We find that thallium-doped sodium iodide behaves as if non-proportionality is due to competition between radiative recombination and non-radiative Auger processes.
NASA CONNECT: Proportionality: Modeling the Future
NASA Technical Reports Server (NTRS)
2000-01-01
'Proportionality: Modeling the Future' is the sixth of seven programs in the 1999-2000 NASA CONNECT series. Produced by NASA Langley Research Center's Office of Education, NASA CONNECT is an award-winning series of instructional programs designed to enhance the teaching of math, science and technology concepts in grades 5-8. NASA CONNECT establishes the 'connection' between the mathematics, science, and technology concepts taught in the classroom and NASA research. Each program in the series supports the national mathematics, science, and technology standards; includes a resource-rich teacher guide; and uses a classroom experiment and web-based activity to complement and enhance the math, science, and technology concepts presented in the program. NASA CONNECT is FREE and the programs in the series are in the public domain. Visit our web site and register. http://connect.larc.nasa.gov In 'Proportionality: Modeling the Future', students examine how patterns, measurement, ratios, and proportions are used in the research, development, and production of airplanes.
Boron-10 Lined Proportional Counter Model Validation
Lintereur, Azaree T.; Siciliano, Edward R.; Kouzes, Richard T.
2012-06-30
The Department of Energy Office of Nuclear Safeguards (NA-241) is supporting the project “Coincidence Counting With Boron-Based Alternative Neutron Detection Technology” at Pacific Northwest National Laboratory (PNNL) for the development of an alternative neutron coincidence counter. The goal of this project is to design, build and demonstrate a boron-lined proportional tube-based alternative system in the configuration of a coincidence counter. This report discusses the validation studies performed to establish the degree of accuracy of the computer modeling methods currently used to simulate the response of boron-lined tubes. This is the precursor to developing models for the uranium neutron coincidence collar under Task 2 of this project.
Validation of a heteroscedastic hazards regression model.
Wu, Hong-Dar Isaac; Hsieh, Fushing; Chen, Chen-Hsin
2002-03-01
A Cox-type regression model accommodating heteroscedasticity, with a power factor of the baseline cumulative hazard, is investigated for analyzing data with crossing hazards behavior. Since the approach of partial likelihood cannot eliminate the baseline hazard, an overidentified estimating equation (OEE) approach is introduced in the estimation procedure. Its by-product, a model checking statistic, is presented to test for the overall adequacy of the heteroscedastic model. Further, under the heteroscedastic model setting, we propose two statistics to test the proportional hazards assumption. Implementation of this model is illustrated in a data analysis of a cancer clinical trial. PMID:11878222
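Crossing hazards, the motivation for the heteroscedastic model, arise for example between two Weibull hazards with different shape parameters; this is a generic illustration of the phenomenon, not the paper's model:

```python
import numpy as np

t = np.linspace(0.1, 5.0, 500)
h1 = 1.0 * 0.8 * t ** (0.8 - 1.0)   # Weibull hazard lam*rho*t**(rho-1), shape 0.8 (decreasing)
h2 = 0.5 * 1.6 * t ** (1.6 - 1.0)   # Weibull hazard, shape 1.6 (increasing)

ratio = h2 / h1                     # equals t**0.8 here: not constant, so PH fails
crossing = bool(np.any(ratio < 1.0) and np.any(ratio > 1.0))
```

Because the hazard ratio moves through 1, a standard proportional hazards fit would average away the group difference; models with a power factor on the baseline cumulative hazard are one way to capture such behavior.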
ERIC Educational Resources Information Center
Fleener, M. Jayne
Current research and learning theory suggest that a hierarchy of proportional reasoning exists that can be tested. Using G. Vergnaud's four complexity variables (structure, content, numerical characteristics, and presentation) and T. E. Kieren's model of rational number knowledge building, an epistemic model of proportional reasoning was…
Tosteson, Tor D.; Morden, Nancy E.; Stukel, Therese A.; O'Malley, A. James
2014-01-01
The estimation of treatment effects is one of the primary goals of statistics in medicine. Estimation based on observational studies is subject to confounding. Statistical methods for controlling bias due to confounding include regression adjustment, propensity scores and inverse probability weighted estimators. These methods require that all confounders are recorded in the data. The method of instrumental variables (IVs) can eliminate bias in observational studies even in the absence of information on confounders. We propose a method for integrating IVs within the framework of Cox's proportional hazards model and demonstrate the conditions under which it recovers the causal effect of treatment. The methodology is based on the approximate orthogonality of an instrument with unobserved confounders among those at risk. We derive an estimator as the solution to an estimating equation that resembles the score equation of the partial likelihood in much the same way as the traditional IV estimator resembles the normal equations. To justify this IV estimator for a Cox model we perform simulations to evaluate its operating characteristics. Finally, we apply the estimator to an observational study of the effect of coronary catheterization on survival. PMID:25506259
Identifying and modeling safety hazards
DANIELS,JESSE; BAHILL,TERRY; WERNER,PAUL W.
2000-03-29
The hazard model described in this paper is designed to accept data over the Internet from distributed databases. A hazard object template is used to ensure that all necessary descriptors are collected for each object. Three methods for combining the data are compared and contrasted. Three methods are used for handling the three types of interactions between the hazard objects.
Populational Growth Models Proportional to Beta Densities with Allee Effect
NASA Astrophysics Data System (ADS)
Aleixo, Sandra M.; Rocha, J. Leonel; Pestana, Dinis D.
2009-05-01
We consider population growth models with Allee effect, proportional to beta densities with shape parameters p and 2, where the dynamical complexity is related with the Malthusian parameter r. For p>2, these models exhibit a population dynamics with natural Allee effect. However, in the case of 1<p≤2, these models do not include this effect. In order to enforce it, we present some alternative models and investigate their dynamics, presenting some important results.
2013-01-01
Background: In longitudinal studies where subjects experience recurrent incidents over a period of time, such as respiratory infections, fever or diarrhea, statistical methods are required to take into account the within-subject correlation. Methods: For repeated events data with censored failure, the independent increment (AG), marginal (WLW) and conditional (PWP) models are three multiple failure models that generalize Cox's proportional hazards model. In this paper, we review the efficiency, accuracy and robustness of all three models under simulated scenarios with varying degrees of within-subject correlation, censoring levels, maximum number of possible recurrences and sample size. We also study the methods' performance on a real dataset from a cohort study with bronchial obstruction. Results: We find substantial differences between methods, and no single method is optimal. AG and PWP seem preferable to WLW for low correlation levels, but the situation reverses for high correlations. Conclusions: All methods are stable under censoring, worsen with increasing recurrence levels, and share a bias problem which, among other consequences, makes asymptotic normal confidence intervals not fully reliable, although they are well developed theoretically. PMID:23883000
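The three models differ mainly in how the data are laid out and which baseline hazard applies: in counting-process (start-stop) form, AG pools all intervals under one baseline while PWP additionally stratifies on the event number. A minimal layout sketch with made-up data:

```python
# Hedged sketch of the start-stop (counting process) layout for recurrent
# events. The 'stratum' column (the event number) is what PWP conditions on;
# AG would ignore it. Data below are invented.
subject = {"id": 1, "event_times": [3.0, 7.0], "followup": 10.0}

def to_counting_process(subj):
    rows, start = [], 0.0
    for k, t in enumerate(subj["event_times"], start=1):
        rows.append({"id": subj["id"], "start": start, "stop": t,
                     "event": 1, "stratum": k})
        start = t
    if start < subj["followup"]:   # trailing censored interval
        rows.append({"id": subj["id"], "start": start, "stop": subj["followup"],
                     "event": 0, "stratum": len(subj["event_times"]) + 1})
    return rows

rows = to_counting_process(subject)
```

WLW instead treats each event number as a separate marginal time-from-origin outcome, which is why its behavior diverges from AG and PWP as the within-subject correlation changes.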
NASA Technical Reports Server (NTRS)
Kattan, Michael W.; Hess, Kenneth R.
1998-01-01
New computationally intensive tools for medical survival analyses include recursive partitioning (also called CART) and artificial neural networks. A challenge that remains is to better understand the behavior of these techniques in an effort to know when they will be effective tools. Theoretically they may overcome limitations of the traditional multivariable survival technique, the Cox proportional hazards regression model. Experiments were designed to test whether the new tools would, in practice, overcome these limitations. Two datasets in which theory suggests CART and the neural network should outperform the Cox model were selected. The first was a published leukemia dataset manipulated to have a strong interaction that CART should detect. The second was a published cirrhosis dataset with pronounced nonlinear effects that a neural network should fit. Repeated sampling of 50 training and testing subsets was applied to each technique. The concordance index C was calculated as a measure of predictive accuracy by each technique on the testing dataset. In the interaction dataset, CART outperformed Cox (P less than 0.05) with a C improvement of 0.1 (95% CI, 0.08 to 0.12). In the nonlinear dataset, the neural network outperformed the Cox model (P less than 0.05), but by a very slight amount (0.015). As predicted by theory, CART and the neural network were able to overcome limitations of the Cox model. Experiments like these are important to increase our understanding of when one of these new techniques will outperform the standard Cox model. Further research is necessary to predict which technique will do best a priori and to assess the magnitude of superiority.
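The concordance index C used as the accuracy measure can be computed directly; the sketch below implements Harrell's usual pairwise definition (a higher risk score should go with earlier failure) on a tiny invented dataset:

```python
def concordance_index(times, events, risk_scores):
    # Harrell's C: among usable pairs (one subject has an observed event
    # before the other's time), the fraction where the earlier failure has
    # the higher risk score; score ties count 0.5
    num, den = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if i == j or not events[i] or not times[i] < times[j]:
                continue
            den += 1
            if risk_scores[i] > risk_scores[j]:
                num += 1.0
            elif risk_scores[i] == risk_scores[j]:
                num += 0.5
    return num / den

c = concordance_index([2.0, 4.0, 6.0, 5.0],   # follow-up times
                      [1, 1, 0, 1],           # 1 = event, 0 = censored
                      [0.9, 0.5, 0.1, 0.6])   # model risk scores
```

C = 0.5 corresponds to random prediction and C = 1 to perfect ranking, which is what makes improvements of 0.1 (versus 0.015) easy to interpret.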
Computer Model Locates Environmental Hazards
NASA Technical Reports Server (NTRS)
2008-01-01
Catherine Huybrechts Burton founded San Francisco-based Endpoint Environmental (2E) LLC in 2005 while she was a student intern and project manager at Ames Research Center with NASA's DEVELOP program. The 2E team created the Tire Identification from Reflectance model, which algorithmically processes satellite images using turnkey technology to retain only the darkest parts of an image. This model allows 2E to locate piles of rubber tires, which often are stockpiled illegally and cause hazardous environmental conditions and fires.
NASA Technical Reports Server (NTRS)
Taneja, Vidya S.
1996-01-01
In this paper we develop the mathematical theory of proportional and scale change models to perform reliability analysis. The results obtained will be applied to the Reaction Control System (RCS) thruster valves on an orbiter. With the advent of extended EVAs associated with PROX OPS (ISSA & MIR), and docking, the loss of a thruster valve now takes on an expanded safety significance. Previous studies assume a homogeneous population of components with each component having the same failure rate. However, as various components experience different stresses and are exposed to different environments, their failure rates change with time. In this paper we model the reliability of thruster valves by treating them as a censored repairable system. The model for each valve will take the form of a nonhomogeneous process with an intensity function that is treated either as a proportional hazard model or as a scale change random effects hazard model. Each component has an associated z, an independent realization of the random variable Z from a distribution G(z). This unobserved quantity z can be used to describe heterogeneity systematically. For the various models, methods for estimating the model parameters using censored data will be developed. Available field data (from previously flown flights) are from non-renewable systems. The estimated failure rate using such data will need to be modified for renewable systems such as thruster valves.
Models of volcanic eruption hazards
Wohletz, K.H.
1992-01-01
Volcanic eruptions pose an ever-present but poorly constrained hazard to life and property for geothermal installations in volcanic areas. Because eruptions occur sporadically and may limit field access, quantitative and systematic field studies of eruptions are difficult to complete. Circumventing this difficulty, laboratory models and numerical simulations are pivotal in building our understanding of eruptions. For example, the results of fuel-coolant interaction experiments show that magma-water interaction controls many eruption styles. Applying these results, increasing numbers of field studies now document and interpret the role of external water in eruptions. Similarly, numerical simulations solve the fundamental physics of high-speed fluid flow and give quantitative predictions that elucidate the complexities of pyroclastic flows and surges. A primary goal of these models is to guide geologists in searching for critical field relationships and making their interpretations. Coupled with field work, modeling is beginning to allow more quantitative and predictive volcanic hazard assessments.
Unified constitutive modeling for proportional and nonproportional cyclic plasticity responses
NASA Astrophysics Data System (ADS)
Krishna, Shree
Several features of cyclic plasticity, e.g. cyclic hardening/softening, ratcheting, relaxation, and their dependence on strain range, nonproportionality of loading, time, and temperature, determine the stress-strain responses of materials under cyclic loading. Numerous efforts have been made in the past decades to characterize and model these responses. Many of these responses can be simulated reasonably by the existing constitutive models, but the same models would fail in simulating the structural responses, local stress-strain or global deformation. One of the reasons for this deficiency is that the constitutive models are not robust enough to simulate the cyclic plasticity responses when they interact with each other. This deficiency can be understood better or resolved by developing and validating constitutive models against a broad set of experimental responses with two or more of the responses interacting with each other. This dissertation develops a unified constitutive model by studying the cyclic plasticity features in an integrated manner and validating the model by simulating a broad set of proportional and nonproportional cyclic plasticity responses. The study demonstrates the drawbacks of the existing nonlinear kinematic hardening model originally developed by Chaboche and then develops and incorporates novel ideas into the model for improving its cyclic response simulations. The Chaboche model is modified by incorporating strain-range-dependent cyclic hardening/softening through the kinematic hardening rule parameters, in addition to the conventional method of using only the isotropic hardening parameters. The nonproportional loading memory parameters of Tanaka and of Benallal and Marquis are incorporated to study the influence of nonproportionality. The model is assessed by simulating hysteresis loop shape, cyclic hardening-softening, cross-effect, cyclic relaxation, subsequent cyclic softening, and finally a series of ratcheting responses under…
Lau, Bryan; Cole, Stephen R.; Gange, Stephen J.
2010-01-01
In the analysis of survival data, there are often competing events that preclude an event of interest from occurring. Regression analysis with competing risks is typically undertaken using a cause-specific proportional hazards model. However, modern alternative methods exist for the analysis of the subdistribution hazard with a corresponding subdistribution proportional hazards model. In this paper, we introduce a flexible parametric mixture model as a unifying method to obtain estimates of the cause-specific and subdistribution hazards and hazard ratio functions. We describe how these estimates can be summarized over time to give a single number that is comparable to the hazard ratio that is obtained from a corresponding cause-specific or subdistribution proportional hazards model. An application to the Women’s Interagency HIV Study is provided to investigate injection drug use and the time to either the initiation of effective antiretroviral therapy, or clinical disease progression as a competing event. PMID:21337360
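The cause-specific hazard and cumulative incidence quantities discussed above can be illustrated with a small hand computation. The sketch below uses invented toy data (it is not the paper's parametric mixture model): the cumulative incidence of the event of interest accumulates, at each event time, the overall survival just before that time multiplied by the cause-specific hazard.

```python
# Toy competing-risks data (invented): 0 = censored, 1 = event of interest,
# 2 = competing event.
times  = [2, 3, 3, 5, 7, 8, 10, 12]
causes = [1, 2, 0, 1, 2, 1, 0, 1]

# distinct event times, in increasing order
event_times = sorted({t for t, c in zip(times, causes) if c != 0})

surv = 1.0   # overall survival S(t-) just before each event time
cif1 = 0.0   # cumulative incidence function (CIF) for cause 1
for t in event_times:
    at_risk = sum(1 for u in times if u >= t)
    d1 = sum(1 for u, c in zip(times, causes) if u == t and c == 1)
    d2 = sum(1 for u, c in zip(times, causes) if u == t and c == 2)
    cif1 += surv * d1 / at_risk          # CIF gains S(t-) * cause-1 hazard
    surv *= 1 - (d1 + d2) / at_risk      # overall survival drops for any event

print(round(cif1, 4))  # 0.725
```

Unlike one minus a cause-specific Kaplan-Meier curve, this CIF correctly treats competing events as removing subjects from risk rather than censoring them.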
Spatial extended hazard model with application to prostate cancer survival.
Li, Li; Hanson, Timothy; Zhang, Jiajia
2015-06-01
This article develops a Bayesian semiparametric approach to the extended hazard model, with generalization to high-dimensional spatially grouped data. County-level spatial correlation is accommodated marginally through the normal transformation model of Li and Lin (2006, Journal of the American Statistical Association 101, 591-603), using a correlation structure implied by an intrinsic conditionally autoregressive prior. Efficient Markov chain Monte Carlo algorithms are developed, especially applicable to fitting very large, highly censored areal survival data sets. Per-variable tests for proportional hazards, accelerated failure time, and accelerated hazards are efficiently carried out with and without spatial correlation through Bayes factors. The resulting reduced, interpretable spatial models can fit significantly better than a standard additive Cox model with spatial frailties. PMID:25521422
ERIC Educational Resources Information Center
Tjoe, Hartono; de la Torre, Jimmy
2014-01-01
In this paper, we discuss the process of identifying and validating students' abilities to think proportionally. More specifically, we describe the methodology we used to identify these proportional reasoning attributes, beginning with the selection and review of relevant literature on proportional reasoning. We then continue with the…
Degrees of Freedom in Modeling: Taking Certainty out of Proportion
ERIC Educational Resources Information Center
Peled, Irit; Bassan-Cincinatus, Ronit
2005-01-01
In its empirical part this paper establishes a general weak understanding of the process of applying a mathematical model. This is also evident in the way teachers regard the application of alternative sharing in their own problem solving and in relating to children's answers. The theoretical part analyses problems that are considered as…
Coats, D.W.
1984-02-01
Lawrence Livermore National Laboratory (LLNL) has developed wind hazard models for the Office of Nuclear Safety (ONS), Department of Energy (DOE). The work is part of a three-phase effort aimed at establishing uniform building design criteria for seismic and wind hazards at DOE sites throughout the United States. In Phase 1, LLNL gathered information on the sites and their critical facilities, including nuclear reactors, fuel-reprocessing plants, high-level waste storage and treatment facilities, and special nuclear material facilities. In Phase 2, development of seismic and wind hazard models was initiated. These hazard models express the annual probability that the site will experience an earthquake or wind speed greater than some specified magnitude. This report summarizes the final wind/tornado hazard models recommended for each site and the methodology used to develop these models. 19 references, 29 figures, 9 tables.
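A hazard model of this kind, the annual probability that wind speed exceeds a threshold, can be sketched with a Type I extreme-value (Gumbel) distribution. The location and scale parameters below are invented and do not come from the LLNL study.

```python
import math

def annual_exceedance(v, mu=30.0, beta=6.0):
    """Annual probability that wind speed exceeds v (m/s), under a Gumbel
    model of annual maxima with hypothetical location mu and scale beta."""
    return 1.0 - math.exp(-math.exp(-(v - mu) / beta))

# hazard curve at a few thresholds: probability falls as the threshold rises
curve = {v: round(annual_exceedance(v), 4) for v in (35, 45, 55)}
print(curve)
```

Design criteria are then read off the curve by picking the wind speed whose annual exceedance probability matches a target (e.g., 1e-3 per year).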
ERIC Educational Resources Information Center
Fujimura, Nobuyuki
2001-01-01
One hundred forty fourth-graders were asked to solve proportion problems about juice-mixing situations both before and after an intervention that used a manipulative model or other materials in three experiments. Results indicate that different approaches appear to be necessary to facilitate children's proportional reasoning, depending on the reasoning…
Yan, Ying; Yi, Grace Y
2016-07-01
Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights of measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods. PMID:26328545
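The regression calibration idea the paper exploits can be shown in miniature. Under classical additive measurement error, the naive slope is attenuated by the reliability ratio, and dividing by that ratio undoes the attenuation; all numbers below are invented.

```python
# Classical additive error: observed W = X + U, with U independent of X.
# The naive slope from regressing on W is attenuated by the reliability
# ratio lam = var(X) / (var(X) + var(U)); regression calibration divides
# the naive estimate by lam. Variances and true slope below are made up.
var_x, var_u = 4.0, 1.0
true_beta = 0.7

lam = var_x / (var_x + var_u)     # reliability ratio
naive_beta = true_beta * lam      # what error-prone data would estimate
corrected = naive_beta / lam      # regression-calibration correction

print(lam, round(corrected, 3))   # 0.8 0.7
```

In practice lam is unknown and is itself estimated, e.g. from replicate measurements, which is where the theoretical work in papers like this one comes in.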
Modeling and Hazard Analysis Using STPA
NASA Astrophysics Data System (ADS)
Ishimatsu, Takuto; Leveson, Nancy; Thomas, John; Katahira, Masa; Miyamoto, Yuko; Nakao, Haruka
2010-09-01
A joint research project between MIT and JAXA/JAMSS is investigating the application of a new hazard analysis to the system and software in the HTV. Traditional hazard analysis focuses on component failures, but software does not fail in this way. Software most often contributes to accidents by commanding the spacecraft into an unsafe state (e.g., turning off the descent engines prematurely) or by not issuing required commands. That makes the standard hazard analysis techniques of limited usefulness for software-intensive systems, a category that includes most spacecraft built today. STPA is a new hazard analysis technique based on systems theory rather than reliability theory. It treats safety as a control problem rather than a failure problem. The goal of STPA, which is to create a set of scenarios that can lead to a hazard, is the same as that of FTA, but STPA includes a broader set of potential scenarios, including those in which no failures occur but problems arise due to unsafe and unintended interactions among the system components. STPA also provides more guidance to the analysts than traditional fault tree analysis does. Functional control diagrams are used to guide the analysis. In addition, JAXA uses a model-based system engineering development environment (created originally by Leveson and called SpecTRM) which also assists in the hazard analysis. One of the advantages of STPA is that it can be applied early in the system engineering and development process, in a safety-driven design process where hazard analysis drives the design decisions rather than waiting until reviews identify problems that are then costly or difficult to fix. It can also be applied in an after-the-fact analysis and hazard assessment, which is what we did in this case study. This paper describes the experimental application of STPA to the JAXA HTV in order to determine the feasibility and usefulness of the new hazard analysis technique. Because the HTV was originally developed using fault tree analysis
Wind shear modeling for aircraft hazard definition
NASA Technical Reports Server (NTRS)
Frost, W.; Camp, D. W.; Wang, S. T.
1978-01-01
Mathematical models of wind profiles were developed for use in fast-time and manned flight simulation studies aimed at defining and eliminating wind shear hazards. A set of wind profiles and associated wind shear characteristics for stable and neutral boundary layers, thunderstorms, and frontal winds potentially encounterable by aircraft in the terminal area is given. Engineering models of wind shear for direct hazard analysis are presented as mathematical formulae, graphs, tables, and computer lookup routines. The wind profile data used to establish the models are described in terms of location, acquisition method, time of observation, and number of data points up to 500 m. Recommendations, engineering interpretations, and guidelines for use of the data are given, and the range of applicability of the wind shear models is described.
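One standard ingredient of such profile sets is the logarithmic wind profile for the neutral boundary layer. The sketch below assumes hypothetical friction velocity and roughness length values, not parameters taken from this report.

```python
import math

def log_wind_profile(z, u_star=0.4, z0=0.05, kappa=0.4):
    """Neutral-boundary-layer log-law wind speed (m/s) at height z (m).
    u_star (friction velocity), z0 (roughness length) are invented values."""
    return (u_star / kappa) * math.log(z / z0)

# wind speed increases with height, steeply near the surface
profile = [round(log_wind_profile(z), 2) for z in (10, 50, 100, 500)]
print(profile)
```

Shear between two heights follows directly by differencing the profile, which is how a lookup-table hazard model would expose it to a flight simulator.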
ERIC Educational Resources Information Center
Liu, Xing
2008-01-01
The proportional odds (PO) model, which is also called the cumulative odds model (Agresti, 1996, 2002; Armstrong & Sloan, 1989; Long, 1997; Long & Freese, 2006; McCullagh, 1980; McCullagh & Nelder, 1989; Powers & Xie, 2000; O'Connell, 2006), is one of the most commonly used models for the analysis of ordinal categorical data and comes from the class…
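The cumulative-logit structure of the PO model is easy to state in code. The sketch below uses hypothetical threshold and slope parameters; the single slope shared across all cumulative logits is exactly the proportional odds assumption.

```python
import math

def category_probs(x, thetas, beta):
    """Proportional-odds model: logit P(Y <= j | x) = theta_j - beta * x,
    with one shared slope beta. thetas must be increasing; values invented."""
    # cumulative probabilities P(Y <= j), with P(Y <= J) = 1
    cum = [1 / (1 + math.exp(-(t - beta * x))) for t in thetas] + [1.0]
    # adjacent differences give the individual category probabilities
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

probs = category_probs(x=1.0, thetas=[-1.0, 0.5, 2.0], beta=0.8)
print([round(p, 3) for p in probs])  # four probabilities summing to 1
```

Fitting replaces the invented thetas and beta with maximum likelihood estimates; the "proportional" part is testable by checking whether a separate beta per cut-point fits materially better.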
Decision-Tree Models of Categorization Response Times, Choice Proportions, and Typicality Judgments
ERIC Educational Resources Information Center
Lafond, Daniel; Lacouture, Yves; Cohen, Andrew L.
2009-01-01
The authors present 3 decision-tree models of categorization adapted from T. Trabasso, H. Rollins, and E. Shaughnessy (1971) and use them to provide a quantitative account of categorization response times, choice proportions, and typicality judgments at the individual-participant level. In Experiment 1, the decision-tree models were fit to…
NASA Astrophysics Data System (ADS)
Wright, Vince
2014-03-01
Pirie and Kieren (1989, For the Learning of Mathematics, 9(3), 7-11; 1992, Journal of Mathematical Behavior, 11, 243-257; 1994a, Educational Studies in Mathematics, 26, 61-86; 1994b, For the Learning of Mathematics, 14(1), 39-43) created a model (P-K) that describes a dynamic and recursive process by which learners develop their mathematical understanding. The model was adapted to create the teaching model used in the New Zealand Numeracy Development Projects (Ministry of Education, 2007). A case study of a 3-week sequence of instruction with a group of eight 12- and 13-year-old students provided the data. The teacher/researcher used folding back to materials and images, and progression from materials to imaging to number properties, to assist students in developing their understanding of frequencies as proportions. The data show that successful implementation of the model depends on the teacher noticing and responding to the layers of understanding demonstrated by the students and on the careful selection of materials, problems, and situations. The study supports the use of the model as a useful part of teachers' instructional strategies and the importance of pedagogical content knowledge to the quality of the way the model is used.
Fuzzy portfolio model with fuzzy-input return rates and fuzzy-output proportions
NASA Astrophysics Data System (ADS)
Tsaur, Ruey-Chyn
2015-02-01
In financial markets, a short-term investment strategy is usually applied in portfolio selection in order to reduce investment risk; however, the economy is uncertain and the investment period is short. Further, an investor has incomplete information for selecting a portfolio with crisp proportions for each chosen security. In this paper we present a new method of constructing a fuzzy portfolio model for the parameters of fuzzy-input return rates and fuzzy-output proportions, based on possibilistic mean-standard deviation models. Furthermore, we consider both excess and shortage of investment in different economic periods by using a fuzzy constraint for the sum of the fuzzy proportions, and we also account for the risks of securities investment and the vagueness of incomplete information during periods of economic depression in the portfolio selection. Finally, we present a numerical example of a portfolio selection problem to illustrate the proposed model, and a sensitivity analysis is performed on the results.
Grievink, Liat Shavit; Penny, David; Hendy, Michael D.; Holland, Barbara R.
2010-01-01
Commonly used phylogenetic models assume a homogeneous process through time in all parts of the tree. However, it is known that these models can be too simplistic, as they do not account for nonhomogeneous lineage-specific properties. In particular, it is now widely recognized that, as constraints on sequences evolve, the proportion and positions of variable sites can vary between lineages, causing heterotachy. The extent to which this model misspecification affects tree reconstruction is still unknown. Here, we evaluate the effect of changes in the proportions and positions of variable sites on model fit and tree estimation. We consider 5 current models of nucleotide sequence evolution in a Bayesian Markov chain Monte Carlo framework as well as maximum parsimony (MP). We show that, for a tree with 4 lineages where 2 nonsister taxa undergo a change in the proportion of variable sites, tree reconstruction under the best-fitting model, which is chosen using a relative test, often results in the wrong tree. In this case, we found that an absolute test of model fit is a better predictor of tree estimation accuracy. We also found further evidence that MP is not immune to heterotachy. In addition, we show that increased sampling of taxa that have undergone a change in the proportions and positions of variable sites is critical for accurate tree reconstruction. PMID:20525636
Likelihood approaches for proportional likelihood ratio model with right-censored data.
Zhu, Hong
2014-06-30
Regression methods for survival data with right censoring have been extensively studied under semiparametric transformation models such as the Cox regression model and the proportional odds model. However, their practical application could be limited because of possible violation of model assumptions or lack of a ready interpretation for the regression coefficients in some cases. As an alternative, in this paper, the proportional likelihood ratio model introduced by Luo and Tsai is extended to flexibly model the relationship between survival outcome and covariates. This model has a natural connection with many important semiparametric models, such as the generalized linear model and the density ratio model, and is closely related to biased sampling problems. Compared with the semiparametric transformation model, the proportional likelihood ratio model is appealing and practical in many ways because of its model flexibility and quite direct clinical interpretation. We present two likelihood approaches for the estimation and inference of the target regression parameters under independent and dependent censoring assumptions. Based on a conditional likelihood approach using uncensored failure times, a numerically simple estimation procedure is developed by maximizing a pairwise pseudo-likelihood. We also develop a full likelihood approach, in which the most efficient maximum likelihood estimator is obtained by profile likelihood. Simulation studies are conducted to assess the finite-sample properties of the proposed estimators and compare the efficiency of the two likelihood approaches. An application to survival data for bone marrow transplantation patients with acute leukemia is provided to illustrate the proposed method and other approaches for handling non-proportionality. The relative merits of these methods are discussed in concluding remarks. PMID:24500821
Application of a hazard-based visual predictive check to evaluate parametric hazard models.
Huh, Yeamin; Hutmacher, Matthew M
2016-02-01
Parametric models used in time-to-event analyses are typically evaluated by survival-based visual predictive checks (VPCs): Kaplan-Meier survival curves for the observed data are compared with those estimated from model-simulated data. Because the derivative of the log of the survival curve is related to the hazard--the quantity typically modeled in parametric analysis--isolating, interpreting, and correcting deficiencies in the hazard model by inspection of survival-based VPCs is indirect and thus more difficult. The purpose of this study is to assess the performance of nonparametric estimators of hazard functions to evaluate their viability as VPC diagnostics. Histogram-based and kernel-smoothing estimators were evaluated in terms of bias in estimating the hazard for Weibull and bathtub-shaped hazard scenarios. After the evaluation of bias, these nonparametric estimators were assessed as a method for VPC evaluation of the hazard model. The results showed that the nonparametric hazard estimators performed reasonably at the sample sizes studied, with greater bias near the boundaries (time equal to 0 and the last observation), as expected. Flexible bandwidth and boundary correction methods reduced these biases. All the nonparametric estimators indicated a misfit of the Weibull model when the true hazard had a bathtub shape. Overall, hazard-based VPC plots enabled more direct interpretation of the VPC results compared to survival-based VPC plots. PMID:26563504
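A minimal version of the histogram-based hazard estimator evaluated here divides the number of events in each time bin by the person-time at risk in that bin. The data below are invented, and boundary handling is deliberately crude.

```python
# Invented event/censoring times; 1 = event observed, 0 = censored.
times    = [1.2, 2.5, 2.7, 4.1, 5.0, 6.3, 7.8, 9.9]
observed = [1,   1,   0,   1,   1,   0,   1,   1]

bin_width = 2.5
n_bins = 4
hazard = []
for b in range(n_bins):
    lo, hi = b * bin_width, (b + 1) * bin_width
    # events falling in this bin
    events = sum(1 for t, d in zip(times, observed) if lo <= t < hi and d == 1)
    # person-time each subject contributes to this bin while still at risk
    ptime = sum(min(t, hi) - lo for t in times if t > lo)
    hazard.append(events / ptime if ptime > 0 else 0.0)

print([round(h, 3) for h in hazard])
```

Overlaying such a piecewise-constant estimate on the model's parametric hazard, per simulated replicate, is the hazard-based VPC idea: a bathtub-shaped estimate sitting over a monotone Weibull hazard exposes the misfit directly.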
Nevill, A M; Allen, S V; Ingham, S A
2011-02-01
Previous studies have investigated the determinants of indoor rowing using correlations and linear regression. However, the power demands of ergometer rowing are proportional to the cube of the flywheel's (and boat's) speed; a rower's speed, therefore, should be proportional to the cube root (power 0.33) of power expended. Hence, the purpose of the present study was to explore the relationship between 2000 m indoor rowing speed and various measures of power in 76 elite rowers using proportional, curvilinear allometric models. The best single predictor of 2000 m rowing ergometer performance was power at VO2max raised to the 0.28 power (WVO2max^0.28), which explained R^2 = 95.3% of the variance in rowing speed. The model realistically describes the greater increment in power required to improve a rower's performance by the same amount at higher speeds compared with slower speeds. Furthermore, the fitted exponent, 0.28 (95% confidence interval 0.226-0.334), encompasses 0.33, supporting the assumption that rowing speed is proportional to the cube root of power expended. Despite an R^2 of 95.3%, the initial model was unable to explain "sex" and "weight-class" differences in rowing performance. By incorporating anaerobic as well as aerobic determinants, the resulting curvilinear allometric model was common to all rowers, irrespective of sex and weight class. PMID:19883389
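The allometric modelling approach can be sketched as ordinary least squares on log-transformed data. With noise-free synthetic data generated from an exact cube-root law (the power values and coefficient below are invented), the fit recovers the exponent 1/3.

```python
import math

# Synthetic, noise-free data from speed = a * power^(1/3); values invented.
powers = [200, 250, 300, 350, 400, 450]        # mean power, W
speeds = [4.2 * p ** (1 / 3) for p in powers]  # exact cube-root law

# OLS on the log-log scale: log(speed) = log(a) + b * log(power)
xs = [math.log(p) for p in powers]
ys = [math.log(s) for s in speeds]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
     / sum((x - xbar) ** 2 for x in xs))
a = math.exp(ybar - b * xbar)

print(round(b, 3))  # 0.333, the cube-root exponent
```

With real data the fitted b comes with a confidence interval, and the paper's check that the interval encompasses 0.33 is exactly the test of the proportionality assumption.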
Regression model estimation of early season crop proportions: North Dakota, some preliminary results
NASA Technical Reports Server (NTRS)
Lin, K. K. (Principal Investigator)
1982-01-01
To estimate crop proportions early in the season, an approach is proposed based on: use of a regression-based prediction equation to obtain an a priori estimate for specific major crop groups; modification of this estimate using current-year LANDSAT and weather data; and a breakdown of the major crop groups into specific crops by regression models. Results from the development and evaluation of appropriate regression models for the first portion of the proposed approach are presented. The results show that the model predicts 1980 crop proportions very well at both the county and crop reporting district levels. In terms of planted acreage, the model underpredicted the 1980 published county-level planted acreage by 9.1 percent. At the crop reporting district level, it matched the 1980 published planted acreage almost exactly, overpredicting by just 0.92 percent.
Satellite image collection modeling for large area hazard emergency response
NASA Astrophysics Data System (ADS)
Liu, Shufan; Hodgson, Michael E.
2016-08-01
Timely collection of critical hazard information is the key to intelligent and effective hazard emergency response decisions. Satellite remote sensing imagery provides an effective way to collect critical information. Natural hazards, however, often have large impact areas - larger than a single satellite scene. Additionally, the hazard impact area may be discontinuous, particularly in flooding or tornado hazard events. In this paper, a spatial optimization model is proposed to solve the large-area satellite image acquisition planning problem in the context of hazard emergency response. In the model, a large hazard impact area is represented as multiple polygons, and image collection priorities for different portions of the impact area are addressed. The optimization problem is solved with an exact algorithm. Application results demonstrate that the proposed method can address the satellite image acquisition planning problem. A spatial decision support system supporting the optimization model was developed. Several examples of image acquisition problems are used to demonstrate the complexity of the problem and to derive optimized solutions.
ERIC Educational Resources Information Center
Miller, Jane Lincoln; Fey, James T.
2000-01-01
Explores strategies to encourage students' understanding of proportional reasoning. Conducts a study to compare the proportional reasoning of students studying one of the new standards-based curricula with that of students from a control group. (ASK)
A model for the secondary scintillation pulse shape from a gas proportional scintillation counter
NASA Astrophysics Data System (ADS)
Kazkaz, K.; Joshi, T. H.
2016-03-01
Proportional scintillation counters (PSCs), both single- and dual-phase, can measure the scintillation (S1) and ionization (S2) channels from particle interactions within the detector volume. The signal obtained from these detectors depends first on the physics of the medium (the initial scintillation and ionization) and second on how the physics of the detector manipulates the resulting photons and liberated electrons. In this paper we develop a model of the detector physics that incorporates event topology, detector geometry, electric field configuration, purity, optical properties of components, and wavelength shifters. We present an analytic form of the model, which allows for general study of detector design and operation, and a Monte Carlo model which enables a more detailed exploration of S2 events. This model may be used to study systematic effects in current detectors such as energy and position reconstruction, pulse shape discrimination, event topology, dead time calculations, purity, and electric field uniformity. We present a comparison of this model with experimental data collected with an argon gas proportional scintillation counter (GPSC), operated at 20 °C and 1 bar and irradiated with an internal, collimated 55Fe source. Additionally, we discuss how the model may be incorporated in Monte Carlo simulations of both GPSCs and dual-phase detectors, increasing the reliability of the simulation results and allowing for tests of the experimental data analysis algorithms.
Recent Progress in Modelling the RXTE Proportional Counter Array Instrumental Background
NASA Astrophysics Data System (ADS)
Jahoda, K.; Strohmayer, T. E.; Smith, D. A.; Stark, M. J.
1999-04-01
We present recent progress in the modelling of the instrumental background for the RXTE Proportional Counter Array. Unmodelled systematic errors for faint sources are now <= 0.2 ct/sec/3 PCU in the 2-10 keV band for data selected from the front layer. We present the status of our search for additional correlations. We also present extensions of the times and conditions under which the L7 model is applicable: to early mission times (prior to April 1996) and to sources as bright as ~ 3000 count/sec/detector (comparable to the Crab).
Hazardous gas model evaluation with field observations
NASA Astrophysics Data System (ADS)
Hanna, S. R.; Chang, J. C.; Strimaitis, D. G.
Fifteen hazardous gas models were evaluated using data from eight field experiments. The models include seven publicly available models (AFTOX, DEGADIS, HEGADAS, HGSYSTEM, INPUFF, OB/DG and SLAB), six proprietary models (AIRTOX, CHARM, FOCUS, GASTAR, PHAST and TRACE), and two "benchmark" analytical models (the Gaussian Plume Model and the analytical approximations to the Britter and McQuaid Workbook nomograms). The field data were divided into three groups—continuous dense gas releases (Burro LNG, Coyote LNG, Desert Tortoise NH3 gas and aerosols, Goldfish HF gas and aerosols, and Maplin Sands LNG), continuous passive gas releases (Prairie Grass and Hanford), and instantaneous dense gas releases (Thorney Island freon). The dense gas models that produced the most consistent predictions of plume centerline concentrations across the dense gas data sets are the Britter and McQuaid, CHARM, GASTAR, HEGADAS, HGSYSTEM, PHAST, SLAB and TRACE models, with relative mean biases of about ±30% or less and magnitudes of relative scatter that are about equal to the mean. The dense gas models tended to overpredict the plume widths and underpredict the plume depths by about a factor of two. All models except GASTAR, TRACE, and the area source version of DEGADIS perform fairly well with the continuous passive gas data sets. Some sensitivity studies were also carried out. It was found that three of the more widely used publicly available dense gas models (DEGADIS, HGSYSTEM and SLAB) predicted increases in concentration of about 70% as roughness length decreased by an order of magnitude for the Desert Tortoise and Goldfish field studies. It was also found that none of the dense gas models considered came close to simulating the observed factor-of-two increase in peak concentrations as averaging time decreased from several minutes to 1 s. Because of their assumption that a concentrated dense gas core existed that was unaffected by variations in averaging time, the dense gas
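Evaluation statistics of the kind reported here (relative mean bias and relative scatter) are conventionally computed as fractional bias and normalized mean-square error. The observed and predicted concentrations below are invented for illustration.

```python
# Invented paired observed/predicted plume centerline concentrations.
obs  = [10.0, 25.0, 40.0, 55.0]
pred = [12.0, 20.0, 45.0, 50.0]

mo = sum(obs) / len(obs)
mp = sum(pred) / len(pred)

# Fractional bias: 0 means no mean bias; +/-0.67 is roughly a factor of two.
fb = 2 * (mo - mp) / (mo + mp)

# Normalized mean-square error: relative scatter about the observations.
nmse = sum((o - p) ** 2 for o, p in zip(obs, pred)) / (len(obs) * mo * mp)

print(round(fb, 4), round(nmse, 4))
```

A "relative mean bias of about ±30% or less" corresponds to |FB| below roughly 0.3, which is how the consistency claim in the abstract can be read quantitatively.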
Lahar Hazard Modeling at Tungurahua Volcano, Ecuador
NASA Astrophysics Data System (ADS)
Sorensen, O. E.; Rose, W. I.; Jaya, D.
2003-04-01
LAHARZ, a program for delineating lahar-hazard zones using a digital elevation model (DEM), was used to construct a hazard map for the volcano. The 10 meter resolution DEM was constructed for Tungurahua Volcano using scanned topographic lines obtained from the GIS Department at the Escuela Politécnica Nacional, Quito, Ecuador. The steep topographic gradients and rapid downcutting of most rivers draining the edifice prevent the deposition of lahars on the lower flanks of Tungurahua. Modeling confirms the high degree of flow channelization in the deep Tungurahua canyons. Inundation zones observed and shown by LAHARZ at Baños yield identification of safe zones within the city which would provide safety from even the largest magnitude lahar expected.
Minimum risk route model for hazardous materials
Ashtakala, B.; Eno, L.A.
1996-09-01
The objective of this study is to determine the minimum risk route for transporting a specific hazardous material (HM) between a point of origin and a point of destination (O-D pair) in the study area, minimizing risk to population and environment. The southern part of Quebec is chosen as the study area, and major cities are identified as points of origin and destination on the highway network. Three classes of HM, namely chlorine gas, liquefied petroleum gas (LPG), and sulfuric acid, are chosen. A minimum risk route model has been developed to determine minimum risk routes between an O-D pair by using population or environment risk units as link impedances. The risk units for each link are computed by taking into consideration the probability of an accident and its consequences on that link. The results show that, between the same O-D pair, the minimum risk routes differ for the various HM. The concept of risk dissipation from origin to destination on the minimum risk route has been developed, and dissipation curves are included.
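The minimum-risk-route idea reduces to a shortest-path search in which link impedances are risk units rather than distances. The sketch below uses a hypothetical four-node network and Dijkstra's algorithm; it is not the authors' network or data.

```python
import heapq

# Hypothetical highway network: node -> list of (neighbor, risk_units),
# where a link's risk units combine accident probability and consequence.
network = {
    "O": [("A", 4.0), ("B", 1.5)],
    "A": [("D", 2.0)],
    "B": [("A", 1.0), ("D", 6.0)],
    "D": [],
}

def min_risk_route(graph, origin, dest):
    """Dijkstra's algorithm with risk units as link impedances."""
    heap = [(0.0, origin, [origin])]
    settled = set()
    while heap:
        risk, node, path = heapq.heappop(heap)
        if node == dest:
            return risk, path
        if node in settled:
            continue
        settled.add(node)
        for nbr, r in graph[node]:
            if nbr not in settled:
                heapq.heappush(heap, (risk + r, nbr, path + [nbr]))
    return float("inf"), []

risk, path = min_risk_route(network, "O", "D")
print(risk, path)  # the O -> B -> A -> D route accumulates the least risk (4.5)
```

Swapping in a different impedance table per material class reproduces the paper's observation that different HM classes yield different minimum risk routes over the same network.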
Incident Duration Modeling Using Flexible Parametric Hazard-Based Models
2014-01-01
Assessing and prioritizing the duration time and effects of traffic incidents on major roads present significant challenges for road network managers. This study examines the effect of numerous factors associated with various types of incidents on their duration and proposes an incident duration prediction model. Several parametric accelerated failure time hazard-based models were examined, including Weibull, log-logistic, log-normal, and generalized gamma, as well as all models with gamma heterogeneity and flexible parametric hazard-based models with degrees of freedom ranging from one to ten, by analyzing a traffic incident dataset obtained from the Incident Reporting and Dispatching System in Beijing in 2008. Results show that different factors significantly affect different incident time phases, and the best-fitting distribution differs across phases. Given the best hazard-based model for each incident time phase, the prediction results are reasonable for most incidents. The results of this study can aid traffic incident management agencies not only in implementing strategies that would reduce incident duration, and thus reduce congestion, secondary incidents, and the associated human and economic losses, but also in effectively predicting incident duration time. PMID:25530753
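The Weibull member of the parametric family examined here has a hazard that rises or falls monotonically depending on its shape parameter. The sketch below evaluates that hazard with invented shape and scale values for an incident-duration setting.

```python
def weibull_hazard(t, k, lam):
    """Weibull hazard h(t) = (k/lam) * (t/lam)**(k-1).
    k > 1: rising hazard (incident increasingly likely to clear);
    k < 1: falling hazard. Parameters below are hypothetical."""
    return (k / lam) * (t / lam) ** (k - 1)

k, lam = 1.5, 30.0  # invented shape and scale (minutes) for incident duration
hz = [weibull_hazard(t, k, lam) for t in (10, 20, 40, 60)]
print([round(h, 4) for h in hz])  # monotonically increasing since k > 1
```

A bathtub- or hump-shaped clearance hazard cannot be captured by any single Weibull, which is the motivation for the flexible spline-based parametric models with several degrees of freedom that the study compares.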
Cai, Gaigai; Chen, Xuefeng; Li, Bing; Chen, Baojia; He, Zhengjia
2012-01-01
The reliability of cutting tools is critical to machining precision and production efficiency. The conventional statistic-based reliability assessment method aims at providing a general and overall estimation of reliability for a large population of identical units under given and fixed conditions. However, it has limited effectiveness in depicting the operational characteristics of a cutting tool. To overcome this limitation, this paper proposes an approach to assess the operation reliability of cutting tools. A proportional covariate model is introduced to construct the relationship between operation reliability and condition monitoring information. The wavelet packet transform and an improved distance evaluation technique are used to extract sensitive features from vibration signals, and a covariate function is constructed based on the proportional covariate model. Ultimately, the failure rate function of the cutting tool being assessed is calculated using the baseline covariate function obtained from a small sample of historical data. Experimental results and a comparative study show that the proposed method is effective for assessing the operation reliability of cutting tools. PMID:23201980
Coats, D.W.; Murray, R.C.
1985-08-01
Lawrence Livermore National Laboratory (LLNL) has developed seismic and wind hazard models for the Office of Nuclear Safety (ONS), Department of Energy (DOE). The work is part of a three-phase effort aimed at establishing uniform building design criteria for seismic and wind hazards at DOE sites throughout the United States. This report summarizes the final wind/tornado hazard models recommended for each site and the methodology used to develop these models. Final seismic hazard models have been published separately by TERA Corporation. In the final phase, it is anticipated that the DOE will use the hazard models to establish uniform criteria for the design and evaluation of critical facilities. 19 refs., 3 figs., 9 tabs.
A model based on crowdsourcing for detecting natural hazards
NASA Astrophysics Data System (ADS)
Duan, J.; Ma, C.; Zhang, J.; Liu, S.; Liu, J.
2015-12-01
Remote sensing technology provides new methods for the detection, early warning, mitigation, and relief of natural hazards. Given the suddenness and the unpredictable location of natural hazards, as well as the practical demands of hazards work, this article proposes an evaluation model for the remote sensing detection of natural hazards based on crowdsourcing. First, using a crowdsourcing model and drawing on the Internet and the power of hundreds of millions of Internet users, the evaluation model provides visual interpretation of high-resolution remote sensing images of the hazard area and collects large volumes of valuable disaster data. Second, the model adopts a dynamic voting consistency strategy to evaluate the disaster data provided by the crowdsourcing workers. Third, the model pre-estimates disaster severity with a disaster pre-evaluation model based on regional buffers. Finally, the model triggers the corresponding expert system according to the forecast results. This model breaks the boundary between geographic information professionals and the public, realizes public participation and citizen science, and improves the accuracy and timeliness of hazard assessment results.
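The abstract does not define its "dynamic voting consistency" strategy; one plausible reading is that a crowdsourced interpretation is accepted only once the leading answer's share of votes crosses a threshold. A toy sketch under that assumption (labels and the threshold value are invented):

```python
from collections import Counter

def consensus(votes, threshold=0.6):
    """Return the leading label once its vote share reaches the threshold,
    otherwise None (keep collecting votes). Threshold is illustrative."""
    if not votes:
        return None
    label, count = Counter(votes).most_common(1)[0]
    return label if count / len(votes) >= threshold else None

print(consensus(["landslide", "landslide", "flood", "landslide"]))  # landslide (0.75)
print(consensus(["landslide", "flood"]))                            # None (0.5 < 0.6)
```

Evaluating votes dynamically (after each new worker) rather than in a fixed batch is what lets the system stop collecting interpretations as soon as agreement is strong enough.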
Schwartz, R S; Huber, K C; Murphy, J G; Edwards, W D; Camrud, A R; Vlietstra, R E; Holmes, D R
1992-02-01
Restenosis is a reparative response to arterial injury occurring with percutaneous coronary revascularization. However, the quantitative characteristics of the relation between vessel injury and the magnitude of restenotic response remain unknown. This study was thus performed to determine the relation between severity of vessel wall injury and the thickness of resulting neointimal proliferation in a porcine model of coronary restenosis. Twenty-six porcine coronary artery segments in 24 pigs were subjected to deep arterial injury with use of overexpanded, percutaneously delivered tantalum wire coils. The vessels were studied microscopically 4 weeks after coil implantation to measure the relation between the extent of injury and the resulting neointimal thickness. For each wire site, a histopathologic score proportional to injury depth and the neointimal thicknesses at that site were determined. Mean injury scores were compared with both mean neointimal thickness and planimetry-derived area percent lumen stenosis. The severity of vessel injury strongly correlated with neointimal thickness and percent diameter stenosis (p less than 0.001). Neointimal proliferation resulting from a given wire was related to injury severity in adjacent wires, suggesting an interaction among effects at injured sites. If the results in this model apply to human coronary arteries, restenosis may depend on the degree of vessel injury sustained during angioplasty. PMID:1732351
2015 USGS Seismic Hazard Model for Induced Seismicity
NASA Astrophysics Data System (ADS)
Petersen, M. D.; Mueller, C. S.; Moschetti, M. P.; Hoover, S. M.; Ellsworth, W. L.; Llenos, A. L.; Michael, A. J.
2015-12-01
Over the past several years, the seismicity rate has increased markedly in multiple areas of the central U.S. Studies have tied the majority of this increased activity to wastewater injection in deep wells and hydrocarbon production. These earthquakes are induced by human activities that change rapidly based on economic and policy decisions, making them difficult to forecast. Our 2014 USGS National Seismic Hazard Model and previous models are intended to provide the long-term hazard (2% probability of exceedance in 50 years) and are based on seismicity rates and patterns observed mostly from tectonic earthquakes. However, potentially induced earthquakes were identified in 14 regions that were not included in the earthquake catalog used for constructing the 2014 model. We recognized the importance of considering these induced earthquakes in a separate hazard analysis, and as a result in April 2015 we released preliminary models that explored the impact of this induced seismicity on the hazard. Several factors are important in determining the hazard from induced seismicity: period of the catalog that optimally forecasts the next year's activity, earthquake magnitude-rate distribution, earthquake location statistics, maximum magnitude, ground motion models, and industrial drivers such as injection rates. The industrial drivers are not currently available in a form that we can implement in a 1-year model. Hazard model inputs have been evaluated by a broad group of scientists and engineers to assess the range of acceptable models. Results indicate that next year's hazard is significantly higher by more than a factor of three in Oklahoma, Texas, and Colorado compared to the long-term 2014 hazard model. These results have raised concern about the impacts of induced earthquakes on the built environment and have led to many engineering and policy discussions about how to mitigate these effects for the more than 7 million people that live near areas of induced seismicity.
A high-resolution global flood hazard model
NASA Astrophysics Data System (ADS)
Sampson, Christopher C.; Smith, Andrew M.; Bates, Paul B.; Neal, Jeffrey C.; Alfieri, Lorenzo; Freer, Jim E.
2015-09-01
Floods are a natural hazard that affect communities worldwide, but to date the vast majority of flood hazard research and mapping has been undertaken by wealthy developed nations. As populations and economies have grown across the developing world, so too has demand from governments, businesses, and NGOs for modeled flood hazard data in these data-scarce regions. We identify six key challenges faced when developing a flood hazard model that can be applied globally and present a framework methodology that leverages recent cross-disciplinary advances to tackle each challenge. The model produces return period flood hazard maps at ~90 m resolution for the whole terrestrial land surface between 56°S and 60°N, and results are validated against high-resolution government flood hazard data sets from the UK and Canada. The global model is shown to capture between two thirds and three quarters of the area determined to be at risk in the benchmark data without generating excessive false positive predictions. When aggregated to ~1 km, mean absolute error in flooded fraction falls to ~5%. The full complexity global model contains an automatically parameterized subgrid channel network, and comparison to both a simplified 2-D only variant and an independently developed pan-European model shows the explicit inclusion of channels to be a critical contributor to improved model performance. While careful processing of existing global terrain data sets enables reasonable model performance in urban areas, adoption of forthcoming next-generation global terrain data sets will offer the best prospect for a step-change improvement in model performance.
Natural Phenomena Hazards Modeling Project: Flood hazard models for Department of Energy sites
Savy, J.B.; Murray, R.C.
1988-05-01
For eight sites, the evaluation of flood hazards was considered in two steps. First, a screening assessment was performed to determine whether flood hazards may impact DOE operations. The screening analysis consisted of a preliminary flood hazard assessment that provides an initial estimate of the site design basis. The second step involves a review of the vulnerability of on-site facilities by the site manager; based on the results of the preliminary flood hazard assessment and a review of site operations, the manager can decide whether flood hazards should be considered a part of the design basis. The scope of the preliminary flood hazard analysis was restricted to evaluating the flood hazards that may exist in proximity to a site. The analysis does not involve an assessment of the potential of encroachment of flooding at specific on-site locations. Furthermore, the screening analysis does not consider localized flooding at a site due to precipitation (i.e., local run-off, storm sewer capacity, roof drainage). These issues were reserved for consideration by the DOE site manager. 9 refs., 18 figs.
Xia, Qiang; Kobrak, Paul; Wiewel, Ellen W; Torian, Lucia V
2015-01-01
A static model of undiagnosed and diagnosed HIV infections by year of infection and year of diagnosis was constructed to examine the impact of changes in HIV case-finding and HIV incidence on the proportion of late diagnoses. With no changes in HIV case-finding or incidence, the proportion of late diagnoses in the USA would remain stable at the 2010 level, 32.0%; with a 10% increase in HIV case-finding and no change in HIV incidence, the estimated proportion of late diagnoses would steadily decrease to 28.1% in 2019; with a 5% annual increase in HIV incidence and no change in case-finding, the proportion would decrease to 25.2% in 2019; with a 5% annual decrease in HIV incidence and no change in case-finding, the proportion would steadily increase to 33.2% in 2019; with a 10% increase in HIV case-finding accompanied by a 5% annual decrease in HIV incidence, the proportion would decrease from 32.0% to 30.3% in 2011 and then steadily increase to 35.2% in 2019. In all five scenarios, the proportion of late diagnoses would remain stable after 2019. The stability of the proportion is explained by the definition of the measure itself: both the numerator and denominator are affected by HIV case-finding, making the measure less sensitive to change. For this reason, we should cautiously interpret the proportion of late diagnoses as a marker of the success or failure of expanding HIV testing programs. PMID:25244628
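The abstract's point that case-finding affects both numerator and denominator can be seen with a back-of-the-envelope calculation (the counts and boost factors below are invented for illustration, not the paper's model):

```python
def late_proportion(n_late, n_early, boost_late=1.0, boost_early=1.0):
    """Proportion of late diagnoses when case-finding scales the counts of
    late and early diagnoses. Counts are invented for illustration."""
    late = n_late * boost_late
    early = n_early * boost_early
    return late / (late + early)

baseline = late_proportion(320, 680)                # 32.0%, the 2010 level
uniform = late_proportion(320, 680, 1.1, 1.1)       # boost hits both terms equally
early_biased = late_proportion(320, 680, 1.0, 1.1)  # extra testing finds early cases
```

A case-finding boost that scales both kinds of diagnoses equally leaves the proportion unchanged; the measure moves only to the extent that extra testing preferentially finds early (or late) infections, which is why it is a blunt marker of testing-program success.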
Babapour, R; Naghdi, R; Ghajar, I; Ghodsi, R
2015-07-01
The rock proportion of subsoil directly influences the cost of embankment in forest road construction. Therefore, a reliable framework for estimating the rock ratio prior to road planning could lead to lighter excavation and lower-cost operations. Prediction of rock proportion was subjected to statistical analyses using an Artificial Neural Network (ANN) in MATLAB and five link functions of ordinal logistic regression (OLR), according to rock type and terrain slope properties. In addition to bedrock and slope maps, more than 100 sample observations of rock proportion, recorded by geologists, were collected from every available bedrock type in each slope class. Four predictive models of rock proportion were developed from the independent variables, applying both the selected probit link function of OLR and the layer-recurrent and feed-forward back-propagation networks of ANN. In the ANN, different numbers of neurons were considered for the hidden layer(s). Goodness-of-fit measures showed that the ANN models produced better results than OLR, with R² = 0.72 and root mean square error = 0.42. Furthermore, to show the applicability of the proposed approach and to illustrate the variability of rock proportion resulting from model application, the optimum models were applied to a mountainous forest where a forest road network had been constructed in the past. PMID:26092244
Checking Fine and Gray Subdistribution Hazards Model with Cumulative Sums of Residuals
Li, Jianing; Scheike, Thomas H.; Zhang, Mei-Jie
2015-01-01
Recently, Fine and Gray (1999) proposed a semi-parametric proportional regression model for the subdistribution hazard function, which has been used extensively for analyzing competing risks data. However, failure of model adequacy could lead to severe bias in parameter estimation, and only a limited contribution has been made to checking the model assumptions. In this paper, we present a class of analytical methods and graphical approaches for checking the assumptions of Fine and Gray's model. The proposed goodness-of-fit test procedures are based on the cumulative sums of residuals, which validate the model in three aspects: (1) proportionality of the hazard ratio, (2) the linear functional form and (3) the link function. For each assumption tested, we provide a p-value and a visualized plot against the null hypothesis using a simulation-based approach. We also consider an omnibus test for overall evaluation against any model misspecification. The proposed tests perform well in simulation studies and are illustrated with two real data examples. PMID:25421251
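Fine and Gray's model targets the subdistribution hazard, whose cumulative form is the cumulative incidence function (CIF). The nonparametric CIF that the model describes can be sketched directly: at each event time u, add S(u-) · d_cause(u) / n(u), where S is the overall event-free survival. The data and cause coding (0 = censored) below are illustrative, not from the paper:

```python
def cumulative_incidence(times, causes, cause, t):
    """Nonparametric cumulative incidence of one competing risk by time t.
    Cause 0 codes censoring; other integers code competing event types."""
    data = sorted(zip(times, causes))
    at_risk = len(data)
    surv = 1.0  # overall event-free survival just before the current time
    cif = 0.0
    i = 0
    while i < len(data) and data[i][0] <= t:
        u = data[i][0]
        tied = [c for (uu, c) in data if uu == u]
        d_any = sum(1 for c in tied if c != 0)
        d_k = sum(1 for c in tied if c == cause)
        cif += surv * d_k / at_risk
        surv *= 1.0 - d_any / at_risk
        at_risk -= len(tied)
        i += len(tied)
    return cif

# Cause-1 events at t=1 and t=3, a cause-2 event at t=2, censoring at t=4.
cif1 = cumulative_incidence([1, 2, 3, 4], [1, 2, 1, 0], cause=1, t=3.0)
```

Unlike a cause-specific Kaplan-Meier complement, the CIF correctly treats subjects who fail from the competing cause as no longer able to experience the event of interest.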
Agent-based Modeling with MATSim for Hazards Evacuation Planning
NASA Astrophysics Data System (ADS)
Jones, J. M.; Ng, P.; Henry, K.; Peters, J.; Wood, N. J.
2015-12-01
Hazard evacuation planning requires robust modeling tools and techniques, such as least cost distance or agent-based modeling, to gain an understanding of a community's potential to reach safety before event (e.g. tsunami) arrival. Least cost distance modeling provides a static view of the evacuation landscape with an estimate of travel times to safety from each location in the hazard space. With this information, practitioners can assess a community's overall ability for timely evacuation. More information may be needed if evacuee congestion creates bottlenecks in the flow patterns. Dynamic movement patterns are best explored with agent-based models that simulate movement of and interaction between individual agents as evacuees through the hazard space, reacting to potential congestion areas along the evacuation route. The multi-agent transport simulation model MATSim is an agent-based modeling framework that can be applied to hazard evacuation planning. Developed jointly by universities in Switzerland and Germany, MATSim is open-source software written in Java and freely available for modification or enhancement. We successfully used MATSim to illustrate tsunami evacuation challenges in two island communities in California, USA, that are impacted by limited escape routes. However, working with MATSim's data preparation, simulation, and visualization modules in an integrated development environment requires a significant investment of time to develop the software expertise to link the modules and run a simulation. To facilitate our evacuation research, we packaged the MATSim modules into a single application tailored to the needs of the hazards community. By exposing the modeling parameters of interest to researchers in an intuitive user interface and hiding the software complexities, we bring agent-based modeling closer to practitioners and provide access to the powerful visual and analytic information that this modeling can provide.
Toward Building a New Seismic Hazard Model for Mainland China
NASA Astrophysics Data System (ADS)
Rong, Y.; Xu, X.; Chen, G.; Cheng, J.; Magistrale, H.; Shen, Z.
2015-12-01
At present, the only publicly available seismic hazard model for mainland China was generated by Global Seismic Hazard Assessment Program in 1999. We are building a new seismic hazard model by integrating historical earthquake catalogs, geological faults, geodetic GPS data, and geology maps. To build the model, we construct an Mw-based homogeneous historical earthquake catalog spanning from 780 B.C. to present, create fault models from active fault data using the methodology recommended by Global Earthquake Model (GEM), and derive a strain rate map based on the most complete GPS measurements and a new strain derivation algorithm. We divide China and the surrounding regions into about 20 large seismic source zones based on seismotectonics. For each zone, we use the tapered Gutenberg-Richter (TGR) relationship to model the seismicity rates. We estimate the TGR a- and b-values from the historical earthquake data, and constrain corner magnitude using the seismic moment rate derived from the strain rate. From the TGR distributions, 10,000 to 100,000 years of synthetic earthquakes are simulated. Then, we distribute small and medium earthquakes according to locations and magnitudes of historical earthquakes. Some large earthquakes are distributed on active faults based on characteristics of the faults, including slip rate, fault length and width, and paleoseismic data, and the rest to the background based on the distributions of historical earthquakes and strain rate. We evaluate available ground motion prediction equations (GMPE) by comparison to observed ground motions. To apply appropriate GMPEs, we divide the region into active and stable tectonics. The seismic hazard will be calculated using the OpenQuake software developed by GEM. To account for site amplifications, we construct a site condition map based on geology maps. The resulting new seismic hazard map can be used for seismic risk analysis and management, and business and land-use planning.
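The tapered Gutenberg-Richter relation used above to model zone seismicity rates has a simple closed form in seismic moment: the fraction of events with moment at least M is (M_t/M)^β · exp((M_t − M)/M_c), a power law rolled off at the corner moment M_c. A sketch (the β value and corner magnitude are illustrative, not the paper's fitted values):

```python
import math

def moment_from_magnitude(mw):
    """Hanks-Kanamori relation: seismic moment in N*m from moment magnitude
    (the additive constant is written as 9.05 or 9.1 in the literature)."""
    return 10.0 ** (1.5 * mw + 9.05)

def tgr_fraction_above(m, m_threshold, beta=0.65, corner_mw=8.0):
    """Tapered Gutenberg-Richter survival function in seismic moment."""
    m_corner = moment_from_magnitude(corner_mw)
    return (m_threshold / m) ** beta * math.exp((m_threshold - m) / m_corner)

m_t = moment_from_magnitude(5.0)  # e.g. a catalog completeness threshold of Mw 5
```

The corner magnitude is the lever constrained by the geodetic moment rate: a lower corner diverts moment into more moderate events, a higher corner permits rarer great earthquakes.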
Schiller, S. R.; Warren, M. L.; Auslander, D. M.
1980-01-01
Common control strategies used to regulate the flow of liquid through flat-plate solar collectors are discussed and evaluated using a dynamic collector model. Performance of all strategies is compared using different set points, flow rates, insolation levels and patterns (clear and cloudy days), and ambient temperature conditions. The unique characteristic of the dynamic collector model is that it includes effects of collector capacitance. In general, capacitance has a minimal effect on long term collector performance; however, short term temperature response and the energy-storage capability of collector capacitance are shown to play significant roles in comparing on/off and proportional controllers. Inclusion of these effects has produced considerably more realistic simulations than any generated by steady-state models. Simulations indicate relative advantages and disadvantages of both types of controllers, conditions under which each performs better, and the importance of pump cycling and controller set points on total energy collection. Results show that the turn-on set point is not always a critical factor in energy collection, since collectors store energy while they warm up and during cycling, and that proportional flow controllers provide improved energy collection only during periods of interrupted or very low insolation when the maximum possible energy collection is relatively low. Although proportional controllers initiate flow at lower insolation levels than on/off controllers, proportional controllers produce lower flow rates and higher average collector temperatures, resulting in slightly lower instantaneous collection efficiencies.
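The two controller types compared above can be sketched as simple control laws: an on/off controller with hysteresis around its set points, and a proportional controller whose flow rate scales with the collector-to-storage temperature difference. Set points, gain, and units below are illustrative assumptions, not values from the study:

```python
def on_off_flow(t_coll, t_store, pump_on, dt_on=8.0, dt_off=2.0, flow=1.0):
    """On/off control with hysteresis: start the pump when the collector runs
    dt_on degrees above storage, stop when the margin falls below dt_off."""
    dt = t_coll - t_store
    pump_on = (dt > dt_off) if pump_on else (dt > dt_on)
    return (flow if pump_on else 0.0), pump_on

def proportional_flow(t_coll, t_store, gain=0.2, max_flow=1.0):
    """Proportional control: flow scales with the temperature difference,
    clamped to the pump's operating range."""
    return min(max_flow, max(0.0, gain * (t_coll - t_store)))
```

At a 3-degree margin the on/off pump (starting from off) stays idle while the proportional controller already moves fluid at a reduced rate, which mirrors the abstract's observation that proportional controllers initiate flow at lower insolation levels but with lower flow rates.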
Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling
Li, Yupeng; Deutsch, Clayton V.
2012-06-15
In geostatistics, most stochastic algorithms for the simulation of categorical variables such as facies or rock types require a conditional probability distribution. The multivariate probability distribution of all the grouped locations, including the unsampled location, permits calculation of the conditional probability directly from its definition. In this article, the iterative proportion fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iterative modification of an initial estimate of the multivariate probability, using lower-order bivariate probabilities as constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. The algorithm can be extended to higher-order marginal probability constraints, as used in multiple-point statistics. The theoretical framework is developed and illustrated with an estimation and simulation example.
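The marginal-fitting step described above can be sketched for a two-dimensional table: alternately rescale rows and columns until both marginals match the imposed targets. The toy probabilities below are illustrative, not from the article:

```python
def ipf_2d(p, row_marg, col_marg, iters=50):
    """Iteratively rescale a joint probability table so its row and column
    marginals match the imposed targets (classic 2-D IPF)."""
    for _ in range(iters):
        for i, row in enumerate(p):            # fit row marginals
            s = sum(row)
            if s > 0:
                p[i] = [x * row_marg[i] / s for x in row]
        for j in range(len(p[0])):             # fit column marginals
            s = sum(row[j] for row in p)
            if s > 0:
                for row in p:
                    row[j] *= col_marg[j] / s
    return p

# Initial multivariate estimate; marginals inferred from, e.g., well profiles.
fitted = ipf_2d([[0.25, 0.25], [0.25, 0.25]],
                row_marg=[0.7, 0.3], col_marg=[0.6, 0.4])
```

The article's higher-dimensional version fits bivariate marginals of a multivariate table, but the mechanics are the same: each sweep rescales the joint distribution toward one set of constraints while preserving as much of the current structure as possible.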
Three multimedia models used at hazardous and radioactive waste sites
1996-01-01
The report provides an approach for evaluating and critically reviewing the capabilities of multimedia models. The study focused on three specific models: MEPAS version 3.0, MMSOILS Version 2.2, and PRESTO-EPA-CPG Version 2.0. The approach to model review advocated in the study is directed to technical staff responsible for identifying, selecting and applying multimedia models for use at sites containing radioactive and hazardous materials. In the report, restrictions associated with the selection and application of multimedia models for sites contaminated with radioactive and mixed wastes are highlighted.
Bowie, Paul; Price, Julie; Hepworth, Neil; Dinwoodie, Mark; McKay, John
2015-01-01
Objectives To analyse a medical protection organisation's database to identify hazards related to general practice systems for ordering laboratory tests, managing test results and communicating test result outcomes to patients. To integrate these data with other published evidence sources to inform design of a systems-based conceptual model of related hazards. Design A retrospective database analysis. Setting General practices in the UK and Ireland. Participants 778 UK and Ireland general practices participating in a medical protection organisation's clinical risk self-assessment (CRSA) programme from January 2008 to December 2014. Main outcome measures Proportion of practices with system risks; categorisation of identified hazards; most frequently occurring hazards; development of a conceptual model of hazards; and potential impacts on health, well-being and organisational performance. Results CRSA visits were undertaken to 778 UK and Ireland general practices of which a range of systems hazards were recorded across the laboratory test ordering and results management systems in 647 practices (83.2%). A total of 45 discrete hazard categories were identified with a mean of 3.6 per practice (SD=1.94). The most frequently occurring hazard was the inadequate process for matching test requests and results received (n=350, 54.1%). Of the 1604 instances where hazards were recorded, the most frequent was at the ‘postanalytical test stage’ (n=702, 43.8%), followed closely by ‘communication outcomes issues’ (n=628, 39.1%). Conclusions Based on arguably the largest data set currently available on the subject matter, our study findings shed new light on the scale and nature of hazards related to test results handling systems, which can inform future efforts to research and improve the design and reliability of these systems. PMID:26614621
Simulation meets reality: Chemical hazard models in real world use
Newsom, D.E.
1992-01-01
In 1989 the US Department of Transportation (DOT), the Federal Emergency Management Agency (FEMA), and the US Environmental Protection Agency (EPA) released a set of models for the analysis of chemical hazards on personal computers. The models, known collectively as ARCHIE (Automated Resource for Chemical Hazard Incident Evaluation), have been distributed free of charge to thousands of emergency planners and analysts in state governments, Local Emergency Planning Committees (LEPCs), and industry. Under DOT and FEMA sponsorship, Argonne National Laboratory (ANL) conducted workshops in 1990 and 1991 to train federal, state, and local government and industry personnel, both end users and other trainers, in the use of the models. As a result of these distribution and training efforts, ARCHIE has received substantial use by state, local, and industrial emergency management personnel.
McCann, M.W. Jr.; Boissonnade, A.C.
1988-05-01
As part of an ongoing program, Lawrence Livermore National Laboratory (LLNL) is directing the Natural Phenomena Hazards Modeling Project (NPHMP) on behalf of the Department of Energy (DOE). A major part of this effort is the development of probabilistic definitions of natural phenomena hazards; seismic, wind, and flood. In this report the first phase of the evaluation of flood hazards at DOE sites is described. Unlike seismic and wind events, floods may not present a significant threat to the operations of all DOE sites. For example, at some sites physical circumstances may exist that effectively preclude the occurrence of flooding. As a result, consideration of flood hazards may not be required as part of the site design basis. In this case it is not necessary to perform a detailed flood hazard study at all DOE sites, such as those conducted for other natural phenomena hazards, seismic and wind. The scope of the preliminary flood hazard analysis is restricted to evaluating the flood hazards that may exist in proximity to a site. The analysis does involve an assessment of the potential encroachment of flooding on-site at individual facility locations. However, the preliminary flood hazard assessment does not consider localized flooding at a site due to precipitation (i.e., local run-off, storm sewer capacity, roof drainage). These issues are reserved for consideration by the DOE site manager. 11 refs., 84 figs., 61 tabs.
Rockfall hazard analysis using LiDAR and spatial modeling
NASA Astrophysics Data System (ADS)
Lan, Hengxing; Martin, C. Derek; Zhou, Chenghu; Lim, Chang Ho
2010-05-01
Rockfalls have been significant geohazards along the Canadian Class 1 Railways (CN Rail and CP Rail) since their construction in the late 1800s. These rockfalls cause damage to infrastructure, interruption of business, and environmental impacts, and their occurrence varies both spatially and temporally. The proactive management of these rockfall hazards requires enabling technologies. This paper discusses a hazard assessment strategy for rockfalls along a section of a Canadian railway using LiDAR and spatial modeling. LiDAR provides accurate topographical information of the source area of rockfalls and along their paths. Spatial modeling was conducted using Rockfall Analyst, a three dimensional extension to GIS, to determine the characteristics of the rockfalls in terms of travel distance, velocity and energy. Historical rockfall records were used to calibrate the physical characteristics of the rockfall processes. The results based on a high-resolution digital elevation model from a LiDAR dataset were compared with those based on a coarse digital elevation model. A comprehensive methodology for rockfall hazard assessment is proposed which takes into account the characteristics of source areas, the physical processes of rockfalls and the spatial attribution of their frequency and energy.
New class of Johnson SB distributions and its associated regression model for rates and proportions.
Lemonte, Artur J; Bazán, Jorge L
2016-07-01
Starting from the Johnson SB distribution pioneered by Johnson, we propose a broad class of distributions with bounded support on the basis of the symmetric family of distributions. The new class of distributions provides a rich source of alternative distributions for analyzing univariate bounded data. A comprehensive account of the mathematical properties of the new family is provided. We briefly discuss estimation of the model parameters of the new class of distributions based on two estimation methods. Additionally, a new regression model is introduced by considering the distribution proposed in this article, which is useful for situations where the response is restricted to the standard unit interval and the regression structure involves regressors and unknown parameters. The regression model allows modeling of both location and dispersion effects. We define two residuals for the proposed regression model to assess departures from model assumptions as well as to detect outlying observations, and discuss influence methods such as local influence and generalized leverage. Finally, an application to real data is presented to show the usefulness of the new regression model. PMID:26659998
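The classical Johnson SB family maps a standard normal variate onto the unit interval through z = γ + δ ln(x/(1 − x)); inverting that transform gives a bounded sampler, the starting point the authors generalize. A sketch (the γ and δ values are illustrative):

```python
import math
import random

def johnson_sb_sample(gamma, delta, rng):
    """Invert z = gamma + delta * log(x / (1 - x)) with Z ~ N(0, 1):
    X = 1 / (1 + exp(-(Z - gamma) / delta)) lies in the open unit interval."""
    z = rng.gauss(0.0, 1.0)
    return 1.0 / (1.0 + math.exp(-(z - gamma) / delta))

rng = random.Random(42)
xs = [johnson_sb_sample(gamma=0.5, delta=1.5, rng=rng) for _ in range(2000)]
```

A positive γ shifts mass below 0.5 and δ controls concentration, which is why the family (and the proposed generalization built on symmetric base distributions) is flexible enough for rates and proportions.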
Disproportionate Proximity to Environmental Health Hazards: Methods, Models, and Measurement
Maantay, Juliana A.; Brender, Jean D.
2011-01-01
We sought to provide a historical overview of methods, models, and data used in the environmental justice (EJ) research literature to measure proximity to environmental hazards and potential exposure to their adverse health effects. We explored how the assessment of disproportionate proximity and exposure has evolved from comparing the prevalence of minority or low-income residents in geographic entities hosting pollution sources and discrete buffer zones to more refined techniques that use continuous distances, pollutant fate-and-transport models, and estimates of health risk from toxic exposure. We also reviewed analytical techniques used to determine the characteristics of people residing in areas potentially exposed to environmental hazards and emerging geostatistical techniques that are more appropriate for EJ analysis than conventional statistical methods. We concluded by providing several recommendations regarding future research and data needs for EJ assessment that would lead to more reliable results and policy solutions. PMID:21836113
Richardson, David B.; Laurier, Dominique; Schubauer-Berigan, Mary K.; Tchetgen, Eric Tchetgen; Cole, Stephen R.
2014-01-01
Workers' smoking histories are not measured in many occupational cohort studies. Here we discuss the use of negative control outcomes to detect and adjust for confounding in analyses that lack information on smoking. We clarify the assumptions necessary to detect confounding by smoking and the additional assumptions necessary to indirectly adjust for such bias. We illustrate these methods using data from 2 studies of radiation and lung cancer: the Colorado Plateau cohort study (1950–2005) of underground uranium miners (in which smoking was measured) and a French cohort study (1950–2004) of nuclear industry workers (in which smoking was unmeasured). A cause-specific relative hazards model is proposed for estimation of indirectly adjusted associations. Among the miners, the proposed method suggests no confounding by smoking of the association between radon and lung cancer—a conclusion supported by adjustment for measured smoking. Among the nuclear workers, the proposed method suggests substantial confounding by smoking of the association between radiation and lung cancer. Indirect adjustment for confounding by smoking resulted in an 18% decrease in the adjusted estimated hazard ratio, yet this cannot be verified because smoking was unmeasured. Assumptions underlying this method are described, and a cause-specific proportional hazards model that allows easy implementation using standard software is presented. PMID:25245043
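The indirect adjustment described above amounts to dividing the crude hazard ratio by a bias factor estimated from the negative control outcome. A back-of-the-envelope sketch; the numbers are invented so the shrinkage echoes the ~18% magnitude reported in the abstract, and are not the study's estimates:

```python
def indirectly_adjusted_hr(crude_hr, negative_control_hr):
    """If the exposure cannot plausibly cause the negative control outcome,
    its apparent association with that outcome (negative_control_hr) is
    attributed to the unmeasured confounder and divided out."""
    return crude_hr / negative_control_hr

# Invented numbers: a crude HR of 1.50 with a negative-control HR of 1.22
# shrinks by roughly 18% after indirect adjustment.
adjusted = indirectly_adjusted_hr(1.50, 1.22)
```

The key assumptions, as the abstract stresses, are that the negative control shares the confounding structure of the outcome of interest and has no causal link to the exposure; neither can be verified when smoking is unmeasured.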
Recent Experiences in Aftershock Hazard Modelling in New Zealand
NASA Astrophysics Data System (ADS)
Gerstenberger, M.; Rhoades, D. A.; McVerry, G.; Christophersen, A.; Bannister, S. C.; Fry, B.; Potter, S.
2014-12-01
The occurrence of several sequences of earthquakes in New Zealand in the last few years has meant that GNS Science has gained significant recent experience in aftershock hazard and forecasting. First was the Canterbury sequence, which began in 2010 and included the destructive Christchurch earthquake of February 2011. This sequence is occurring in what was a moderate-to-low hazard region of the National Seismic Hazard Model (NSHM): the model on which the building design standards are based. With the expectation that the sequence would produce a 50-year hazard estimate in exceedance of the existing building standard, we developed a time-dependent model that combined short-term (STEP & ETAS) and longer-term (EEPAS) clustering with time-independent models. This forecast was combined with the NSHM to produce a forecast of the hazard for the next 50 years. This has been used to revise building design standards for the region and has contributed to planning of the rebuilding of Christchurch in multiple aspects. An important contribution to this model comes from the inclusion of EEPAS, which allows for clustering on the scale of decades. EEPAS is based on three empirical regressions that relate the magnitudes, times of occurrence, and locations of major earthquakes to regional precursory scale increases in the magnitude and rate of occurrence of minor earthquakes. A second important contribution comes from the long-term rate to which seismicity is expected to return in 50 years. With little seismicity in the region in historical times, a controlling factor in the rate is whether or not it is based on a declustered catalog. This epistemic uncertainty in the model was allowed for by using forecasts from both declustered and non-declustered catalogs. With two additional moderate sequences in the capital region of New Zealand in the last year, we have continued to refine our forecasting techniques, including the use of potential scenarios based on the aftershock
Variable selection in subdistribution hazard frailty models with competing risks data
Do Ha, Il; Lee, Minjung; Oh, Seungyoung; Jeong, Jong-Hyeon; Sylvester, Richard; Lee, Youngjo
2014-01-01
The proportional subdistribution hazards model (i.e. Fine-Gray model) has been widely used for analyzing univariate competing risks data. Recently, this model has been extended to clustered competing risks data via frailty. To the best of our knowledge, however, there has been no literature on variable selection methods for such competing risks frailty models. In this paper, we propose a simple but unified procedure via a penalized h-likelihood (HL) for variable selection of fixed effects in a general class of subdistribution hazard frailty models, in which random effects may be shared or correlated. We consider three penalty functions (LASSO, SCAD and HL) in our variable selection procedure. We show that the proposed method can be easily implemented using a slight modification to existing h-likelihood estimation approaches. Numerical studies demonstrate that the proposed procedure using the HL penalty performs well, providing a higher probability of choosing the true model than the LASSO and SCAD methods without losing prediction accuracy. The usefulness of the new method is illustrated using two real data sets from multi-center clinical trials. PMID:25042872
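The LASSO ingredient of such a selection procedure can be illustrated with a generic proximal-gradient (ISTA) loop on a toy penalized least-squares objective. This stands in for the penalized h-likelihood, which the sketch does not implement; the data and tuning values are invented:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of the L1 norm: shrinks coefficients, producing exact zeros."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam=0.1, iters=500):
    """Proximal gradient (ISTA) for 0.5/n * ||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    step = n / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the gradient
    beta = np.zeros(p)
    for _ in range(iters):
        grad = X.T @ (X @ beta - y) / n
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
true_beta = np.array([2.0, 0.0, -1.5, 0.0, 0.0])
y = X @ true_beta + 0.1 * rng.normal(size=200)
beta_hat = lasso_ista(X, y)   # noise coefficients are driven to (near) zero
```

The same shrink-and-threshold mechanism is what lets a penalized likelihood select variables: coefficients that do not earn their penalty are set exactly to zero.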
Simple model relating recombination rates and non-proportional light yield in scintillators
Moses, William W.; Bizarri, Gregory; Singh, Jai; Vasil'ev, Andrey N.; Williams, Richard T.
2008-09-24
We present a phenomenological approach to derive an approximate expression for the local light yield along a track as a function of the rate constants of different kinetic orders of radiative and quenching processes for excitons and electron-hole pairs excited by an incident γ-ray in a scintillating crystal. For excitons, the radiative and quenching processes considered are linear and binary, and for electron-hole pairs a ternary (Auger type) quenching process is also taken into account. The local light yield (Y_L) in photons per MeV is plotted as a function of the deposited energy, -dE/dx (keV/cm), at any point x along the track length. This model formulation achieves a certain simplicity by using two coupled rate equations. We discuss the approximations that are involved. There are a sufficient number of parameters in this model to fit local light yield profiles needed for qualitative comparison with experiment.
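For the exciton channel alone (linear radiative rate plus binary quenching), the rate equation dn/dt = -R·n - K·n² integrates in closed form, which makes the non-proportionality mechanism easy to see. The sketch below uses that closed form with illustrative rate constants, not fitted values from the paper:

```python
from math import log

R = 1.0   # linear (radiative) rate constant, illustrative units
K = 0.5   # binary quenching rate constant, illustrative units

def local_yield(n0):
    """Fraction of excitations emitted as photons for initial density n0.
    Integrating dn/dt = -R*n - K*n**2 gives total emitted photons
    (R/K) * ln(1 + K*n0/R); dividing by n0 gives the yield per excitation."""
    return (R / K) * log(1.0 + K * n0 / R) / n0
```

At low excitation density (low -dE/dx) the yield approaches 1, and it falls as density rises, which is the qualitative signature of non-proportional response.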
Modelling public risk evaluation of natural hazards: a conceptual approach
NASA Astrophysics Data System (ADS)
Plattner, Th.
2005-04-01
In recent years, dealing with natural hazards in Switzerland has shifted away from a hazard-oriented towards a risk-based approach. Decreasing societal acceptance of risk, accompanied by increasing marginal costs of protective measures and decreasing financial resources, causes an optimization problem. Therefore, the new focus lies on the mitigation of the hazard's risk in accordance with economic, ecological and social considerations. This modern approach requires consideration not only of the technological, engineering or scientific aspects of defining the hazard or computing the risk, but also of public concerns about the acceptance of these risks. These aspects of a modern risk approach enable a comprehensive assessment of the (risk) situation and, thus, sound risk management decisions. In Switzerland, however, the competent authorities suffer from a lack of decision criteria, as they do not know what risk level the public is willing to accept. Consequently, the authorities need to know what society thinks about risk. A formalized model that allows at least a crude simulation of public risk evaluation could therefore be a useful tool to support effective and efficient risk mitigation measures. This paper presents a conceptual approach to such an evaluation model using perception-affecting factors (PAF), evaluation criteria (EC) and several factors without any immediate relation to the risk itself, but to the evaluating person. Finally, the decision about the acceptance Acc of a certain risk i is made by comparing the perceived risk Ri,perc with the acceptable risk Ri,acc.
Kendall, W.L.; Hines, J.E.; Nichols, J.D.
2003-01-01
Matrix population models are important tools for research and management of populations. Estimating the parameters of these models is an important step in applying them to real populations. Multistate capture-recapture methods have provided a useful means for estimating survival and parameters of transition between locations or life history states but have mostly relied on the assumption that the state occupied by each detected animal is known with certainty. Nevertheless, in some cases animals can be misclassified. Using multiple capture sessions within each period of interest, we developed a method that adjusts estimates of transition probabilities for bias due to misclassification. We applied this method to 10 years of sighting data for a population of Florida manatees (Trichechus manatus latirostris) in order to estimate the annual probability of transition from nonbreeding to breeding status. Some sighted females were unequivocally classified as breeders because they were clearly accompanied by a first-year calf. The remainder were classified, sometimes erroneously, as nonbreeders because an attendant first-year calf was not observed or was classified as more than one year old. We estimated a conditional breeding probability of 0.31 ± 0.04 (estimate ± 1 SE) when we ignored misclassification bias, and 0.61 ± 0.09 when we accounted for misclassification.
Development of hazard-compatible building fragility and vulnerability models
Karaca, E.; Luco, N.
2008-01-01
We present a methodology for transforming the structural and non-structural fragility functions in HAZUS into a format that is compatible with conventional seismic hazard analysis information. The methodology makes use of the building capacity (or pushover) curves and related building parameters provided in HAZUS. Instead of the capacity spectrum method applied in HAZUS, building response is estimated by inelastic response history analysis of corresponding single-degree-of-freedom systems under a large number of earthquake records. Statistics of the building response are used with the damage state definitions from HAZUS to derive fragility models conditioned on spectral acceleration values. Using the developed fragility models for structural and nonstructural building components, with corresponding damage state loss ratios from HAZUS, we also derive building vulnerability models relating spectral acceleration to repair costs. Whereas in HAZUS the structural and nonstructural damage states are treated as if they are independent, our vulnerability models are derived assuming "complete" nonstructural damage whenever the structural damage state is complete. We show the effects of considering this dependence on the final vulnerability models. The use of spectral acceleration (at selected vibration periods) as the ground motion intensity parameter, coupled with the careful treatment of uncertainty, makes the new fragility and vulnerability models compatible with conventional seismic hazard curves and hence useful for extensions to probabilistic damage and loss assessment.
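The end product of such a derivation is typically a fragility curve of lognormal form in spectral acceleration, P(damage state reached | Sa) = Φ(ln(Sa/θ)/β). In the methodology above, the median capacity θ and dispersion β would come from the response-history statistics; in this sketch they are free, illustrative parameters:

```python
from math import erf, log, sqrt

def lognormal_fragility(sa, theta, beta):
    """Probability of reaching a damage state given spectral acceleration sa,
    using the standard lognormal form Phi(ln(sa/theta)/beta).
    theta: median capacity (same units as sa); beta: logarithmic dispersion."""
    z = log(sa / theta) / beta
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF via erf

# Example: a damage state with median capacity 0.6 g and dispersion 0.5.
p_at_design = lognormal_fragility(0.4, theta=0.6, beta=0.5)
```

Because the curve is conditioned on spectral acceleration, it can be convolved directly with a conventional seismic hazard curve, which is the compatibility the abstract emphasizes.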
Final Report: First Principles Modeling of Mechanisms Underlying Scintillator Non-Proportionality
Aberg, Daniel; Sadigh, Babak; Zhou, Fei
2015-01-01
This final report presents work carried out on the project “First Principles Modeling of Mechanisms Underlying Scintillator Non-Proportionality” at Lawrence Livermore National Laboratory during 2013-2015. The scope of the work was to further the physical understanding of the microscopic mechanisms behind scintillator nonproportionality, which effectively limits the achievable detector resolution. The project has thereby provided crucial quantitative data on these processes as input to large-scale simulation codes. In particular, the project was divided into three tasks: (i) quantum mechanical rates of non-radiative quenching, (ii) the thermodynamics of point defects and dopants, and (iii) formation and migration of self-trapped polarons. The progress and results of each of these subtasks are detailed.
Nikjoo, H; Uehara, S; Pinsky, L; Cucinotta, Francis A
2007-01-01
Space activities in earth orbit or in deep space pose challenges to the estimation of risk factors for both astronauts and instrumentation. In space, risk from exposure to ionising radiation is one of the main factors limiting manned space exploration. Therefore, characterising the radiation environment in terms of the types and quantity of radiation that the astronauts are exposed to is of critical importance in planning space missions. In this paper, calculations of the response of the tissue-equivalent proportional counter (TEPC) to protons and carbon ions are reported. The calculations have been carried out using Monte Carlo track structure simulation codes for walled and wall-less TEPCs. The model simulates nonhomogeneous tracks in the sensitive volume of the counter and accounts for direct and indirect events. Calculated frequency- and dose-averaged lineal energies for 0.3 MeV-1 GeV protons are presented and compared with the experimental data. Calculations of quality factors (QF) were made using individual track histories. Additionally, calculations of absolute frequencies of energy depositions in cylindrical targets, 100 nm height by 100 nm diameter, randomly positioned and oriented in water irradiated with 1 Gy of protons of energy 0.3-100 MeV, are presented. The distributions show the clustering properties of protons of different energies in a 100 nm by 100 nm cylinder. PMID:17513858
Flood hazard maps from SAR data and global hydrodynamic models
NASA Astrophysics Data System (ADS)
Giustarini, Laura; Chini, Marci; Hostache, Renaud; Matgen, Patrick; Pappenberger, Florian; Bally, Phillippe
2015-04-01
With flood consequences likely to amplify because of growing population and ongoing accumulation of assets in flood-prone areas, global flood hazard and risk maps are greatly needed for improving flood preparedness at large scale. At the same time, with the rapidly growing archives of SAR images of floods, there is a high potential of making use of these images for global and regional flood management. In this framework, an original method is presented to integrate global flood inundation modeling and microwave remote sensing. It takes advantage of the combination of the time and space continuity of a global inundation model with the high spatial resolution of satellite observations. The availability of model simulations over a long time period offers the opportunity to estimate flood non-exceedance probabilities in a robust way. The probabilities can later be attributed to historical satellite observations. SAR-derived flood extent maps with their associated non-exceedance probabilities are then combined to generate flood hazard maps with a spatial resolution equal to that of the satellite images, which is most of the time higher than that of a global inundation model. The method can be applied to any area of interest in the world, provided that a sufficient number of relevant remote sensing images are available. We applied the method on the Severn River (UK) and on the Zambezi River (Mozambique), where large archives of Envisat flood images can be exploited. The global ECMWF flood inundation model is considered for computing the statistics of extreme events. A comparison with flood hazard maps estimated with in situ measured discharge is carried out. An additional analysis has been performed on the Severn River, using high resolution SAR data from the COSMO-SkyMed SAR constellation, acquired for a single flood event (one flood map per day between 27/11/2012 and 4/12/2012). The results showed that it is vital to observe the peak of the flood. However, a single
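The core combination step, assigning each SAR-derived flood extent a model-derived probability and merging them per pixel, can be sketched as follows. This is a simplified reading of the method (one plausible merge rule: a pixel's hazard is the largest exceedance probability among the images that show it flooded); the array names are invented:

```python
import numpy as np

def hazard_map(flood_masks, exceedance_probs):
    """flood_masks: list of 2-D boolean arrays (True = flooded in that SAR image).
    exceedance_probs: per-image exceedance probability from the hydrodynamic model.
    Returns a per-pixel hazard map: the largest exceedance probability among
    the images in which the pixel appears flooded (0 where never flooded)."""
    h = np.zeros(flood_masks[0].shape)
    for mask, p in zip(flood_masks, exceedance_probs):
        h = np.where(mask, np.maximum(h, p), h)
    return h

# Two toy 2x2 flood maps: a frequent small flood and a rarer larger one.
masks = [np.array([[True, False], [False, False]]),
         np.array([[True, True], [False, False]])]
probs = [0.5, 0.1]
h = hazard_map(masks, probs)
```

The output inherits the spatial resolution of the SAR masks, not of the global model, which is the advantage the abstract highlights.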
Statistical modeling of ground motion relations for seismic hazard analysis
NASA Astrophysics Data System (ADS)
Raschke, Mathias
2013-10-01
We introduce a new approach for ground motion relations (GMR) in probabilistic seismic hazard analysis (PSHA), influenced by the extreme value theory of mathematical statistics. Therein, we understand a GMR as a random function. We derive mathematically the principle of area equivalence, wherein two alternative GMRs have an equivalent influence on the hazard if these GMRs have equivalent area functions. This includes local biases. An interpretation of the difference between these GMRs (an actual and a modeled one) as a random component leads to a general overestimation of residual variance and hazard. Beside this, we discuss important aspects of classical approaches and discover discrepancies with the state of the art of stochastics and statistics (model selection and significance, tests of distribution assumptions, extreme value statistics). We criticize especially the assumption of log-normally distributed residuals of maxima such as the peak ground acceleration (PGA). The natural distribution of its individual random component (equivalent to exp(ε0) of Joyner and Boore, Bull Seism Soc Am 83(2):469-487, 1993) is the generalized extreme value. We show by numerical experiments that the actual distribution can be hidden and that a wrong distribution assumption can influence the PSHA as negatively as does neglecting area equivalence. Finally, we suggest an estimation concept for GMRs of PSHA with a regression-free variance estimation of the individual random component. We demonstrate the advantages of event-specific GMRs by analyzing data sets from the PEER strong motion database and estimate event-specific GMRs. Therein, the majority of the best models are based on an anisotropic point source approach. The residual variance of logarithmized PGA is significantly smaller than in previous models. We validate the estimations for the event with the largest sample by empirical area functions, which indicate the appropriate modeling of the GMR by an anisotropic
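The paper's central statistical point, that maxima such as PGA follow a generalized extreme value law rather than a lognormal one, can be illustrated with block maxima and a method-of-moments Gumbel fit (the GEV with shape 0). This is a generic textbook sketch, not the authors' estimation concept:

```python
import numpy as np

rng = np.random.default_rng(42)

# Block maxima of many i.i.d. contributions: the classical extreme-value setting.
samples = rng.exponential(scale=1.0, size=(10000, 50)).max(axis=1)

# Method-of-moments Gumbel fit: scale from the standard deviation,
# location from the mean minus Euler-Mascheroni constant times the scale.
euler_gamma = 0.5772156649
beta = np.std(samples) * np.sqrt(6.0) / np.pi
mu = np.mean(samples) - euler_gamma * beta

def gumbel_cdf(x, mu, beta):
    """Gumbel (GEV shape 0) cumulative distribution function."""
    return np.exp(-np.exp(-(x - mu) / beta))
```

For maxima of 50 unit-exponential draws, theory gives approximately a Gumbel with location ln(50) and unit scale, and the moment fit recovers both, whereas a lognormal fit would misrepresent the tail that drives the hazard.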
Conveying Lava Flow Hazards Through Interactive Computer Models
NASA Astrophysics Data System (ADS)
Thomas, D.; Edwards, H. K.; Harnish, E. P.
2007-12-01
As part of an Information Sciences senior class project, a software package implementing an interactive version of the FLOWGO model was developed for the Island of Hawaii. The software is intended for use in an ongoing public outreach and hazards awareness program that educates the public about lava flow hazards on the island. The design parameters for the model allow an unsophisticated user to initiate a lava flow anywhere on the island and allow it to flow down-slope to the shoreline while displaying a timer to show the rate of advance of the flow. The user is also able to modify a range of input parameters, including eruption rate, the temperature of the lava at the vent, and the crystal fraction present in the lava at the source. The flow trajectories are computed using a 30 m digital elevation model for the island, and the rate of advance of the flow is estimated using the average slope angle and the computed viscosity of the lava as it cools in either a channel (high heat loss) or lava tube (low heat loss). Even though the FLOWGO model is not intended to, and cannot, accurately predict the rate of advance of a tube-fed or channel-fed flow, the relative rates of flow advance for steep or flat-lying terrain convey critically important hazard information to the public: communities located on the steeply sloping western flanks of Mauna Loa may have no more than a few hours to evacuate in the face of a threatened flow from Mauna Loa's southwest rift, whereas communities on the more gently sloping eastern flanks of Mauna Loa and Kilauea may have weeks to months to prepare for evacuation. Further, the model can also show the effects of loss of critical infrastructure, with consequent impacts on access into and out of communities, loss of electrical supply, and communications as a result of lava flow emplacement. The interactive model has been well received in an outreach setting and typically generates greater involvement by the participants than has been the case with static maps
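The trajectory computation rests on steepest descent over a DEM grid; a minimal 8-neighbour sketch of that step is below (no viscosity, cooling, or timing, which FLOWGO layers on top; the toy DEM slopes uniformly toward row 0):

```python
import numpy as np

def steepest_descent_path(dem, start, max_steps=10000):
    """Follow the steepest downhill 8-neighbour direction on a DEM grid,
    stopping at a local minimum (a pit, or in practice the shoreline)."""
    path = [start]
    r, c = start
    rows, cols = dem.shape
    for _ in range(max_steps):
        best, best_drop = None, 0.0
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    drop = dem[r, c] - dem[nr, nc]
                    if drop > best_drop:
                        best, best_drop = (nr, nc), drop
        if best is None:
            break  # no lower neighbour: flow stops
        r, c = best
        path.append(best)
    return path

dem = np.tile(np.arange(5)[:, None], (1, 5))  # height increases with row index
path = steepest_descent_path(dem, (4, 2))     # descends to row 0
```

A real implementation would also convert path length and slope into advance time via the cooling-dependent viscosity, which is the FLOWGO part this sketch omits.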
A multimodal location and routing model for hazardous materials transportation.
Xie, Yuanchang; Lu, Wei; Wang, Wen; Quadrifoglio, Luca
2012-08-15
The recent US Commodity Flow Survey data suggest that transporting hazardous materials (HAZMAT) often involves multiple modes, especially for long-distance transportation. However, not much research has been conducted on HAZMAT location and routing on a multimodal transportation network. Most existing HAZMAT location and routing studies focus exclusively on single mode (either highways or railways). Motivated by the lack of research on multimodal HAZMAT location and routing and the fact that there is an increasing demand for it, this research proposes a multimodal HAZMAT model that simultaneously optimizes the locations of transfer yards and transportation routes. The developed model is applied to two case studies of different network sizes to demonstrate its applicability. The results are analyzed and suggestions for future research are provided. PMID:22633882
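The routing side of such a model can be sketched as a shortest path on a mode-expanded network: nodes are (location, mode) pairs, and a candidate transfer yard contributes an arc that switches mode at a handling cost. The network, costs, and yard below are invented for illustration; a real model would optimize yard locations jointly, and costs would encode risk as well as distance:

```python
import heapq

def dijkstra(graph, source, target):
    """Shortest path cost on a directed graph: dict node -> [(neighbor, cost)]."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    done = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        if u == target:
            return d
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

# Toy multimodal network: truck A->B; at yard B, either stay on truck to C
# or pay a 1.5 transfer cost to switch to rail and ride rail B->C.
graph = {
    ("A", "truck"): [(("B", "truck"), 4.0)],
    ("B", "truck"): [(("C", "truck"), 10.0), (("B", "rail"), 1.5)],
    ("B", "rail"):  [(("C", "rail"), 5.0)],
}
cost_via_rail = dijkstra(graph, ("A", "truck"), ("C", "rail"))    # 4 + 1.5 + 5
cost_truck_only = dijkstra(graph, ("A", "truck"), ("C", "truck")) # 4 + 10
```

Here the rail option wins despite the transfer cost, the kind of trade-off a joint location-routing model evaluates across all candidate yards.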
Natural hazard resilient cities: the case of a SSMS model
NASA Astrophysics Data System (ADS)
Santos-Reyes, Jaime
2010-05-01
Modern society is characterised by complexity; i.e. technical systems are highly complex and highly interdependent. The nature of the interdependence amongst these systems has become an issue on increasing importance in recent years. Moreover, these systems face a number threats ranging from technical, human and natural. For example, natural hazards (earthquakes, floods, heavy snow, etc) can cause significant problems and disruption to normal life. On the other hand, modern society depends on highly interdependent infrastructures such as transport (rail, road, air, etc), telecommunications, power and water supply, etc. Furthermore, in many cases there is no single owner, operator, and regulator of such systems. Any disruption in any of the interconnected systems may cause a domino-effect. The domino-effect may occur at local, regional or at national level; or, in some cases; it may be extended across international borders. Given the above, it may be argued that society is less resilient to such events and therefore there is a need to have a system in place able to maintain risk within an acceptable range, whatever that might be. This paper presents the modelling process of the interdependences amongst "critical infrastructures" (i.e. transport, telecommunications, power & water supply, etc) for a typical city. The approach has been the application of the developed Systemic Safety Management System (SSMS) model. The main conclusion is that the SSMS model has the potentiality to be used to model interdependencies amongst the so called "critical infrastructures". It is hoped that the approach presented in this paper may help to gain a better understanding of the interdependence amongst these systems and may contribute to a resilient society when disrupted by natural hazards.
Lava flow hazard at Nyiragongo volcano, D.R.C. 1. Model calibration and hazard mapping
NASA Astrophysics Data System (ADS)
Favalli, Massimiliano; Chirico, Giuseppe D.; Papale, Paolo; Pareschi, Maria Teresa; Boschi, Enzo
2009-05-01
The 2002 eruption of Nyiragongo volcano constitutes the most outstanding case ever of lava flows invading a large town. It also represents one of the very rare cases of direct casualties from lava flows, which had high velocities of up to tens of kilometers per hour. As in the 1977 eruption, which is the only other eccentric eruption of the volcano in more than 100 years, lava flows were emitted from several vents along a N-S system of fractures extending for more than 10 km, from which they propagated mostly towards Lake Kivu and Goma, a town of about 500,000 inhabitants. We assessed the lava flow hazard on the entire volcano and in the towns of Goma (D.R.C.) and Gisenyi (Rwanda) through numerical simulations of probable lava flow paths. Lava flow paths are computed based on the steepest descent principle, modified by stochastically perturbing the topography to take into account the capability of lava flows to override topographic obstacles, fill topographic depressions, and spread over the topography. Code calibration and the definition of the expected lava flow length and vent opening probability distributions were done based on the 1977 and 2002 eruptions. The final lava flow hazard map shows that the eastern sector of Goma devastated in 2002 represents the area of highest hazard on the flanks of the volcano. The second highest hazard sector in Goma is the area of propagation of the western lava flow in 2002. The town of Gisenyi is subject to moderate to high hazard due to its proximity to the alignment of fractures active in 1977 and 2002. In a companion paper (Chirico et al., Bull Volcanol, in this issue, 2008) we use numerical simulations to investigate the possibility of reducing lava flow hazard through the construction of protective barriers, and formulate a proposal for the future development of the town of Goma.
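The stochastic-perturbation idea can be sketched as a Monte Carlo loop: perturb the DEM with small random noise (letting simulated flows override minor obstacles and fill depressions), trace steepest descent from the vent, and accumulate per-cell invasion frequencies. This is a toy version of the approach, not the calibrated code; the DEM, vent, and noise amplitude are invented:

```python
import numpy as np

def flow_probability(dem, vent, n_runs=200, dh=0.5, seed=1):
    """Monte Carlo flow-path probability map: for each run, add uniform noise
    of amplitude dh to the DEM, follow the lowest 8-neighbour downhill, and
    count how often each cell is invaded. Returns invasion frequency in [0, 1]."""
    rng = np.random.default_rng(seed)
    rows, cols = dem.shape
    hits = np.zeros(dem.shape)
    for _ in range(n_runs):
        z = dem + rng.uniform(-dh, dh, dem.shape)
        r, c = vent
        for _ in range(rows * cols):  # z is fixed per run, so no cycles
            hits[r, c] += 1
            nbrs = [(r + i, c + j) for i in (-1, 0, 1) for j in (-1, 0, 1)
                    if (i or j) and 0 <= r + i < rows and 0 <= c + j < cols]
            nr, nc = min(nbrs, key=lambda p: z[p])
            if z[nr, nc] >= z[r, c]:
                break  # local minimum on the perturbed surface
            r, c = nr, nc
    return hits / n_runs

dem = np.tile(np.arange(5.0)[:, None], (1, 5))  # uniform slope toward row 0
prob = flow_probability(dem, vent=(4, 2), n_runs=50)
```

Cells crossed by nearly every realization get probability near 1; the spread of lower-probability cells maps the flow's ability to jump small topographic barriers, which is what the hazard map aggregates over the vent-opening distribution.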
Preliminary deformation model for National Seismic Hazard map of Indonesia
Meilano, Irwan; Gunawan, Endra; Sarsito, Dina; Prijatna, Kosasih; Abidin, Hasanuddin Z.; Susilo; Efendi, Joni
2015-04-24
A preliminary deformation model for Indonesia's National Seismic Hazard (NSH) map is constructed from block rotation and strain accumulation in an elastic half-space. Deformation due to rigid body motion is estimated by rotating six tectonic blocks in Indonesia. The interseismic deformation due to subduction is estimated by assuming coupling on the subduction interface, while deformation at active faults is calculated by assuming each fault segment slips beneath a locking depth or in combination with creep in a shallower part. This research shows that rigid body motion dominates the deformation pattern with magnitudes of more than 15 mm/year, except in narrow areas near subduction zones and active faults, where significant deformation reaches 25 mm/year.
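The rigid-body component of such a model reduces to v = ω × r about an Euler pole. A minimal sketch is below; the pole position and rotation rate are placeholders, not parameters of the Indonesian block model:

```python
import numpy as np

R_EARTH = 6371e3  # mean Earth radius, m

def block_velocity(lat, lon, pole_lat, pole_lon, omega_deg_per_myr):
    """Surface velocity (east, north) in mm/yr at (lat, lon) for a rigid block
    rotating about an Euler pole at (pole_lat, pole_lon) at omega_deg_per_myr."""
    def unit(latd, lond):
        la, lo = np.radians([latd, lond])
        return np.array([np.cos(la) * np.cos(lo),
                         np.cos(la) * np.sin(lo),
                         np.sin(la)])
    omega = np.radians(omega_deg_per_myr) / 1e6 * unit(pole_lat, pole_lon)  # rad/yr
    r = R_EARTH * unit(lat, lon)
    v = np.cross(omega, r)  # m/yr in Earth-centered coordinates
    la, lo = np.radians([lat, lon])
    east = np.array([-np.sin(lo), np.cos(lo), 0.0])
    north = np.array([-np.sin(la) * np.cos(lo), -np.sin(la) * np.sin(lo), np.cos(la)])
    return 1000.0 * np.array([v @ east, v @ north])  # mm/yr

# Sanity case: pole at the north pole, site on the equator, 1 deg/Myr
# gives a purely eastward velocity of about 111 mm/yr.
v = block_velocity(0.0, 0.0, 90.0, 0.0, 1.0)
```

Summing such per-block fields with the elastic (locked-fault and subduction-coupling) contributions yields the full interseismic velocity field the abstract describes.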
Veas, Alejandro; Gilar, Raquel; Miñano, Pablo; Castejón, Juan-Luis
2016-01-01
There are very few studies in Spain that treat underachievement rigorously, and those that do are typically related to gifted students. The present study examined the proportion of underachieving students using the Rasch measurement model. A sample of 643 first-year high school students (mean age = 12.09; SD = 0.47) from 8 schools in the province of Alicante (Spain) completed the Battery of Differential and General Skills (Badyg), and these students' grade point averages (GPAs) were provided by their teachers. Dichotomous and partial-credit Rasch models were fitted. After adjusting the measurement instruments, the individual underachievement index identified 181 underachieving students, or 28.14% of the total sample, across ability levels. This study confirms that the Rasch measurement model can accurately assess the construct validity of both the intelligence test and the academic grades for the identification of underachieving students. Furthermore, the present study constitutes a pioneering framework for estimating the prevalence of underachievement in Spain. PMID:26973586
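The dichotomous Rasch model at the core of this analysis has a one-line form: the probability of a correct response depends only on the difference between person ability θ and item difficulty b. A minimal sketch (the estimation of θ and b from response data, and the underachievement index built on them, are not implemented here):

```python
from math import exp

def rasch_prob(theta, b):
    """Dichotomous Rasch model: probability that a person with ability theta
    answers an item of difficulty b correctly,
    P = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + exp(-(theta - b)))
```

When ability equals difficulty the probability is exactly 0.5; fitting the model to the test and to the grades puts both on comparable latent scales, which is what makes an individual ability-versus-achievement discrepancy index well defined.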
Some Proposed Modifications to the 1996 California Probabilistic Hazard Model
NASA Astrophysics Data System (ADS)
Cao, T.; Bryant, W. A.; Rowshandel, B.; Toppozada, T.; Reichle, M. S.; Petersen, M. D.; Frankel, A. D.
2001-12-01
The California Department of Conservation, Division of Mines and Geology, and the U.S. Geological Survey are working on the revision of the 1996 California Probabilistic Hazard Model. Since the release of this hazard model, new seismological and geological studies and observations in this area have provided the basis for the revision. Important considerations of model modifications include the following: 1. using a new bilinear fault area-magnitude relation to replace the Wells and Coppersmith (1994) relation for M greater than and equal to 7.0; 2. using the Gaussian function to replace the Dirac delta function for characteristic magnitude; 3. updating the earthquake catalog with the new M greater than and equal to 5.5 catalog from 1800 to 1999 by Toppozada et al. (2000) and the Berkeley and Caltech catalogs for 1996-2001; 4. balancing the moment release for some major A-type faults; 5. adding the Abrahamson and Silva attenuation relation with a new hanging-wall term; 6. considering ratios between characteristic and Gutenberg-Richter magnitude-frequency distributions other than 50 percent and 50 percent; 7. using a Monte Carlo method to sample the logic tree to produce an uncertainty map of the coefficient of variation (COV); 8. separating background seismicity in the vicinity of faults from other areas for different smoothing processes or no smoothing at all, especially for the creeping section of the San Andreas fault and the Brawley seismic zone; 9. using near-fault variability of attenuation relations to mimic directivity; 10. modifying slip rates for the Concord-Green Valley, Sierra Madre, and Raymond faults, and adding or modifying blind thrust faults mainly in the Los Angeles Basin. These possible changes were selected with input received during several workshops that included participation of geologists and seismologists familiar with the area of concern. With the above revisions and other changes, we expect that the new model should not differ greatly from the
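Item 7, Monte Carlo sampling of the logic tree to map hazard uncertainty as a coefficient of variation, can be sketched generically. The branch values, weights, and toy recurrence formula below are invented for illustration, not values from the California model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy logic tree: two independent branch sets with epistemic weights.
b_values, b_weights = np.array([0.8, 0.9, 1.0]), np.array([0.2, 0.5, 0.3])
rate_values, rate_weights = np.array([0.01, 0.02]), np.array([0.5, 0.5])

def exceedance_rate(b, rate, m=6.5, m_min=5.0):
    """Toy annual rate of exceeding magnitude m from a G-R style recurrence."""
    return rate * 10.0 ** (-b * (m - m_min))

# Sample one branch per set for each realization; the spread across
# realizations measures epistemic uncertainty at this site/magnitude.
draws = np.array([exceedance_rate(rng.choice(b_values, p=b_weights),
                                  rng.choice(rate_values, p=rate_weights))
                  for _ in range(5000)])
cov = draws.std() / draws.mean()  # coefficient of variation of the hazard
```

Repeating this per grid cell produces exactly the kind of COV map the revision describes: a spatial picture of where the logic-tree branches disagree most.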
Brown, Nathanael J. K.; Gearhart, Jared Lee; Jones, Dean A.; Nozick, Linda Karen; Prince, Michael
2013-09-01
Currently, much of protection planning is conducted separately for each infrastructure and hazard. Limited funding requires a balance of expenditures between terrorism and natural hazards based on potential impacts. This report documents the results of a Laboratory Directed Research & Development (LDRD) project that created a modeling framework for investment planning in interdependent infrastructures focused on multiple hazards, including terrorism. To develop this framework, three modeling elements were integrated: natural hazards, terrorism, and interdependent infrastructures. For natural hazards, a methodology was created for specifying events consistent with regional hazards. For terrorism, we modeled the terrorist's actions based on assumptions regarding their knowledge, goals, and target identification strategy. For infrastructures, we focused on predicting post-event performance due to specific terrorist attacks and natural hazard events, tempered by appropriate infrastructure investments. We demonstrate the utility of this framework with various examples, including protection of electric power, roadway, and hospital networks.
Hydraulic modeling for lahar hazards at cascades volcanoes
Costa, J.E.
1997-01-01
The National Weather Service flood routing model DAMBRK is able to closely replicate field-documented stages of historic and prehistoric lahars from Mt. Rainier, Washington, and Mt. Hood, Oregon. Modeled times-of-travel of flow waves are generally consistent with documented lahar travel times from other volcanoes around the world. The model adequately replicates a range of lahars and debris flows, including the 230 million m3 Electron lahar from Mt. Rainier, as well as a 10 m3 debris flow generated in a large outdoor experimental flume. The model is used to simulate a hypothetical lahar with a volume of 50 million m3 down the East Fork Hood River from Mt. Hood, Oregon. Although a flow such as this is thought to be possible in the Hood River valley, no field evidence exists on which to base a hazards assessment. DAMBRK seems likely to be usable in many volcanic settings to estimate discharge, velocity, and inundation areas of lahars when input hydrographs and energy-loss coefficients can be reasonably estimated.
MEASUREMENTS AND MODELS FOR HAZARDOUS CHEMICAL AND MIXED WASTES
Mixed hazardous and low-level radioactive wastes are in storage at DOE sites around the United States, awaiting treatment and disposal. These hazardous chemical wastes contain many components in multiple phases, presenting very difficult handling and treatment problems. These was...
Landslide-Generated Tsunami Model for Quick Hazard Assessment
NASA Astrophysics Data System (ADS)
Franz, M.; Rudaz, B.; Locat, J.; Jaboyedoff, M.; Podladchikov, Y.
2015-12-01
Alpine regions are likely to be at risk from landslide-induced tsunamis, because of the proximity between lakes and potential instabilities and because the population is concentrated in valleys and on lake shores. In particular, dam lakes are often surrounded by steep slopes and frequently affect the stability of their banks. In order to assess this phenomenon comprehensively, together with the induced risks, we have developed a 2.5D numerical model which simulates the propagation of the landslide, the generation and propagation of the wave, and finally the spread on the shores or the associated downstream flow. The computation proceeds in three steps. First, the geometry of the sliding mass is constructed using the Sloping Local Base Level (SLBL) concept. Second, the propagation of this volume is computed using a model based on viscous flow equations. Finally, the wave generation and its propagation are simulated using the shallow water equations stabilized by the Lax-Friedrichs scheme. The transition between wet and dry bed is handled by combining the two latter sets of equations. The proper behavior of our model is demonstrated by (1) numerical tests from Toro (2001), and (2) comparison with a real event where the horizontal run-up distance is known (Nicolet landslide, Quebec, Canada). The model is of particular interest due to its ability to quickly build the 2.5D geometric model of the landslide and run the tsunami simulation and, consequently, the hazard assessment.
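The wave-propagation step can be illustrated with a one-dimensional Lax-Friedrichs update of the shallow water equations in conserved variables (h, hu). This is a minimal dam-break sketch with invented geometry, not the 2.5D model with wet/dry transitions:

```python
import numpy as np

g = 9.81  # gravity, m/s^2

def lax_friedrichs_step(h, hu, dx, dt):
    """One Lax-Friedrichs update of the 1-D shallow water equations.
    Fluxes: F = (hu, hu^2/h + g*h^2/2); simple copy (outflow) boundaries."""
    f1 = hu
    f2 = hu ** 2 / np.maximum(h, 1e-8) + 0.5 * g * h ** 2
    h_new, hu_new = h.copy(), hu.copy()
    h_new[1:-1] = 0.5 * (h[2:] + h[:-2]) - dt / (2 * dx) * (f1[2:] - f1[:-2])
    hu_new[1:-1] = 0.5 * (hu[2:] + hu[:-2]) - dt / (2 * dx) * (f2[2:] - f2[:-2])
    h_new[0], h_new[-1] = h_new[1], h_new[-2]
    hu_new[0], hu_new[-1] = hu_new[1], hu_new[-2]
    return h_new, hu_new

# Dam-break initial condition: still water with a step in depth.
x = np.linspace(0.0, 100.0, 201)
h = np.where(x < 50.0, 2.0, 1.0)
hu = np.zeros_like(x)
dx = x[1] - x[0]
dt = 0.4 * dx / np.sqrt(g * 2.0)  # CFL-limited time step
for _ in range(50):
    h, hu = lax_friedrichs_step(h, hu, dx, dt)
```

The scheme's artificial diffusion is what "stabilizes" the solution, at the cost of smearing the wave front; the depth stays bounded between the two initial levels and mass is conserved while the waves remain inside the domain.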
Research collaboration, hazard modeling and dissemination in volcanology with Vhub
NASA Astrophysics Data System (ADS)
Palma Lizana, J. L.; Valentine, G. A.
2011-12-01
Vhub (online at vhub.org) is a cyberinfrastructure for collaboration in volcanology research, education, and outreach. One of the core objectives of this project is to accelerate the transfer of research tools to organizations and stakeholders charged with volcano hazard and risk mitigation (such as observatories). Vhub offers a clearinghouse for computational models of volcanic processes and data analysis, documentation of those models, and capabilities for online collaborative groups focused on issues such as code development, configuration management, benchmarking, and validation. A subset of simulations is already available for online execution, eliminating the need to download and compile locally. In addition, Vhub is a platform for sharing presentations and other educational material in a variety of media formats, which are useful in teaching university-level volcanology. Vhub also has wikis, blogs and group functions around specific topics to encourage collaboration and discussion. In this presentation we provide examples of Vhub's capabilities, including: (1) tephra dispersion and block-and-ash flow models; (2) shared educational materials; (3) online collaborative environments for different types of research, including field-based studies and plume dispersal modeling; (4) workshops. Future goals include implementation of middleware to allow access to data and databases that are stored and maintained at various institutions around the world. All of these capabilities can be exercised with a user-defined level of privacy, ranging from completely private (only shared and visible to specified people) to completely public. The volcanological community is encouraged to use the resources of vhub and also to contribute models, datasets, and other items that authors would like to disseminate. The project is funded by the US National Science Foundation and includes a core development team at University at Buffalo, Michigan Technological University, and University
Wang, Junsong; Niebur, Ernst; Hu, Jinyu; Li, Xiaoli
2016-01-01
Closed-loop control is a promising deep brain stimulation (DBS) strategy that could be used to suppress high-amplitude epileptic activity. However, there are currently no analytical approaches to determine the stimulation parameters for effective and safe treatment protocols. Proportional-integral (PI) control is the most extensively used closed-loop control scheme in the field of control engineering because of its simple implementation and perfect performance. In this study, we took Jansen's neural mass model (NMM) as a test bed to develop a PI-type closed-loop controller for suppressing epileptic activity. A graphical stability analysis method was employed to determine the stabilizing region of the PI controller in the control parameter space, which provided a theoretical guideline for the choice of the PI control parameters. Furthermore, we established the relationship between the parameters of the PI controller and the parameters of the NMM in the form of a stabilizing region, which provided insights into the mechanisms that may suppress epileptic activity in the NMM. The simulation results demonstrated the validity and effectiveness of the proposed closed-loop PI control scheme. PMID:27273563
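The closed-loop PI scheme the abstract describes can be illustrated with a minimal sketch. The plant below is a toy first-order unstable system, not Jansen's neural mass model, and the gains are illustrative values, not ones derived from the paper's stabilizing-region analysis:

```python
# Minimal discrete PI feedback loop (illustrative sketch only; the paper
# applies PI control to Jansen's NMM, a far richer model).
# Toy plant: first-order unstable system x' = a*x + u, with a > 0.
def simulate_pi(kp, ki, a=0.5, dt=0.01, steps=2000, x0=1.0, target=0.0):
    """Drive state x toward `target` with a PI control signal."""
    x, integral = x0, 0.0
    for _ in range(steps):
        error = target - x
        integral += error * dt
        u = kp * error + ki * integral   # PI control law
        x += (a * x + u) * dt            # Euler step of the plant
    return x

# With no control the state diverges; with PI gains inside the
# stabilizing region it settles near the target.
uncontrolled = simulate_pi(kp=0.0, ki=0.0)
controlled = simulate_pi(kp=5.0, ki=2.0)
```

The choice of (kp, ki) matters: the paper's graphical stability analysis is precisely a way to find the set of gain pairs for which the closed loop, unlike the uncontrolled plant, is stable.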
NASA Astrophysics Data System (ADS)
Wang, Junsong; Niebur, Ernst; Hu, Jinyu; Li, Xiaoli
2016-06-01
Closed-loop control is a promising deep brain stimulation (DBS) strategy that could be used to suppress high-amplitude epileptic activity. However, there are currently no analytical approaches to determine the stimulation parameters for effective and safe treatment protocols. Proportional-integral (PI) control is the most extensively used closed-loop control scheme in the field of control engineering because of its simple implementation and perfect performance. In this study, we took Jansen’s neural mass model (NMM) as a test bed to develop a PI-type closed-loop controller for suppressing epileptic activity. A graphical stability analysis method was employed to determine the stabilizing region of the PI controller in the control parameter space, which provided a theoretical guideline for the choice of the PI control parameters. Furthermore, we established the relationship between the parameters of the PI controller and the parameters of the NMM in the form of a stabilizing region, which provided insights into the mechanisms that may suppress epileptic activity in the NMM. The simulation results demonstrated the validity and effectiveness of the proposed closed-loop PI control scheme.
Methodology Using MELCOR Code to Model Proposed Hazard Scenario
Gavin Hawkley
2010-07-01
This study demonstrates a methodology for using the MELCOR code to model a proposed hazard scenario within a building containing radioactive powder, and the subsequent evaluation of a leak path factor (LPF, the fraction of respirable material that escapes the facility into the outside environment) implicit in the scenario. The LPF evaluation analyzes the basis and applicability of an assumed standard multiplication of 0.5 × 0.5 (in which 0.5 represents the fraction of material assumed to leave one area and enter another) for calculating an LPF value. The outside release depends upon the ventilation/filtration system, both filtered and unfiltered, and upon other pathways from the building, such as doorways (both open and closed). This study shows how the multiple LPFs from the building interior can be evaluated in a combinatory process in which a total LPF is calculated, thus addressing the assumed multiplication and allowing for the designation and assessment of a respirable source term (ST) for later consequence analysis, in which the propagation of material released into the atmosphere can be modeled, the dose received by a receptor placed downwind can be estimated, and the distance adjusted to maintain such exposures as low as reasonably achievable (ALARA). This study also briefly addresses particle characteristics that affect atmospheric particle dispersion and compares this dispersion with the LPF methodology.
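The combinatory evaluation the abstract contrasts with the flat 0.5 × 0.5 assumption can be sketched simply: material escaping through stages in series retains a fraction at each stage, so per-stage LPFs multiply. The stage values below are hypothetical, not results from the MELCOR study:

```python
# Illustrative combination of leak path factors (LPFs). Stage values are
# hypothetical; the real study evaluates them with MELCOR simulations.
def total_lpf(stage_lpfs):
    """Multiply per-stage release fractions for stages in series."""
    total = 1.0
    for lpf in stage_lpfs:
        total *= lpf
    return total

# The assumed standard multiplication described in the abstract:
assumed = total_lpf([0.5, 0.5])            # 0.25

# A facility-specific chain (room -> corridor -> filtered exhaust):
evaluated = total_lpf([0.4, 0.3, 0.01])    # ~0.0012
```

The point of the methodology is that a facility-specific chain of evaluated factors can differ from the assumed 0.25 by orders of magnitude, which directly changes the respirable source term used in consequence analysis.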
ERIC Educational Resources Information Center
Wright, Vince
2014-01-01
Pirie and Kieren (1989 "For the Learning of Mathematics", 9(3), 7-11; 1992 "Journal of Mathematical Behavior", 11, 243-257; 1994a "Educational Studies in Mathematics", 26, 61-86; 1994b "For the Learning of Mathematics", 14(1), 39-43) created a model (P-K) that describes a dynamic and recursive process by which…
NASA Astrophysics Data System (ADS)
Wang, Jun-Song; Wang, Mei-Li; Li, Xiao-Li; Ernst, Niebur
2015-03-01
Epilepsy is believed to be caused by a lack of balance between excitation and inhibition in the brain. A promising strategy for the control of the disease is closed-loop brain stimulation. How to determine the stimulation control parameters for effective and safe treatment protocols remains, however, an unsolved question. To constrain the complex dynamics of the biological brain, we use a neural population model (NPM). We propose that a proportional-derivative (PD) type closed-loop control can successfully suppress epileptiform activities. First, we analyze stability using root loci, which reveals that the dynamical mechanism underlying epilepsy in the NPM is the loss of homeostatic control caused by the lack of balance between excitation and inhibition. Then, we design a PD-type closed-loop controller to stabilize the unstable NPM such that the homeostatic equilibriums are maintained; we show that epileptiform activities are successfully suppressed. A graphical approach is employed to determine the stabilizing region of the PD controller in the parameter space, providing a theoretical guideline for the selection of the PD control parameters. Furthermore, we establish the relationship between the control parameters and the model parameters in the form of stabilizing regions to help understand the mechanism of suppressing epileptiform activities in the NPM. Simulations show that the PD-type closed-loop control strategy can effectively suppress epileptiform activities in the NPM. Project supported by the National Natural Science Foundation of China (Grant Nos. 61473208, 61025019, and 91132722), ONR MURI N000141010278, and NIH grant R01EY016281.
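The idea of mapping a stabilizing region in PD parameter space can be sketched with a brute-force scan over a toy plant. This is not the paper's neural population model or its graphical method; it is a hypothetical second-order unstable system x'' = a*x + u under PD feedback u = -kp*x - kd*x', for which the true region is known analytically (kp > a and kd > 0):

```python
import numpy as np

# Brute-force sketch of a stabilizing-region scan in (kp, kd) space.
# Toy unstable plant: x'' = a*x + u with PD feedback u = -kp*x - kd*x'.
def is_stabilizing(kp, kd, a=1.0):
    # Closed loop: x'' + kd*x' + (kp - a)*x = 0; stable iff all
    # characteristic roots have negative real part.
    roots = np.roots([1.0, kd, kp - a])
    return bool(np.all(roots.real < 0))

def stabilizing_region(kp_vals, kd_vals, a=1.0):
    """Return the set of (kp, kd) grid points that stabilize the plant."""
    return {(kp, kd) for kp in kp_vals for kd in kd_vals
            if is_stabilizing(kp, kd, a)}

region = stabilizing_region([0.5, 1.5, 3.0], [-1.0, 0.5, 2.0])
# Analytically, the region for this plant is kp > a and kd > 0.
```

A grid scan like this only approximates the region's boundary; the graphical approach in the paper characterizes the boundary itself, which is why it can serve as a theoretical guideline rather than a numerical survey.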
The influence of mapped hazards on risk beliefs: A proximity-based modeling approach
Severtson, Dolores J.; Burt, James E.
2013-01-01
Interview findings suggest perceived proximity to mapped hazards influences risk beliefs when people view environmental hazard maps. For dot maps, four attributes of mapped hazards influenced beliefs: hazard value, proximity, prevalence, and dot patterns. In order to quantify the collective influence of these attributes for viewers' perceived or actual map locations, we present a model to estimate proximity-based hazard or risk (PBH) and share study results that indicate how modeled PBH and map attributes influenced risk beliefs. The randomized survey study among 447 university students assessed risk beliefs for 24 dot maps that systematically varied by the four attributes. Maps depicted water test results for a fictitious hazardous substance in private residential wells and included a designated “you live here” location. Of the nine variables that assessed risk beliefs, the numerical susceptibility variable was most consistently and strongly related to map attributes and PBH. Hazard value, location in or out of a clustered dot pattern, and distance had the largest effects on susceptibility. Sometimes, hazard value interacted with other attributes, e.g. distance had stronger effects on susceptibility for larger than smaller hazard values. For all combined maps, PBH explained about the same amount of variance in susceptibility as did attributes. Modeled PBH may have utility for studying the influence of proximity to mapped hazards on risk beliefs, protective behavior, and other dependent variables. Further work is needed to examine these influences for more realistic maps and representative study samples. PMID:22053748
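A proximity-based hazard score of the kind the abstract describes might be sketched as a distance-weighted aggregate over mapped dots. The inverse-distance weighting below is an assumption for illustration; the abstract does not state the model's actual functional form:

```python
import math

# Hypothetical sketch of a proximity-based hazard (PBH) score: each mapped
# dot contributes its hazard value, weighted by inverse distance to the
# viewer's location. The weighting form is an assumption, not the
# published model's formula.
def pbh_score(viewer, dots, eps=1.0):
    """viewer: (x, y); dots: iterable of (x, y, hazard_value)."""
    score = 0.0
    for x, y, value in dots:
        d = math.hypot(x - viewer[0], y - viewer[1])
        score += value / (d + eps)   # eps avoids division by zero at d=0
    return score

# A nearby dot raises the score more than a distant one of equal value,
# mirroring the distance effect on susceptibility reported in the study.
near = pbh_score((0, 0), [(1, 0, 10.0)])
far = pbh_score((0, 0), [(9, 0, 10.0)])
```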
Evaluating the hazard from Siding Spring dust: Models and predictions
NASA Astrophysics Data System (ADS)
Christou, A.
2014-12-01
Long-period comet C/2013 A1 (Siding Spring) will pass at a distance of ~140 thousand km (9e-4 AU) - about a third of a lunar distance - from the centre of Mars, closer to this planet than any known comet has come to the Earth since records began. Closest approach is expected to occur at 18:30 UT on the 19th October. This provides an opportunity for a ``free'' flyby of a different type of comet than those investigated by spacecraft so far, including comet 67P/Churyumov-Gerasimenko currently under scrutiny by the Rosetta spacecraft. At the same time, the passage of the comet through Martian space will create the opportunity to study the reaction of the planet's upper atmosphere to a known natural perturbation. The flip-side of the coin is the risk to Mars-orbiting assets, both existing (NASA's Mars Odyssey & Mars Reconnaissance Orbiter and ESA's Mars Express) and in transit (NASA's MAVEN and ISRO's Mangalyaan) by high-speed cometary dust potentially impacting spacecraft surfaces. Much work has already gone into assessing this hazard and devising mitigating measures in the precious little warning time given to characterise this object until Mars encounter. In this presentation, we will provide an overview of how the meteoroid stream and comet coma dust impact models evolved since the comet's discovery and discuss lessons learned should similar circumstances arise in the future.
Modelling Inland Flood Events for Hazard Maps in Taiwan
NASA Astrophysics Data System (ADS)
Ghosh, S.; Nzerem, K.; Sassi, M.; Hilberts, A.; Assteerawatt, A.; Tillmanns, S.; Mathur, P.; Mitas, C.; Rafique, F.
2015-12-01
Taiwan experiences significant inland flooding, driven by torrential rainfall from plum rain storms and typhoons during summer and fall. Data from the last 13 to 16 years show that about 3,000 buildings were damaged by such floods annually, with losses of US$0.41 billion (Water Resources Agency). This long, narrow island nation with mostly hilly/mountainous topography is located in the tropical-subtropical zone, with an annual average typhoon-hit frequency of 3-4 (Central Weather Bureau) and annual average precipitation of 2502 mm (WRA), 2.5 times the world average. Spatial and temporal distributions of countrywide precipitation are uneven, with very high local extreme rainfall intensities. Annual average precipitation is 3000-5000 mm in the mountainous regions, 78% of it falls in May-October, and the 1-hour to 3-day maximum rainfalls are about 85 to 93% of the world records (WRA). Rivers in Taiwan are short, with small upstream areas and high watershed runoff coefficients. These rivers have among the steepest slopes, the shortest response times with rapid flows, and the largest peak flows and specific flood peak discharges (WRA) in the world. RMS has recently developed a countrywide inland flood model for Taiwan, producing hazard return period maps at 1 arcsec grid resolution. These can be the basis for evaluating and managing flood risk, its economic impacts, and insured flood losses. The model is initiated with sub-daily historical meteorological forcings and calibrated to daily discharge observations at about 50 river gauges over the period 2003-2013. Simulations of hydrologic processes, via rainfall-runoff and routing models, are subsequently performed based on a 10,000-year set of stochastic forcing. The rainfall-runoff model is a physically based, continuous, semi-distributed model for catchment hydrology. The 1-D wave propagation hydraulic model considers catchment runoff in routing and describes large-scale transport processes along the river. It also accounts for reservoir storage
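The rainfall-runoff component of such models is typically built from conceptual storage elements. As a generic textbook building block, not RMS's proprietary Taiwan model, a single linear reservoir converts a rainfall series into a discharge hydrograph:

```python
# Minimal linear-reservoir rainfall-runoff sketch (a standard conceptual
# element, not the proprietary model described in the abstract).
# Storage S drains as Q = S / k; rainfall P fills it.
def linear_reservoir(rainfall, k=5.0, dt=1.0, s0=0.0):
    """Return a simulated discharge series for a rainfall series."""
    s, discharge = s0, []
    for p in rainfall:
        q = s / k                 # outflow proportional to storage
        s += (p - q) * dt         # water balance update
        discharge.append(q)
    return discharge

hydrograph = linear_reservoir([10.0, 0.0, 0.0, 0.0])
# Discharge rises after the rain pulse, then recedes exponentially.
```

Semi-distributed models chain many such elements per sub-catchment and pass their outflows to a routing model, as the abstract's 1-D wave propagation component does.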
Hidden Markov models for estimating animal mortality from anthropogenic hazards
Carcass searches are a common method for studying the risk of anthropogenic hazards to wildlife, including non-target poisoning and collisions with anthropogenic structures. Typically, numbers of carcasses found must be corrected for scavenging rates and imperfect detection. ...
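The correction the abstract mentions can be illustrated with the simplest estimator: scale raw counts up by the probability that a carcass both persists until a search and is then detected. The hidden-Markov approach of the paper is considerably more elaborate; this is only the baseline idea:

```python
# Simple detection/scavenging correction for carcass counts (a baseline
# estimator; the paper's hidden Markov model handles these processes
# jointly over time).
def estimated_mortality(carcasses_found, p_persist, p_detect):
    """Scale raw counts by the probability a carcass survives scavenging
    until a search AND is then detected by the searcher."""
    if not 0 < p_persist <= 1 or not 0 < p_detect <= 1:
        raise ValueError("probabilities must be in (0, 1]")
    return carcasses_found / (p_persist * p_detect)

# 12 carcasses found, 60% persist to the next search, 80% detected:
estimate = estimated_mortality(12, 0.6, 0.8)  # ~25 actual deaths
```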
Mayer, Adam; Foster, Michelle
2015-01-01
Introduction Self-rated health is demonstrated to vary substantially by both personal socio-economic status and national economic conditions. However, studies investigating the combined influence of individual and country level economic indicators across several countries in the context of the recent global recession are limited. This paper furthers our knowledge of the effect of recession on health at both the individual and national level. Methods Using the Life in Transition II study, which provides data from 19,759 individuals across 26 European nations, we examine the relationship between self-rated health, personal economic experiences, and macro-economic change. Data analyses include, but are not limited to, the partial proportional odds model, which permits the effect of predictors to vary across different levels of our dependent variable. Results Household experiences with recession, especially a loss of staple good consumption, are associated with lower self-rated health. Most individual-level experiences with recession, such as a job loss, have relatively small negative effects on perceived health; the effect of individual or household economic hardship is strongest in high-income nations. Our findings also suggest that macroeconomic growth improves self-rated health in low-income nations but has no effect in high-income nations. Individuals with the greatest probability of “good” self-rated health reside in wealthy countries ($23,910 to $50,870 GNI per capita). Conclusion Both individual and national economic variables are predictive of self-rated health. Personal and household experiences are most consequential for self-rated health in high-income nations, while macroeconomic growth is most consequential in low-income nations. PMID:26513660
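What distinguishes the partial proportional odds model from the plain proportional odds model is that selected predictors get cutpoint-specific coefficients. A hand-rolled sketch with three ordered outcomes makes this concrete; every coefficient value below is made up for illustration, not taken from the study:

```python
import math

# Sketch of a partial proportional odds model with three ordered outcomes
# ("poor" < "fair" < "good"). The coefficient on `recession_hit` differs
# across the two cutpoints, which is exactly what "partial" relaxes
# relative to a plain proportional odds model. Coefficients are invented.
def category_probs(income, recession_hit):
    cuts = [-1.0, 1.0]             # cutpoints for the cumulative logits
    beta_income = -0.5             # effect constrained equal across cuts
    beta_recession = [1.2, 0.4]    # effect allowed to vary by cutpoint
    cum = [1 / (1 + math.exp(-(c + beta_income * income
                               + beta_recession[j] * recession_hit)))
           for j, c in enumerate(cuts)]           # P(Y<=poor), P(Y<=fair)
    return [cum[0], cum[1] - cum[0], 1 - cum[1]]  # per-category probs

probs = category_probs(income=2.0, recession_hit=1)
# The three category probabilities are non-negative and sum to 1.
```

In a fitted model the cutpoint-varying coefficients are estimated from data, and the "partial" structure is usually tested predictor by predictor.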
Conceptual geoinformation model of natural hazards risk assessment
NASA Astrophysics Data System (ADS)
Kulygin, Valerii
2016-04-01
Natural hazards are the major threat to safe interactions between nature and society. The assessment of natural hazard impacts and their consequences is important in spatial planning and resource management. Today there is a challenge to advance our understanding of how socio-economic and climate changes will affect the frequency and magnitude of hydro-meteorological hazards and associated risks. However, the impacts from different types of natural hazards on various marine and coastal economic activities are not of the same type. In this study, a conceptual geomodel of risk assessment is presented to highlight the differentiation by the type of economic activities in extreme events risk assessment. The marine and coastal ecosystems are considered as the objects of management, on the one hand, and as the place of natural hazards' origin, on the other hand. One of the key elements in describing such systems is the spatial characterization of their components. Assessment of ecosystem state is based on ecosystem indicators (indexes). They are used to identify the changes in time. The scenario approach is utilized to account for the spatio-temporal dynamics and uncertainty factors. Two types of scenarios are considered: scenarios of ecosystem service use by economic activities and scenarios of extreme events and related hazards. The reported study was funded by RFBR, according to the research project No. 16-35-60043 mol_a_dk.
Standards and Guidelines for Numerical Models for Tsunami Hazard Mitigation
NASA Astrophysics Data System (ADS)
Titov, V.; Gonzalez, F.; Kanoglu, U.; Yalciner, A.; Synolakis, C. E.
2006-12-01
An increased number of nations around the world need to develop tsunami mitigation plans, which invariably involve inundation maps for warning guidance and evacuation planning. There is the risk that inundation maps may be produced with older or untested methodology, as there are currently no standards for modeling tools. In the aftermath of the 2004 megatsunami, some models were used to model inundation for Cascadia events with results much larger than sediment records and existing state-of-the-art studies suggest, leading to confusion among emergency managers. Incorrectly assessing tsunami impact is hazardous, as recent events in 2006 in Tonga, Kythira, Greece and Central Java have suggested (Synolakis and Bernard, 2006). To calculate tsunami currents, forces and runup on coastal structures, and inundation of coastlines, one must calculate the evolution of the tsunami wave from the deep ocean to its target site numerically. No matter what the numerical model, validation (the process of ensuring that the model solves the parent equations of motion accurately) and verification (the process of ensuring that the model used represents geophysical reality appropriately) are both essential. Validation ensures that the model performs well in a wide range of circumstances and is accomplished through comparison with analytical solutions. Verification ensures that the computational code performs well over a range of geophysical problems. A few analytic solutions have been validated themselves with laboratory data. Even fewer existing numerical models have been both validated with the analytical solutions and verified with both laboratory measurements and field measurements, thus establishing a gold standard for numerical codes for inundation mapping. While there is in principle no absolute certainty that a numerical code that has performed well in all the benchmark tests will also produce correct inundation predictions with any given source motions, validated codes
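Comparison against an analytical solution, the core of validation as defined here, can be sketched on a toy problem. This is a generic 1-D advection benchmark, not a tsunami code or one of the benchmark tests the abstract refers to:

```python
import math

# Validation sketch: run a numerical scheme, compare against the exact
# analytical solution, and check that the error shrinks under grid
# refinement. Toy problem: 1-D advection u_t + c*u_x = 0 with a periodic
# sine profile, solved with a first-order upwind scheme.
def upwind_error(nx, c=1.0, t_end=0.5):
    dx = 1.0 / nx
    dt = 0.4 * dx / c                      # fixed CFL number of 0.4
    steps = int(round(t_end / dt))
    u = [math.sin(2 * math.pi * i * dx) for i in range(nx)]
    for _ in range(steps):
        # periodic boundary via Python's negative indexing (u[-1])
        u = [u[i] - c * dt / dx * (u[i] - u[i - 1]) for i in range(nx)]
    t = steps * dt
    exact = [math.sin(2 * math.pi * (i * dx - c * t)) for i in range(nx)]
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, exact)) / nx)

# Refining the grid should shrink the error -- the essence of a
# convergence test against an analytical solution.
coarse, fine = upwind_error(50), upwind_error(200)
```

A standards document of the kind proposed would specify such benchmarks (analytical, laboratory, and field) and the error tolerances a code must meet on each.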
Modelling the costs of natural hazards in games
NASA Astrophysics Data System (ADS)
Bostenaru-Dan, M.
2012-04-01
Games simulating the city are looked for today, including a development at the University of Torino called SimTorino, which simulates the development of the city in the next 20 years. The connection to another games genre besides video games, the board games, will be investigated, since there are games on construction and reconstruction of a cathedral and its tower and a bridge in an urban environment of the middle ages based on the two novels of Ken Follett, "Pillars of the Earth" and "World Without End", and also more recent games, such as "Urban Sprawl" or the Romanian game "Habitat", dealing with the man-made hazard of demolition. A review of these games will be provided based on first-hand playing experience. In games like "World Without End" or "Pillars of the Earth", just like in the recently popular games of Zynga on social networks, construction management is done through "building" an item out of stylised materials, such as "stone", "sand" or more specific ones such as "nail". Such an approach could also be used for retrofitting buildings for earthquakes, in the sense of "upgrade", not just for extension as is currently the case in games, and this is what our research is about. "World Without End" includes a natural disaster not much analysed today but which was judged by the author as the worst in the history of mankind: the Black Death. The Black Death has effects and costs as well, modelled not only through action cards but also on the built environment, by buildings remaining empty. On the other hand, games such as "Habitat" rely on role playing, which has recently been recognised as a way to bring games theory to decision making through the so-called contribution of drama, a way to solve conflicts through balancing instead of weighting, and thus related to the Analytic Hierarchy Process. The presentation also aims to give hints on how to design a game for the problem of earthquake retrofit, translating the aims of the actors in such a process into role playing.
Games are also employed in teaching of urban
NASA Astrophysics Data System (ADS)
Tjoe, Hartono; de la Torre, Jimmy
2014-06-01
In this paper, we discuss the process of identifying and validating students' abilities to think proportionally. More specifically, we describe the methodology we used to identify these proportional reasoning attributes, beginning with the selection and review of relevant literature on proportional reasoning. We then continue with the deliberation and resolution of differing views by mathematics researchers, mathematics educators, and middle school mathematics teachers of what should be learned theoretically and what can be taught practically in everyday classroom settings. We also present the initial development of proportional reasoning items as part of the two-phase validation process of the previously identified attributes. In particular, we detail in the first phase of the validation process our collaboration with middle school mathematics teachers in the creation of prototype items and the verification of each item-attribute specification in consideration of the most common ways (among many different ways) in which middle school students would have solved these prototype items themselves. In the second phase of the validation process, we elaborate our think-aloud interview procedure in the search for evidence of whether students generally solved the prototype items in the way they were expected to.
ERIC Educational Resources Information Center
2003
This study analyzed the economic benefits of an increase in the proportion of Australian students achieving a 12th-grade equivalent education. Earlier research examined the direct costs and benefits of a program that increased 12th grade equivalent education for the five-year cohort 2003-2007. This study built on that by incorporating the indirect…
Expert elicitation for a national-level volcano hazard model
NASA Astrophysics Data System (ADS)
Bebbington, Mark; Stirling, Mark; Cronin, Shane; Wang, Ting; Jolly, Gill
2016-04-01
The quantification of volcanic hazard at national level is a vital pre-requisite to placing volcanic risk on a platform that permits meaningful comparison with other hazards such as earthquakes. New Zealand has up to a dozen dangerous volcanoes, with the usual mixed degrees of knowledge concerning their temporal and spatial eruptive history. Information on the 'size' of the eruptions, be it in terms of VEI, volume or duration, is sketchy at best. These limitations and the need for a uniform approach lend themselves to a subjective hazard analysis via expert elicitation. Approximately 20 New Zealand volcanologists provided estimates for the size of the next eruption from each volcano and, conditional on this, its location, timing and duration. Opinions were likewise elicited from a control group of statisticians, seismologists and (geo)chemists, all of whom had at least heard the term 'volcano'. The opinions were combined via the Cooke classical method. We will report on the preliminary results from the exercise.
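Combining elicited opinions can be sketched as performance-weighted pooling. Cooke's classical method derives each expert's weight from calibration and information scores on seed questions; the sketch below takes the scores as given, so it illustrates only the pooling step, and all numbers are invented:

```python
# Simplified sketch of performance-weighted pooling in the spirit of
# Cooke's classical method. Real Cooke weights come from calibration and
# information scores on seed questions; here the scores are simply given.
def pooled_estimate(expert_estimates, performance_scores):
    """Weight each expert's estimate by a normalized performance score."""
    total = sum(performance_scores)
    weights = [s / total for s in performance_scores]
    return sum(w * e for w, e in zip(weights, expert_estimates))

# Three experts estimate an annual eruption probability; the
# best-calibrated expert (score 0.6) dominates the pooled value.
pooled = pooled_estimate([0.02, 0.05, 0.10], [0.6, 0.3, 0.1])
```

In the full method, experts whose calibration score falls below a cutoff receive zero weight, which is what makes eliciting from a control group (as done here) a meaningful check.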
Strip Diagrams: Illuminating Proportions
ERIC Educational Resources Information Center
Cohen, Jessica S.
2013-01-01
Proportional reasoning is both complex and layered, making it challenging to define. Lamon (1999) identified characteristics of proportional thinkers, such as being able to understand covariance of quantities; distinguish between proportional and nonproportional relationships; use a variety of strategies flexibly, most of which are nonalgorithmic,…
NASA Astrophysics Data System (ADS)
Costa, Antonio
2016-04-01
Volcanic hazards may have destructive effects on economy, transport, and natural environments at both local and regional scale. Hazardous phenomena include pyroclastic density currents, tephra fall, gas emissions, lava flows, debris flows and avalanches, and lahars. Volcanic hazards assessment is based on available information to characterize potential volcanic sources in the region of interest and to determine whether specific volcanic phenomena might reach a given site. Volcanic hazards assessment is focussed on estimating the distances that volcanic phenomena could travel from potential sources and their intensity at the considered site. Epistemic and aleatory uncertainties strongly affect the resulting hazards assessment. Within the context of critical infrastructures, volcanic eruptions are rare natural events that can create severe hazards. In addition to being rare events, evidence of many past volcanic eruptions is poorly preserved in the geologic record. The models used for describing the impact of volcanic phenomena generally represent a range of model complexities, from simplified physics based conceptual models to highly coupled thermo fluid dynamical approaches. Modelling approaches represent a hierarchy of complexity, which reflects increasing requirements for well characterized data in order to produce a broader range of output information. In selecting models for the hazard analysis related to a specific phenomenon, questions that need to be answered by the models must be carefully considered. Independently of the model, the final hazards assessment strongly depends on input derived from detailed volcanological investigations, such as mapping and stratigraphic correlations. For each phenomenon, an overview of currently available approaches for the evaluation of future hazards will be presented with the aim to provide a foundation for future work in developing an international consensus on volcanic hazards assessment methods.
Computer models used to support cleanup decision-making at hazardous and radioactive waste sites
Moskowitz, P.D.; Pardi, R.; DePhillips, M.P.; Meinhold, A.F.
1992-07-01
Massive efforts are underway to clean up hazardous and radioactive waste sites located throughout the US. To help determine cleanup priorities, computer models are being used to characterize the source, transport, fate and effects of hazardous chemicals and radioactive materials found at these sites. Although the US Environmental Protection Agency (EPA), the US Department of Energy (DOE), and the US Nuclear Regulatory Commission (NRC) have provided preliminary guidance to promote the use of computer models for remediation purposes, no agency has produced directed guidance on models that must be used in these efforts. To identify which models are actually being used to support decision-making at hazardous and radioactive waste sites, a project jointly funded by EPA, DOE and NRC was initiated. The purpose of this project was to: (1) identify models being used for hazardous and radioactive waste site assessment purposes; and (2) describe and classify these models. This report presents the results of this study.
Debris flow hazard modelling on medium scale: Valtellina di Tirano, Italy
NASA Astrophysics Data System (ADS)
Blahut, J.; Horton, P.; Sterlacchini, S.; Jaboyedoff, M.
2010-11-01
Debris flow hazard modelling at medium (regional) scale has been the subject of various studies in recent years. In this study, hazard zonation was carried out, incorporating information about debris flow initiation probability (spatial and temporal) and the delimitation of the potential runout areas. Debris flow hazard zonation was carried out in the area of the Consortium of Mountain Municipalities of Valtellina di Tirano (Central Alps, Italy). The complexity of the phenomenon, the scale of the study, the variability of local conditioning factors, and a lack of data limited the use of process-based models for the runout zone delimitation. Firstly, a map of hazard initiation probabilities was prepared for the study area, based on the available susceptibility zoning information and the analysis of two sets of aerial photographs for the temporal probability estimation. Afterwards, the hazard initiation map was used as one of the inputs for an empirical GIS-based model (Flow-R), developed at the University of Lausanne (Switzerland). An estimation of the debris flow magnitude was neglected, as the main aim of the analysis was to prepare a debris flow hazard map at medium scale. A digital elevation model with a 10 m resolution was used together with land use, geology and debris flow hazard initiation maps as inputs of the Flow-R model to restrict potential areas within each hazard initiation probability class to locations where debris flows are most likely to initiate. Afterwards, runout areas were calculated using multiple flow direction and energy-based algorithms. Maximum probable runout zones were calibrated using documented past events and aerial photographs. Finally, two debris flow hazard maps were prepared. The first simply delimits five hazard zones, while the second incorporates the information about debris flow spreading direction probabilities, showing areas more likely to be affected by future debris flows. Limitations of the modelling arise mainly from
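A multiple-flow-direction step of the kind used for runout spreading can be sketched as splitting a cell's flow among its strictly lower neighbours in proportion to the elevation drop. This is a simplified illustration in the spirit of such algorithms, not Flow-R's actual implementation (which also applies slope exponents and energy-based runout limits):

```python
# Simplified multiple-flow-direction spreading step (illustrative only,
# not the Flow-R code): flow leaving a cell is split among strictly
# lower neighbours in proportion to the elevation drop toward each.
def spread_flow(center_elev, neighbour_elevs, flow=1.0):
    """Return the flow fraction routed to each neighbour."""
    drops = [max(center_elev - e, 0.0) for e in neighbour_elevs]
    total = sum(drops)
    if total == 0.0:                 # a pit: nothing is routed onward
        return [0.0] * len(neighbour_elevs)
    return [flow * d / total for d in drops]

# Center cell at 100 m; two lower neighbours (95 m, 90 m) share the flow
# in ratio 1:2, while the higher neighbour (105 m) receives none.
fractions = spread_flow(100.0, [95.0, 90.0, 105.0])
```

Iterating such a step over a DEM from each initiation cell, and stopping where an energy criterion is exceeded, yields the probable runout areas described in the abstract.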
A time-dependent probabilistic seismic-hazard model for California
Cramer, C.H.; Petersen, M.D.; Cao, T.; Toppozada, Tousson R.; Reichle, M.
2000-01-01
For the purpose of sensitivity testing and illuminating nonconsensus components of time-dependent models, the California Department of Conservation, Division of Mines and Geology (CDMG) has assembled a time-dependent version of its statewide probabilistic seismic hazard (PSH) model for California. The model incorporates available consensus information from within the earth-science community, except for a few faults or fault segments where consensus information is not available. For these latter faults, published information has been incorporated into the model. As in the 1996 CDMG/U.S. Geological Survey (USGS) model, the time-dependent models incorporate three multisegment ruptures: a 1906, an 1857, and a southern San Andreas earthquake. Sensitivity tests are presented to show the effect on hazard and expected damage estimates of (1) intrinsic (aleatory) sigma, (2) multisegment (cascade) vs. independent segment (no cascade) ruptures, and (3) time-dependence vs. time-independence. Results indicate that (1) differences in hazard and expected damage estimates between time-dependent and independent models increase with decreasing intrinsic sigma, (2) differences in hazard and expected damage estimates between full cascading and not cascading are insensitive to intrinsic sigma, (3) differences in hazard increase with increasing return period (decreasing probability of occurrence), and (4) differences in moment-rate budgets increase with decreasing intrinsic sigma and with the degree of cascading, but are within the expected uncertainty in PSH time-dependent modeling and do not always significantly affect hazard and expected damage estimates.
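The core quantity of a time-dependent (renewal) hazard model is the conditional probability of rupture in the next time window given the elapsed time since the last event, computed from a recurrence distribution with an intrinsic (aleatory) sigma. The sketch below uses a lognormal recurrence model with illustrative numbers, not CDMG's fault parameters:

```python
import math

def lognormal_cdf(t, median, sigma):
    """CDF of a lognormal recurrence model with given median and
    intrinsic (aleatory) sigma in log space."""
    if t <= 0:
        return 0.0
    z = math.log(t / median) / (sigma * math.sqrt(2))
    return 0.5 * (1 + math.erf(z))

# Conditional probability of rupture in the next `window` years given
# `elapsed` years since the last event: P(T <= e+w | T > e).
def conditional_prob(elapsed, window, median, sigma):
    f_e = lognormal_cdf(elapsed, median, sigma)
    f_w = lognormal_cdf(elapsed + window, median, sigma)
    return (f_w - f_e) / (1 - f_e)

# Smaller intrinsic sigma concentrates recurrence times near the median,
# so a fault past its median recurrence looks more hazardous, consistent
# with time-dependent/time-independent differences growing as sigma drops.
p_small_sigma = conditional_prob(160, 30, median=150, sigma=0.3)
p_large_sigma = conditional_prob(160, 30, median=150, sigma=0.7)
```

As sigma grows, the conditional probability approaches the time-independent (Poisson-like) value, which is one way to see the abstract's first sensitivity result.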
Modelling the costs of natural hazards in games
NASA Astrophysics Data System (ADS)
Bostenaru-Dan, M.
2012-04-01
Simulation games of the city are looked for today, including a development at the University of Torino called SimTorino, which simulates the development of the city over the next 20 years. The connection to another game genre besides video games, board games, will be investigated, since there are games on the construction and reconstruction of a cathedral, its tower, and a bridge in an urban environment of the Middle Ages, based on the two Ken Follett novels "Pillars of the Earth" and "World Without End", and also more recent games, such as "Urban Sprawl" or the Romanian game "Habitat", dealing with the man-made hazard of demolition. A review of these games will be provided based on first-hand playing experience. In games like "World Without End" or "Pillars of the Earth", just like in the recently popular Zynga games on social networks, construction management is done by "building" an item out of stylised materials, such as "stone" and "sand", or more specific ones such as "nail". Such an approach could also be used for retrofitting buildings for earthquakes, as an "upgrade" rather than just the extension currently found in games, and this is what our research is about. "World without End" includes a natural disaster not much analysed today but judged by the author to be the worst in human history: the Black Death. The Black Death has effects and costs as well, modelled not only through action cards but also on the built environment, through buildings remaining empty. On the other hand, games such as "Habitat" rely on role playing, which has recently been recognised as a way to bring game theory to decision making through the so-called contribution of drama, a way to solve conflicts through balancing instead of weighting, and thus related to the Analytic Hierarchy Process. The presentation also aims to give hints on how to design a game for the problem of earthquake retrofit, translating the aims of the actors in such
Large animal model for health hazard assessment of environmental pollutants
Chanana, A.D.; Joel, D.D.; Costa, D.L.; Janoff, A.; Susskind, H.; Weiss, R.A.
1984-01-01
The requirements of large animals for the experimental assessment of human health hazards associated with inhaled pollutants are discussed. Results from studies designed to elucidate mechanisms controlling pulmonary function at the organismal, cellular and molecular level are presented. It is shown that studies in large animals permit technically sophisticated approaches not feasible in small animals and not permissible in man. Use of large animals also permits serial, non-invasive determinations of structural and functional changes which may be of temporal importance. 6 references.
A Remote Sensing Based Approach For Modeling and Assessing Glacier Hazards
NASA Astrophysics Data System (ADS)
Huggel, C.; Kääb, A.; Salzmann, N.; Haeberli, W.; Paul, F.
Glacier-related hazards such as ice avalanches and glacier lake outbursts can pose a significant threat to population and installations in high mountain regions. They are well documented in the Swiss Alps, and the high data density is used to build up systematic knowledge of glacier hazard locations and potentials. Experience from long research activities thereby forms an important basis for ongoing hazard monitoring and assessment. However, in the context of environmental changes in general, and the highly dynamic physical environment of glaciers in particular, historical experience may increasingly lose its significance with respect to the impact zones of hazardous processes. On the other hand, in large and remote high mountains such as the Himalayas, exact information on the location and potential of glacier hazards is often missing. Therefore, it is crucial to develop hazard monitoring and assessment concepts including area-wide applications. Remote sensing techniques offer a powerful tool to narrow current information gaps. The present contribution proposes an approach structured in (1) detection, (2) evaluation and (3) modeling of glacier hazards. Remote sensing data are used as the main input to (1). Algorithms taking advantage of multispectral, high-resolution data are applied for detecting glaciers and glacier lakes. Digital terrain modeling, and classification and fusion of panchromatic and multispectral satellite imagery, are performed in (2) to evaluate the hazard potential of possible hazard sources detected in (1). The locations found in (1) and (2) are used as input to (3). The models developed in (3) simulate the processes of lake outbursts and ice avalanches based on hydrological flow modeling and empirical values for average trajectory slopes. A probability-related function allows the model to indicate areas with lower and higher risk of being affected by catastrophic events. Application of the models to recent ice avalanches and lake outbursts show
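The empirical average-trajectory-slope rule mentioned above can be sketched as a one-line "angle of reach" estimate; the function name and the 11-degree default threshold below are illustrative assumptions, not values from this work:

```python
import math

def max_runout_length(drop_height_m, avg_slope_deg=11.0):
    """Empirical 'angle of reach' estimate: the flow front stops where the
    straight line from the source falls below avg_slope_deg.
    The 11-degree default is a placeholder, not a value from the paper."""
    return drop_height_m / math.tan(math.radians(avg_slope_deg))

# e.g. a hypothetical lake-outburst source 500 m above the valley floor
print(round(max_runout_length(500.0)), "m horizontal reach")
```

In a GIS application, this threshold would be evaluated along each flow trajectory from the detected hazard sources.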
Stirling, M.; Petersen, M.
2006-01-01
We compare the historical record of earthquake hazard experienced at 78 towns and cities (sites) distributed across New Zealand and the continental United States with the hazard estimated from the national probabilistic seismic-hazard (PSH) models for the two countries. The two PSH models are constructed with similar methodologies and data. Our comparisons show a tendency for the PSH models to slightly exceed the historical hazard in New Zealand and westernmost continental United States interplate regions, but show lower hazard than that of the historical record in the continental United States intraplate region. Factors such as non-Poissonian behavior, parameterization of active fault data in the PSH calculations, and uncertainties in estimation of ground-motion levels from historical felt intensity data for the interplate regions may have led to the higher-than-historical levels of hazard at the interplate sites. In contrast, the less-than-historical hazard for the remaining continental United States (intraplate) sites may be largely due to site conditions not having been considered at the intraplate sites, and uncertainties in correlating ground-motion levels to historical felt intensities. The study also highlights the importance of evaluating PSH models at more than one region, because the conclusions reached on the basis of a solely interplate or intraplate study would be very different.
NASA Astrophysics Data System (ADS)
Dube, F.; Nhapi, I.; Murwira, A.; Gumindoga, W.; Goldin, J.; Mashauri, D. A.
Gully erosion is an environmental concern, particularly in areas where landcover has been modified by human activities. This study assessed the extent to which the potential for gully erosion could be successfully modelled as a function of seven environmental factors (landcover, soil type, distance from river, distance from road, Sediment Transport Index (STI), Stream Power Index (SPI) and Wetness Index (WI)) using GIS-based Weight of Evidence Modelling (WEM) in the Mbire District of Zimbabwe. Results show that, of the seven factors studied, five were significantly correlated (p < 0.05) with gully occurrence, namely: landcover, soil type, distance from river, STI and SPI. Two factors, WI and distance from road, were not significantly correlated with gully occurrence (p > 0.05). A gully erosion hazard map showed that 78% of the very high hazard class area is within a distance of 250 m from rivers. Model validation indicated that 70% of the validation set of gullies fell in the high hazard and very high hazard classes. The resulting map of areas susceptible to gully erosion has a prediction accuracy of 67.8%. The predictive capability of the weight of evidence model in this study suggests that landcover, soil type, distance from river, STI and SPI are useful in creating a gully erosion hazard map but may not be sufficient to produce a valid map of gully erosion hazard.
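As a rough sketch of how Weight of Evidence Modelling scores one binary evidence factor, the following computes the standard positive and negative weights from a cross-tabulation of gully and non-gully cells; the counts and function name are hypothetical, not data from this study:

```python
import math

def weights_of_evidence(n_factor_gully, n_factor_nogully, n_gully, n_nogully):
    """Positive/negative weights for one binary factor (e.g. 'within 250 m
    of a river'). Inputs are hypothetical cell counts: cells with the factor
    present, cross-tabulated against gully presence/absence."""
    p_b_d = n_factor_gully / n_gully        # P(factor | gully)
    p_b_nd = n_factor_nogully / n_nogully   # P(factor | no gully)
    w_plus = math.log(p_b_d / p_b_nd)       # weight where factor is present
    w_minus = math.log((1 - p_b_d) / (1 - p_b_nd))  # where factor is absent
    return w_plus, w_minus

# hypothetical counts: 78 of 100 gully cells lie near a river,
# versus 30 of 100 non-gully cells
wp, wm = weights_of_evidence(78, 30, 100, 100)
print(round(wp, 2), round(wm, 2))  # positive wp favours gully hazard
```

Summing such weights over all significant factors, cell by cell, yields the posterior log-odds surface from which the hazard classes are derived.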
Coincidence Proportional Counter
Manley, J H
1950-11-21
A coincidence proportional counter having a plurality of collecting electrodes so disposed as to measure the range or energy spectrum of an ionizing particle-emitting source such as an alpha source, is disclosed.
Snakes as hazards: modelling risk by chasing chimpanzees.
McGrew, William C
2015-04-01
Snakes are presumed to be hazards to primates, including humans, by the snake detection hypothesis (Isbell in J Hum Evol 51:1-35, 2006; Isbell, The fruit, the tree, and the serpent. Why we see so well, 2009). Quantitative, systematic data to test this idea are lacking for the behavioural ecology of living great apes and human foragers. An alternative proxy is snakes encountered by primatologists seeking, tracking, and observing wild chimpanzees. We present 4 years of such data from Mt. Assirik, Senegal. We encountered 14 species of snakes a total of 142 times. Almost two-thirds of encounters were with venomous snakes. Encounters occurred most often in forest and least often in grassland, and more often in the dry season. The hypothesis seems to be supported, if frequency of encounter reflects selective risk of morbidity or mortality. PMID:25600837
NASA Astrophysics Data System (ADS)
Liu, Y.; Guo, H. C.; Zou, R.; Wang, L. J.
2006-04-01
This paper presents a neural network (NN) based model to assess the regional hazard degree of debris flows in Lake Qionghai Watershed, China. The NN model was used as an alternative to the more conventional linear model MFCAM (multi-factor composite assessment model) in order to effectively handle the nonlinearity and uncertainty inherent in debris flow hazard analysis. The NN model was configured with a three-layer structure with eight input nodes and one output node, and the number of nodes in the hidden layer was determined by iteratively varying it until optimal performance was achieved. The eight variables represented by the input nodes are: density of debris flow gullies, degree of weathering of rocks, active fault density, area percentage of slope land greater than 25° of the total land (APL25), frequency of flooding hazards, average covariance of monthly precipitation over 10 years (ACMP10), average days with rainfall >25 mm over 10 years (25D10Y), and percentage of cultivated land with slope greater than 25° of the total cultivated land (PCL25). The output node represents the hazard-degree rank (HDR). The model was trained with 35 sets of data obtained from previous research reported in the literature, and an explicit uncertainty analysis was undertaken to address uncertainty in model training and prediction. Before the NN model was extrapolated to Lake Qionghai Watershed, a validation case, different from the above data, was conducted. In addition, the performances of the NN model and the MFCAM were compared. The NN model predicted that the HDRs of the five sub-watersheds in the Lake Qionghai Watershed were IV, IV, III, III, and IV-V, indicating that the study area covers normal hazard and severe hazard areas. Based on the NN model results, debris flow management and economic development strategies are proposed for each sub-watershed in the study area.
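A minimal structural sketch of the three-layer configuration described above (eight inputs, one hidden layer, a single sigmoid output); the weights below are random placeholders, since the trained parameters are not given in the abstract:

```python
import math
import random

random.seed(0)

N_IN, N_HID = 8, 5  # 8 hazard factors; hidden size was found iteratively in the paper

# random illustrative weights -- a real model would be trained on the
# 35 literature datasets mentioned in the abstract
w1 = [[random.uniform(-1, 1) for _ in range(N_IN)] for _ in range(N_HID)]
w2 = [random.uniform(-1, 1) for _ in range(N_HID)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def hazard_degree(factors):
    """Forward pass: 8 normalised factor values -> scalar in (0, 1),
    which would then be binned into hazard-degree ranks I-V."""
    hidden = [sigmoid(sum(w * f for w, f in zip(row, factors))) for row in w1]
    return sigmoid(sum(w * h for w, h in zip(w2, hidden)))

score = hazard_degree([0.5] * N_IN)
print(0.0 < score < 1.0)  # True: sigmoid output is always in (0, 1)
```

With trained weights, one sub-watershed's eight normalised factor values would map to a hazard score, and score thresholds would define the rank boundaries.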
Teamwork tools and activities within the hazard component of the Global Earthquake Model
NASA Astrophysics Data System (ADS)
Pagani, M.; Weatherill, G.; Monelli, D.; Danciu, L.
2013-05-01
The Global Earthquake Model (GEM) is a public-private partnership aimed at supporting and fostering a global community of scientists and engineers working in the fields of seismic hazard and risk assessment. In the hazard sector, in particular, GEM recognizes the importance of local ownership and leadership in the creation of seismic hazard models. For this reason, over the last few years, GEM has been promoting different activities in the context of seismic hazard analysis, ranging, for example, from regional projects targeted at the creation of updated seismic hazard studies to the development of a new open-source seismic hazard and risk calculation software called the OpenQuake-engine (http://globalquakemodel.org). In this communication we will provide a tour of the various activities completed, such as the new ISC-GEM Global Instrumental Catalogue, and of currently ongoing initiatives, such as the development of a suite of tools for the preparation of PSHA input models. Discussion, comments and criticism by the colleagues in the audience will be highly appreciated.
NASA Astrophysics Data System (ADS)
Loughlin, Susan
2013-04-01
GVM is a growing international collaboration that aims to create a sustainable, accessible information platform on volcanic hazard and risk. GVM is a network that aims to co-ordinate and integrate the efforts of the international volcanology community. Major international initiatives and partners such as the Smithsonian Institution - Global Volcanism Program, State University of New York at Buffalo - VHub, Earth Observatory of Singapore - WOVOdat and many others underpin GVM. Activities currently include: design and development of databases of volcano data, volcanic hazards, vulnerability and exposure with internationally agreed metadata standards; establishment of methodologies for analysis of the data (e.g. hazard and exposure indices) to inform risk assessment; and development of complementary hazard models and relevant hazard- and risk-assessment tools. GVM acts by establishing task forces to deliver explicit deliverables in finite periods of time. GVM has a task force to deliver a global assessment of volcanic risk for the UN ISDR, a task force for indices, and a task force for volcano deformation from satellite observations. GVM is organising a Volcano Best Practices workshop in 2013. A recent product of GVM is a global database of large-magnitude explosive eruptions. There is ongoing work to develop databases on debris avalanches, lava dome hazards and ash hazards. GVM aims to develop the capability to anticipate future volcanism and its consequences.
Uys, Pieter W; van Helden, Paul D; Hargrove, John W
2008-01-01
In a significant number of instances, an episode of tuberculosis can be attributed to a reinfection event. Because reinfection is more likely in high incidence regions than in regions of low incidence, more tuberculosis (TB) cases due to reinfection could be expected in high-incidence regions than in low-incidence regions. Empirical data from regions with various incidence rates appear to confirm the conjecture that, in fact, the incidence rate due to reinfection only, as a proportion of all cases, correlates with the logarithm of the incidence rate, rather than with the incidence rate itself. A theoretical model that supports this conjecture is presented. A Markov model was used to obtain a relationship between incidence and reinfection rates. It was assumed in this model that the rate of reinfection is a multiple, ρ (the reinfection factor), of the rate of first-time infection, λ. The results obtained show a relationship between the proportion of cases due to reinfection and the rate of incidence that is approximately logarithmic for a range of values of the incidence rate typical of those observed in communities across the globe. A value of ρ is determined such that the relationship between the proportion of cases due to reinfection and the logarithm of the incidence rate closely correlates with empirical data. From a purely theoretical investigation, it is shown that a simple relationship can be expected between the logarithm of the incidence rates and the proportions of cases due to reinfection after a prior episode of TB. This relationship is sustained by a rate of reinfection that is higher than the rate of first-time infection and this latter consideration underscores the great importance of monitoring recovered TB cases for repeat disease episodes, especially in regions where TB incidence is high. Awareness of this may assist in attempts to control the epidemic. PMID:18577502
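The qualitative claim above, that a reinfection rate ρλ exceeding the first-infection rate λ makes the reinfection share of cases grow with incidence, can be illustrated with a toy compartment chain; the structure, the population-turnover term, and all rates below are my own illustration, not the authors' Markov model:

```python
def reinfection_proportion(lam, rho, recovery=0.5, turnover=0.02, steps=5000):
    """Toy three-state chain: never-infected U -> diseased D at rate lam,
    recovered R -> D at rate rho*lam, D -> R at 'recovery'; 'turnover'
    replaces individuals in D and R with new susceptibles so that a
    steady state exists. Returns the steady-state share of new cases
    that are reinfections. Illustrative only."""
    u, d, r = 1.0, 0.0, 0.0
    for _ in range(steps):
        new_first = lam * u          # first-time infections this step
        new_re = rho * lam * r       # reinfections this step
        rec = recovery * d           # recoveries this step
        u, d, r = (u - new_first + turnover * (d + r),
                   d + new_first + new_re - rec - turnover * d,
                   r + rec - new_re - turnover * r)
    return new_re / (new_first + new_re)

# with reinfection 3x more likely than first infection (rho = 3),
# a higher force of infection yields a larger reinfection share
print(reinfection_proportion(0.01, 3.0) < reinfection_proportion(0.05, 3.0))  # True
```

The sketch only reproduces the direction of the effect; the paper's approximately logarithmic relationship comes from its specific Markov formulation.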
Estimating piecewise exponential frailty model with changing prior for baseline hazard function
NASA Astrophysics Data System (ADS)
Thamrin, Sri Astuti; Lawi, Armin
2016-02-01
Piecewise exponential models provide a very flexible framework for modelling univariate survival data. They can be used to estimate the effects of different covariates on survival. Although in a strict sense it is a parametric model, a piecewise exponential hazard can approximate any shape of parametric baseline hazard. Under the parametric baseline hazard, the hazard function for each individual may depend on a set of risk factors or explanatory variables. However, such a model usually does not capture all relevant variables that are known or measurable, and these remaining sources of variation must be considered. This unknown and unobservable risk factor in the hazard function is often termed the individual's heterogeneity, or frailty. This paper analyses the effects of unobserved population heterogeneity on patients' survival times. The issue of model choice through variable selection is also considered. A sensitivity analysis is conducted to assess the influence of the prior for each parameter. We used a Markov chain Monte Carlo method to compute the Bayesian estimator on kidney infection data. The results obtained show that sex and frailty are substantially associated with survival in this study, and that the models are relatively sensitive to the choice between the two different priors.
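A minimal sketch of a piecewise-constant baseline hazard and its survival function S(t) = exp(-H(t)), where H(t) is the cumulative hazard; the cut points and rates are illustrative, and a frailty model would further multiply the rates by a subject-specific random effect:

```python
import math

def pw_exp_survival(t, cuts, hazards):
    """Survival S(t) = exp(-H(t)) for a piecewise-constant hazard.
    cuts = interval boundaries [t1, t2, ...]; hazards holds len(cuts)+1
    rates, one per interval. Values below are illustrative, not a fit."""
    cumulative, start = 0.0, 0.0
    for end, h in zip(cuts, hazards):
        if t <= end:
            return math.exp(-(cumulative + h * (t - start)))
        cumulative += h * (end - start)
        start = end
    return math.exp(-(cumulative + hazards[-1] * (t - start)))

# hazard 0.1 on [0,1), 0.3 on [1,2), 0.2 thereafter;
# S(1.5) = exp(-(0.1*1 + 0.3*0.5))
print(round(pw_exp_survival(1.5, [1.0, 2.0], [0.1, 0.3, 0.2]), 4))  # 0.7788
```

Because each interval contributes an exponential factor, the step heights of the rates can be tuned to approximate an arbitrary baseline hazard shape, which is the flexibility the abstract refers to.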
Adaptation through proportion.
Xiong, Liyang; Shi, Wenjia; Tang, Chao
2016-01-01
Adaptation is a ubiquitous feature in biological sensory and signaling networks. It has been suggested that adaptive systems may follow certain simple design principles across diverse organisms, cells and pathways. One class of networks that can achieve adaptation utilizes incoherent feedforward control, in which two parallel signaling branches exert opposite but proportional effects on the output at steady state. In this paper, we generalize this adaptation mechanism by establishing a steady-state proportionality relationship among a subset of nodes in a network. Adaptation can be achieved by using any two nodes in the sub-network to regulate the output node positively and negatively, respectively. We focus on enzyme networks and first identify basic regulation motifs, consisting of two and three nodes, that can be used to build small networks with proportional relationships. Larger proportional networks can then be constructed modularly, like LEGO bricks. Our method provides a general framework to construct and analyze a class of proportional and/or adaptation networks with arbitrary size, flexibility and versatile functional features. PMID:27526863
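The incoherent feedforward mechanism can be sketched with the textbook "sniffer" motif, in which a buffer node proportional to the signal opposes the signal's direct effect on the output; the rate constants below are illustrative assumptions, not values from this paper:

```python
def simulate_sniffer(s_values, dt=0.001, k=(1.0, 1.0, 1.0, 1.0)):
    """Incoherent feedforward 'sniffer' with assumed illustrative rates:
        dx/dt = k1*s - k2*x        (buffer x tracks the signal s)
        dr/dt = k3*s - k4*x*r      (s produces r; x degrades r)
    Steady-state r = (k3*k2)/(k4*k1), independent of s -> perfect adaptation."""
    k1, k2, k3, k4 = k
    x, r = 1.0, 1.0  # steady state for s = 1 with unit rates
    trace = []
    for s in s_values:  # forward-Euler integration
        x += dt * (k1 * s - k2 * x)
        r += dt * (k3 * s - k4 * x * r)
        trace.append(r)
    return trace

# step the signal from 1 to 2: r spikes transiently, then re-adapts to 1
trace = simulate_sniffer([1.0] * 5000 + [2.0] * 20000)
print(max(trace) > 1.05, abs(trace[-1] - 1.0) < 0.01)  # True True
```

At steady state the buffer x is proportional to the signal, which is exactly the proportionality relationship the paper generalizes to larger sub-networks.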
The influence of hazard models on GIS-based regional risk assessments and mitigation policies
Bernknopf, R.L.; Rabinovici, S.J.M.; Wood, N.J.; Dinitz, L.B.
2006-01-01
Geographic information systems (GIS) are important tools for understanding and communicating the spatial distribution of risks associated with natural hazards in regional economies. We present a GIS-based decision support system (DSS) for assessing community vulnerability to natural hazards and evaluating potential mitigation policy outcomes. The Land Use Portfolio Modeler (LUPM) integrates earth science and socioeconomic information to predict the economic impacts of loss-reduction strategies. However, the potential use of such systems in decision making may be limited when multiple but conflicting interpretations of the hazard are available. To explore this problem, we conduct a policy comparison using the LUPM to test the sensitivity of three available assessments of earthquake-induced lateral-spread ground failure susceptibility in a coastal California community. We find that the uncertainty regarding the interpretation of the science inputs can influence the development and implementation of natural hazard management policies. Copyright © 2006 Inderscience Enterprises Ltd.
Applying the Land Use Portfolio Model with Hazus to analyse risk from natural hazard events
Dinitz, Laura B.; Taketa, Richard A.
2013-01-01
This paper describes and demonstrates the integration of two geospatial decision-support systems for natural-hazard risk assessment and management. Hazus is a risk-assessment tool developed by the Federal Emergency Management Agency to identify risks and estimate the severity of risk from natural hazards. The Land Use Portfolio Model (LUPM) is a risk-management tool developed by the U.S. Geological Survey to evaluate plans or actions intended to reduce risk from natural hazards. We analysed three mitigation policies for one earthquake scenario in the San Francisco Bay area to demonstrate the added value of using Hazus and the LUPM together. The demonstration showed that Hazus loss estimates can be input to the LUPM to obtain estimates of losses avoided through mitigation, rates of return on mitigation investment, and measures of uncertainty. Together, they offer a more comprehensive approach to help with decisions for reducing risk from natural hazards.
Structural estimation of a principal-agent model: moral hazard in medical insurance.
Vera-Hernández, Marcos
2003-01-01
Despite the importance of principal-agent models in the development of modern economic theory, there are few estimations of these models. I recover the estimates of a principal-agent model and obtain an approximation to the optimal contract. The results show that out-of-pocket payments follow a concave profile with respect to costs of treatment. I estimate the welfare loss due to moral hazard, taking into account income effects. I also propose a new measure of moral hazard based on the conditional correlation between contractible and noncontractible variables. PMID:15025029
Development and Analysis of a Hurricane Hazard Model for Disaster Risk Assessment in Central America
NASA Astrophysics Data System (ADS)
Pita, G. L.; Gunasekera, R.; Ishizawa, O. A.
2014-12-01
Hurricane and tropical storm activity in Central America has consistently caused thousands of casualties, significant population displacement, and substantial property and infrastructure losses over the past decades. As a component for estimating future potential losses, we present a new regional probabilistic hurricane hazard model for Central America. Currently, there are very few openly available hurricane hazard models for Central America. The resulting hazard model will be used in conjunction with exposure and vulnerability components as part of a World Bank project to create country disaster risk profiles that will help improve risk estimation and provide decision makers with better tools to quantify disaster risk. This paper describes the hazard model methodology, which involves the development of a wind field model that simulates gust speeds at terrain height at a fine resolution. The HURDAT dataset has been used in this study to create synthetic events that assess average hurricane landfall angles and their variability at each location. The hazard model then estimates the average track angle at multiple geographical locations in order to provide a realistic range of possible hurricane paths to be used for risk analyses in all the Central American countries. This probabilistic hurricane hazard model is also useful for relating synthetic wind estimates to loss and damage data in order to develop and calibrate existing empirical building vulnerability curves. To assess accuracy and applicability, modeled results are evaluated against historical events, their tracks and wind fields. Deeper analyses of results are also presented, with special reference to Guatemala. The findings, interpretations, and conclusions expressed in this paper are entirely those of the authors. They do not necessarily represent the views of the International Bank for Reconstruction and Development/World Bank and its affiliated organizations, or those of the
Hofman, Abe D.; Visser, Ingmar; Jansen, Brenda R. J.; van der Maas, Han L. J.
2015-01-01
We propose and test three statistical models for the analysis of children’s responses to the balance scale task, a seminal task to study proportional reasoning. We use a latent class modelling approach to formulate a rule-based latent class model (RB LCM) following from a rule-based perspective on proportional reasoning and a new statistical model, the Weighted Sum Model, following from an information-integration approach. Moreover, a hybrid LCM using item covariates is proposed, combining aspects of both a rule-based and information-integration perspective. These models are applied to two different datasets, a standard paper-and-pencil test dataset (N = 779), and a dataset collected within an online learning environment that included direct feedback, time-pressure, and a reward system (N = 808). For the paper-and-pencil dataset the RB LCM resulted in the best fit, whereas for the online dataset the hybrid LCM provided the best fit. The standard paper-and-pencil dataset yielded more evidence for distinct solution rules than the online data set in which quantitative item characteristics are more prominent in determining responses. These results shed new light on the discussion on sequential rule-based and information-integration perspectives of cognitive development. PMID:26505905
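The information-integration (weighted-sum) account of the balance scale task ultimately approximates the normative torque rule, which reduces to comparing weight-times-distance products; a hypothetical sketch of that rule:

```python
def balance_prediction(w_left, d_left, w_right, d_right):
    """Normative torque rule for the balance scale task: the side with
    the larger weight*distance product tips. An information-integration
    (weighted-sum-style) responder approximates this product rule,
    whereas rule-based responders attend to weight or distance alone."""
    torque_left = w_left * d_left
    torque_right = w_right * d_right
    if torque_left == torque_right:
        return "balance"
    return "left" if torque_left > torque_right else "right"

# 2 weights on peg 3 versus 3 weights on peg 2: equal torques
print(balance_prediction(2, 3, 3, 2))  # balance
```

Items like this conflict-balance case are exactly where simple weight-only or distance-only rules fail, which is what lets the latent class models separate the solution strategies.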
NASA Astrophysics Data System (ADS)
Wilson, R. I.; Eble, M. C.
2013-12-01
The U.S. National Tsunami Hazard Mitigation Program (NTHMP) is composed of representatives from coastal states and federal agencies who, under the guidance of NOAA, work together to develop protocols and products to help communities prepare for and mitigate tsunami hazards. Within the NTHMP are several subcommittees responsible for complementary aspects of tsunami assessment, mitigation, education, warning, and response. The Mapping and Modeling Subcommittee (MMS) is composed of state and federal scientists who specialize in tsunami source characterization, numerical tsunami modeling, inundation map production, and warning forecasting. Until September 2012, much of the work of the MMS was authorized through the Tsunami Warning and Education Act, an Act that has since expired but whose spirit is being adhered to in parallel with reauthorization efforts. Over the past several years, the MMS has developed guidance and best practices for states and territories to produce accurate and consistent tsunami inundation maps for community-level evacuation planning, and has conducted benchmarking of numerical inundation models. Recent tsunami events have highlighted the need for other types of tsunami hazard analyses and products for improving evacuation planning, vertical evacuation, maritime planning, land-use planning, building construction, and warning forecasts. As the program responsible for producing accurate and consistent tsunami products nationally, the NTHMP-MMS is initiating a multi-year plan to accomplish the following: 1) Create and build on existing demonstration projects that explore new tsunami hazard analysis techniques and products, such as maps identifying areas of strong currents and potential damage within harbors, as well as probabilistic tsunami hazard analysis for land-use planning. 2) Develop benchmarks for validating new numerical modeling techniques related to current velocities and landslide sources. 3) Generate guidance and protocols for
Three multimedia models used at hazardous and radioactive waste sites
Moskowitz, P.D.; Pardi, R.; Fthenakis, V.M.; Holtzman, S.; Sun, L.C.; Rambaugh, J.O.; Potter, S.
1996-02-01
Multimedia models are commonly used in the initial phases of the remediation process, where technical interest is focused on determining the relative importance of various exposure pathways. This report provides an approach for evaluating and critically reviewing the capabilities of multimedia models. The study focused on three specific models: MEPAS Version 3.0, MMSOILS Version 2.2, and PRESTO-EPA-CPG Version 2.0. These models evaluate the transport and fate of contaminants from source to receptor through more than a single pathway. The presence of radioactive and mixed wastes at a site poses special problems; hence, this report highlights restrictions associated with the selection and application of multimedia models for sites contaminated with radioactive and mixed wastes. The report begins with a brief introduction to the concept of multimedia modeling, followed by an overview of the three models. The remaining chapters present more technical discussions of the issues associated with each compartment and their direct application to the specific models. In these analyses, the following components are discussed: source term; air transport; ground water transport; overland flow, runoff, and surface water transport; food chain modeling; exposure assessment; dosimetry/risk assessment; uncertainty; and default parameters. The report concludes with a description of evolving updates to the models, as provided by the model developers.
NASA Astrophysics Data System (ADS)
Kim, B.; David, C. H.; Druffel-Rodriguez, R.; Sanders, B. F.; Famiglietti, J. S.
2013-12-01
The City of Sacramento and the broader delta region may be the most flood vulnerable urbanized area in the United States. Management of flood risk here and elsewhere requires an understanding of flooding hazards, which is in turn linked to California hydrology, climate, development and flood control infrastructure. A modeling framework is presented here to make predictions of flooding hazards (e.g., depth and velocity) at the household scale (personalized flood risk information), and to study how these predictions could change under different climate change, land-use change, and infrastructure adaptation scenarios. The framework couples a statewide hydrologic model (RAPID) that predicts runoff and streamflow to a city-scale hydrodynamic model (BreZo) capable of predicting levee-breach flows and overland flows into urbanized lowlands. Application of the framework to the Sacramento area is presented here, with a focus on data needs, computational demands, results and hazard communication strategies, for selected flooding scenarios.
Goode, Colleen J; Preheim, Gayle J; Bonini, Susan; Case, Nancy K; VanderMeer, Jennifer; Iannelli, Gina
2016-01-01
This manuscript describes a collaborative, seamless program between a community college and a university college of nursing designed to increase the number of nurses prepared with a baccalaureate degree. The three-year Integrated Nursing Pathway provides community college students with a non-nursing associate degree, early introduction to nursing, and seamless progression through BSN education. The model includes dual admission and advising and is driven by the need for collaboration with community colleges, the need to increase the percentage of racial-ethnic minority students, the shortage of faculty, and employer preferences for BSN graduates. PMID:27209872
ERIC Educational Resources Information Center
Markworth, Kimberly A.
2012-01-01
Students may be able to set up a relevant proportion and solve through cross multiplication. However, this ability may not reflect the desired mathematical understanding of the covarying relationship that exists between two variables or the equivalent relationship that exists between two ratios. Students who lack this understanding are likely to…
Selecting Proportional Reasoning Tasks
ERIC Educational Resources Information Center
de la Cruz, Jessica A.
2013-01-01
With careful consideration given to task selection, students can construct their own solution strategies to solve complex proportional reasoning tasks while the teacher's instructional goals are still met. Several aspects of the tasks should be considered including their numerical structure, context, difficulty level, and the strategies they are…
ERIC Educational Resources Information Center
Snider, Richard G.
1985-01-01
The ratio factors approach involves recognizing a given fraction, then multiplying so that units cancel. This approach, which is grounded in concrete operational thinking patterns, provides a standard for science ratio and proportion problems. Examples are included for unit conversions, mole problems, molarity, speed/density problems, and…
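The ratio-factors approach described above can be sketched in a few lines: each conversion is a multiplication by a fraction equal to one, chosen so that units cancel. The `FACTORS` table and `convert` helper below are illustrative names, not from the article; the factors themselves are exact unit definitions.

```python
# Ratio-factors (dimensional analysis) sketch: chain conversion factors
# so that units cancel. Each (from_unit, to_unit) factor equals one when
# its units are attached, so multiplying never changes the quantity.
FACTORS = {
    ("km", "m"): 1000.0,
    ("m", "cm"): 100.0,
    ("h", "s"): 3600.0,
}

def convert(value, chain):
    """Multiply the value by each (numerator, denominator) factor in turn."""
    for pair in chain:
        value *= FACTORS[pair]
    return value

# 72 km/h -> m/s: multiply by (1000 m / 1 km), divide by (3600 s / 1 h).
speed_m_per_s = 72.0 * FACTORS[("km", "m")] / FACTORS[("h", "s")]
print(speed_m_per_s)  # 20.0
```

The same chaining pattern covers the mole, molarity, and density problems the abstract mentions: each is one more ratio factor appended to the chain.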
Modeling Exposure to Persistent Chemicals in Hazard and Risk Assessment
Cowan-Ellsberry, Christina E.; McLachlan, Michael S.; Arnot, Jon A.; MacLeod, Matthew; McKone, Thomas E.; Wania, Frank
2008-11-01
Fate and exposure modeling has not thus far been explicitly used in the risk profile documents prepared to evaluate the significant adverse effects of candidate chemicals for either the Stockholm Convention or the Convention on Long-Range Transboundary Air Pollution. However, we believe models have considerable potential to improve the risk profiles. Fate and exposure models are already used routinely in other similar regulatory applications to inform decisions, and they have been instrumental in building our current understanding of the fate of POP and PBT chemicals in the environment. The goal of this paper is to motivate the use of fate and exposure models in preparing risk profiles in the POP assessment procedure by providing strategies for incorporating and using models. The ways that fate and exposure models can be used to improve and inform the development of risk profiles include: (1) Benchmarking the ratio of exposure and emissions of candidate chemicals to the same ratio for known POPs, thereby opening the possibility of combining this ratio with the relative emissions and relative toxicity to arrive at a measure of relative risk. (2) Directly estimating the exposure of the environment, biota and humans to provide information to complement measurements, or where measurements are not available or are limited. (3) Identifying the key processes and chemical and/or environmental parameters that determine the exposure, thereby allowing the effective prioritization of research or measurements to improve the risk profile. (4) Predicting future time trends including how quickly exposure levels in remote areas would respond to reductions in emissions. Currently there is no standardized consensus model for use in the risk profile context. Therefore, to choose the appropriate model the risk profile developer must evaluate how appropriate an existing model is for a specific setting and whether the assumptions and input data are relevant in the context of the application.
Modeling exposure to persistent chemicals in hazard and risk assessment.
Cowan-Ellsberry, Christina E; McLachlan, Michael S; Arnot, Jon A; Macleod, Matthew; McKone, Thomas E; Wania, Frank
2009-10-01
Fate and exposure modeling has not, thus far, been explicitly used in the risk profile documents prepared for evaluating the significant adverse effect of candidate chemicals for either the Stockholm Convention or the Convention on Long-Range Transboundary Air Pollution. However, we believe models have considerable potential to improve the risk profiles. Fate and exposure models are already used routinely in other similar regulatory applications to inform decisions, and they have been instrumental in building our current understanding of the fate of persistent organic pollutants (POP) and persistent, bioaccumulative, and toxic (PBT) chemicals in the environment. The goal of this publication is to motivate the use of fate and exposure models in preparing risk profiles in the POP assessment procedure by providing strategies for incorporating and using models. The ways that fate and exposure models can be used to improve and inform the development of risk profiles include 1) benchmarking the ratio of exposure and emissions of candidate chemicals to the same ratio for known POPs, thereby opening the possibility of combining this ratio with the relative emissions and relative toxicity to arrive at a measure of relative risk; 2) directly estimating the exposure of the environment, biota, and humans to provide information to complement measurements or where measurements are not available or are limited; 3) identifying the key processes and chemical or environmental parameters that determine the exposure, thereby allowing the effective prioritization of research or measurements to improve the risk profile; and 4) forecasting future time trends, including how quickly exposure levels in remote areas would respond to reductions in emissions. Currently there is no standardized consensus model for use in the risk profile context. Therefore, to choose the appropriate model the risk profile developer must evaluate how appropriate an existing model is for a specific setting and
NASA Astrophysics Data System (ADS)
Vidar Vangelsten, Bjørn; Fornes, Petter; Cepeda, Jose Mauricio; Ekseth, Kristine Helene; Eidsvig, Unni; Ormukov, Cholponbek
2015-04-01
Landslides are a significant threat to human life and the built environment in many parts of Central Asia. To improve understanding of the magnitude of the threat and propose appropriate risk mitigation measures, landslide hazard mapping is needed both at regional and local level. Many different approaches for landslide hazard mapping exist depending on the scale and purpose of the analysis and what input data are available. This paper presents a probabilistic local scale landslide hazard mapping methodology for rainfall triggered landslides, adapted to the relatively dry climate found in South-Western Kyrgyzstan. The GIS based approach makes use of data on topography, geology, land use and soil characteristics to assess landslide susceptibility. Together with a selected rainfall scenario, these data are inserted into a triggering model based on an infinite slope formulation considering pore pressure and suction effects for unsaturated soils. A statistical model based on local landslide data has been developed to estimate landslide run-out. The model links the spatial extension of the landslide to land use and geological features. The model is tested and validated for the town of Suluktu in the Ferghana Valley in South-West Kyrgyzstan. Landslide hazard is estimated for the urban area and the surrounding hillsides. The case makes use of a range of data from different sources, both remote sensing data and in-situ data. Public global data sources are mixed with case specific data obtained from field work. The different data and models have various degrees of uncertainty. To account for this, the hazard model has been inserted into a Monte Carlo simulation framework to produce a probabilistic landslide hazard map identifying areas with high landslide exposure. The research leading to these results has received funding from the European Commission's Seventh Framework Programme [FP7/2007-2013], under grant agreement n° 312972 "Framework to integrate Space-based and in
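The triggering model the abstract describes, an infinite-slope formulation with pore pressure inside a Monte Carlo framework, can be sketched as follows. The factor-of-safety expression is the standard textbook infinite-slope form (without the unsaturated suction term the authors add), and every parameter value and distribution below is an illustrative assumption, not data from the Suluktu case.

```python
import math
import random

def infinite_slope_fs(c, phi_deg, gamma, z, beta_deg, u):
    """Factor of safety for an infinite slope with pore pressure u (kPa).
    c: effective cohesion (kPa); phi: friction angle (deg);
    gamma: soil unit weight (kN/m^3); z: slip depth (m); beta: slope angle (deg)."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    tau = gamma * z * math.sin(beta) * math.cos(beta)   # driving shear stress
    sigma_n = gamma * z * math.cos(beta) ** 2           # total normal stress
    return (c + (sigma_n - u) * math.tan(phi)) / tau

random.seed(1)
# Monte Carlo over uncertain soil strength (illustrative distributions):
# the probability of failure is the fraction of samples with FS < 1.
N = 20000
fails = 0
for _ in range(N):
    c = max(random.gauss(5.0, 2.0), 0.0)    # kPa
    phi = random.gauss(30.0, 3.0)           # degrees
    if infinite_slope_fs(c, phi, gamma=19.0, z=2.0, beta_deg=35.0, u=10.0) < 1.0:
        fails += 1
p_failure = fails / N
print(round(p_failure, 3))
```

Mapping `p_failure` cell by cell over a DEM-derived grid of slope angles is what turns this point calculation into a probabilistic hazard map.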
Early hominin limb proportions.
Richmond, Brian G; Aiello, Leslie C; Wood, Bernard A
2002-10-01
Recent analyses and new fossil discoveries suggest that the evolution of hominin limb length proportions is complex, with evolutionary reversals and a decoupling of proportions within and between limbs. This study takes into account intraspecific variation to test whether or not the limb proportions of four early hominin associated skeletons (AL 288-1, OH 62, BOU-VP-12/1, and KNM-WT 15000) can be considered to be significantly different from one another. Exact randomization methods were used to compare the differences between pairs of fossil skeletons to the differences observed between all possible pairs of individuals within large samples of Gorilla gorilla, Pan troglodytes, Pongo pygmaeus, and Homo sapiens. Although the difference in humerofemoral proportions between OH 62 and AL 288-1 does not exceed variation in the extant samples, it is rare. When humerofemoral midshaft circumferences are compared, the difference between OH 62 and AL 288-1 is fairly common in extant species. This, in combination with error associated with the limb length estimates, suggests that it may be premature to consider H. (or Australopithecus) habilis as having more apelike limb proportions than those in A. afarensis. The humerofemoral index of BOU-VP-12/1 differs significantly from both OH 62 and AL 288-1, but not from KNM-WT 15000. Published length estimates, if correct, suggest that the relative forearm length of BOU-VP-12/1 is unique among hominins, exceeding those of the African apes and resembling the proportions in Pongo. Evidence that A. afarensis exhibited a less apelike upper:lower limb design than A. africanus (and possibly H. habilis) suggests that, if A. afarensis is broadly ancestral to A. africanus, the latter did not simply inherit primitive morphology associated with arboreality, but is derived in this regard. The fact that the limb proportions of OH 62 (and possibly KNM-ER 3735) are no more humanlike than those of AL 288-1 underscores the primitive body design of H
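The exact randomization logic above reduces to a simple question: how often does a difference as large as the one between two fossils occur among all possible pairs drawn from an extant reference sample? A minimal sketch, using made-up humerofemoral indices rather than real Homo sapiens measurements:

```python
from itertools import combinations

def pairwise_exceedance(sample, fossil_diff):
    """Fraction of all within-sample pairs whose absolute difference in the
    index is at least the difference observed between two fossil skeletons.
    A small fraction means the fossil pair is rare relative to the sample."""
    pairs = list(combinations(sample, 2))
    n_ge = sum(1 for a, b in pairs if abs(a - b) >= fossil_diff)
    return n_ge / len(pairs)

# Illustrative humerofemoral indices only (not real comparative data).
homo_sample = [71, 72, 72, 73, 74, 74, 75, 76, 77, 78]
p = pairwise_exceedance(homo_sample, fossil_diff=10.0)
```

Running the same comparison across Gorilla, Pan, Pongo, and Homo samples, as the study does, shows whether any living species routinely contains individual pairs as different as the two fossils.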
The Framework of a Coastal Hazards Model - A Tool for Predicting the Impact of Severe Storms
Barnard, Patrick L.; O'Reilly, Bill; van Ormondt, Maarten; Elias, Edwin; Ruggiero, Peter; Erikson, Li H.; Hapke, Cheryl; Collins, Brian D.; Guza, Robert T.; Adams, Peter N.; Thomas, Julie
2009-01-01
The U.S. Geological Survey (USGS) Multi-Hazards Demonstration Project in Southern California (Jones and others, 2007) is a five-year project (FY2007-FY2011) integrating multiple USGS research activities with the needs of external partners, such as emergency managers and land-use planners, to produce products and information that can be used to create more disaster-resilient communities. The hazards being evaluated include earthquakes, landslides, floods, tsunamis, wildfires, and coastal hazards. For the Coastal Hazards Task of the Multi-Hazards Demonstration Project in Southern California, the USGS is leading the development of a modeling system for forecasting the impact of winter storms threatening the entire Southern California shoreline from Pt. Conception to the Mexican border. The modeling system, run in real-time or with prescribed scenarios, will incorporate atmospheric information (that is, wind and pressure fields) with a suite of state-of-the-art physical process models (that is, tide, surge, and wave) to enable detailed prediction of currents, wave height, wave runup, and total water levels. Additional research-grade predictions of coastal flooding, inundation, erosion, and cliff failure will also be performed. Initial model testing, performance evaluation, and product development will be focused on a severe winter-storm scenario developed in collaboration with the Winter Storm Working Group of the USGS Multi-Hazards Demonstration Project in Southern California. Additional offline model runs and products will include coastal-hazard hindcasts of selected historical winter storms, as well as additional severe winter-storm simulations based on statistical analyses of historical wave and water-level data. The coastal-hazards model design will also be appropriate for simulating the impact of storms under various sea level rise and climate-change scenarios. The operational capabilities of this modeling system are designed to provide emergency planners with
Modelling in infectious diseases: between haphazard and hazard.
Neuberger, A; Paul, M; Nizar, A; Raoult, D
2013-11-01
Modelling of infectious diseases is difficult, if not impossible. No epidemic has ever been truly predicted, rather than being merely noticed when it was already ongoing. Modelling the future course of an epidemic is similarly tenuous, as exemplified by ominous predictions during the last influenza pandemic leading to exaggerated national responses. The continuous evolution of microorganisms, the introduction of new pathogens into the human population and the interactions of a specific pathogen with the environment, vectors, intermediate hosts, reservoir animals and other microorganisms are far too complex to be predictable. Our environment is changing at an unprecedented rate, and human-related factors, which are essential components of any epidemic prediction model, are difficult to foresee in our increasingly dynamic societies. Any epidemiological model is, by definition, an abstraction of the real world, and fundamental assumptions and simplifications are therefore required. Indicator-based surveillance methods and, more recently, Internet biosurveillance systems can detect and monitor outbreaks of infections more rapidly and accurately than ever before. As the interactions between microorganisms, humans and the environment are too numerous and unexpected to be accurately represented in a mathematical model, we argue that prediction and model-based management of epidemics in their early phase are quite unlikely to become the norm. PMID:23879334
NASA Astrophysics Data System (ADS)
Paprotny, Dominik; Morales Nápoles, Oswaldo
2016-04-01
Low-resolution hydrological models are often applied to calculate extreme river discharges and delimit flood zones at continental and global scales. However, the computational expense is very large and often limits the extent and depth of such studies. Here, we present a quick yet similarly accurate procedure for flood hazard assessment in Europe. Firstly, a statistical model based on Bayesian Networks is used. It describes the joint distribution of annual maxima of daily discharges of European rivers with variables describing the geographical characteristics of their catchments. It was quantified with 75,000 station-years of river discharge, as well as climate, terrain, and land-use data. The model's predictions of average annual maxima, or of discharges with certain return periods, are of similar performance to physical rainfall-runoff models applied at continental scale. A database of discharge scenarios - return periods under present and future climate - was prepared for the majority of European rivers. Secondly, those scenarios were used as boundary conditions for the one-dimensional (1D) hydrodynamic model SOBEK. Utilizing 1D instead of 2D modelling saved computational time, yet gave satisfactory results. The resulting pan-European flood map was contrasted with several local high-resolution studies. The comparison shows that, overall, the methods presented here give similar or better alignment with local studies than the previously released pan-European flood map.
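The link between annual discharge maxima and return periods can be illustrated with a classical extreme-value fit. The sketch below uses a method-of-moments Gumbel fit, a common textbook choice for annual maxima; it is not the Bayesian Network model of the abstract, and the synthetic discharge record is purely illustrative.

```python
import math
import random
import statistics

def gumbel_fit(annual_maxima):
    """Method-of-moments fit of a Gumbel distribution to annual maxima.
    Returns the location (mu) and scale (beta) parameters."""
    m = statistics.mean(annual_maxima)
    s = statistics.stdev(annual_maxima)
    beta = s * math.sqrt(6) / math.pi
    mu = m - 0.5772 * beta          # 0.5772 = Euler-Mascheroni constant
    return mu, beta

def return_level(mu, beta, T):
    """Discharge exceeded on average once every T years."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

random.seed(0)
# Synthetic 75-year record of annual maximum discharges (m^3/s), echoing
# the "station-years" idea; values are invented for the example.
record = [random.gauss(800.0, 150.0) for _ in range(75)]
mu, beta = gumbel_fit(record)
q100 = return_level(mu, beta, 100)   # 100-year flood estimate
```

Return levels such as `q100` are exactly the kind of scenario discharges that can then drive a 1D hydrodynamic model as a boundary condition.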
Comparing the European (SHARE) and the reference Italian seismic hazard models
NASA Astrophysics Data System (ADS)
Visini, Francesco; Meletti, Carlo; D'Amico, Vera; Rovida, Andrea; Stucchi, Massimiliano
2016-04-01
A probabilistic seismic hazard evaluation for Europe has recently been released by the SHARE project (www.share-eu.org; Giardini et al., 2013; Woessner et al., 2015). A comparison between the SHARE results for Italy and the official Italian seismic hazard model (MPS04, Stucchi et al., 2011), currently adopted by the building code, has been carried out to identify the main input elements that produce the differences between the two models. The SHARE model shows increased expected values (up to 70%) with respect to the MPS04 model for PGA with 10% probability of exceedance in 50 years. However, looking in detail at all output parameters of both models, we observe that for spectral periods greater than 0.3 s the reference PSHA for Italy proposes higher values than the SHARE model over many large areas. This behaviour is mainly driven by the adoption of recent ground-motion prediction equations (GMPEs) that, with respect to the older GMPEs used in MPS04, estimate higher values for PGA and for accelerations at periods below 0.3 s, and lower values at longer periods. Another important set of tests consisted of analyzing separately the PSHA results obtained with the three source models adopted in SHARE (i.e., area sources, fault sources with background, and a refined smoothed-seismicity model), whereas MPS04 used area sources only. Results show that, besides the strong impact of the GMPEs, the differences in the seismic hazard estimates among the three source models are relevant; in particular, for some selected test sites, the fault-based model returns the lowest estimates of seismic hazard. This result raises questions on the completeness of the fault database, the parameterization of the faults and the assessment of their activity rates, as well as on the impact of the threshold magnitude between faults and background. Giardini D. et al., 2013. Seismic Hazard Harmonization in Europe (SHARE): Online Data Resource, doi:10.12686/SED-00000001-SHARE. Stucchi M. et al., 2011. Seismic Hazard
NASA Technical Reports Server (NTRS)
Roberts, Dar A.; Church, Richard; Ustin, Susan L.; Brass, James A. (Technical Monitor)
2001-01-01
Large urban wildfires throughout southern California have caused billions of dollars of damage and significant loss of life over the last few decades. Rapid urban growth along the wildland interface, high fuel loads and a potential increase in the frequency of large fires due to climatic change suggest that the problem will worsen in the future. Improved fire spread prediction and reduced uncertainty in assessing fire hazard would be significant, both economically and socially. Current problems in the modeling of fire spread include the role of plant community differences, spatial heterogeneity in fuels and spatio-temporal changes in fuels. In this research, we evaluated the potential of Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Airborne Synthetic Aperture Radar (AIRSAR) data for providing improved maps of wildfire fuel properties. Analysis concentrated in two areas of Southern California, the Santa Monica Mountains and Santa Barbara Front Range. Wildfire fuel information can be divided into four basic categories: fuel type, fuel load (live green and woody biomass), fuel moisture and fuel condition (live vs senesced fuels). To map fuel type, AVIRIS data were used to map vegetation species using Multiple Endmember Spectral Mixture Analysis (MESMA) and Binary Decision Trees. Green live biomass and canopy moisture were mapped using AVIRIS through analysis of the 980 nm liquid water absorption feature and compared to alternate measures of moisture and field measurements. Woody biomass was mapped using L and P band cross polarimetric data acquired in 1998 and 1999. Fuel condition was mapped using spectral mixture analysis to map green vegetation (green leaves), nonphotosynthetic vegetation (NPV; stems, wood and litter), shade and soil. Summaries describing the potential of hyperspectral and SAR data for fuel mapping are provided by Roberts et al. and Dennison et al. To utilize remotely sensed data to assess fire hazard, fuel-type maps were translated
Multiwire proportional chamber development
NASA Technical Reports Server (NTRS)
Doolittle, R. F.; Pollvogt, U.; Eskovitz, A. J.
1973-01-01
The development of large-area multiwire proportional chambers (MWPCs), to be used as high-resolution spatial detectors in cosmic ray experiments, is described. A readout system was developed which uses a directly coupled, lumped-element delay line whose characteristics are independent of the MWPC design. A complete analysis of the delay line and the readout electronics shows that a spatial resolution of about 0.1 mm can be reached with the MWPC operating in the strictly proportional region. This was confirmed by measurements with a small MWPC and Fe-55 X-rays. A simplified analysis was carried out to estimate the theoretical limit of spatial resolution due to delta rays, spread of the discharge along the anode wire, and inclined trajectories. To calculate the gas gain of MWPCs of different geometrical configurations, a method was developed based on knowledge of the first Townsend coefficient of the chamber gas.
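The gas-gain calculation the abstract mentions rests on the first Townsend coefficient; a closely related, widely used shortcut is the Diethorn parameterization, which integrates that coefficient analytically for a cylindrical geometry. This is a textbook result, not the paper's exact method, and the gas constants below (typical of P10 argon-methane) are assumptions for illustration.

```python
import math

def diethorn_gain(V, a_cm, b_cm, p_atm, delta_V=23.6, K=4.8e4):
    """Diethorn estimate of the gas gain M for a cylindrical proportional
    counter.  V: anode voltage (V); a, b: anode and cathode radii (cm);
    p: gas pressure (atm).  delta_V (V) and K (V/(cm*atm)) are gas-specific
    constants; the defaults are typical P10 values, assumed here."""
    ln_ba = math.log(b_cm / a_cm)
    ln_M = (math.log(2.0) / delta_V) * (V / ln_ba) \
        * math.log(V / (K * p_atm * a_cm * ln_ba))
    return math.exp(ln_M)

# A 15-micron anode wire in a 1 cm radius tube at 2 kV, 1 atm:
M = diethorn_gain(V=2000.0, a_cm=0.0015, b_cm=1.0, p_atm=1.0)
```

The exponential dependence on anode voltage visible in `ln_M` is why operating "in the strictly proportional region" requires tight voltage stability.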
Petersen, Mark D.; Mueller, Charles S.; Moschetti, Morgan P.; Hoover, Susan M.; Rubinstein, Justin L.; Llenos, Andrea L.; Michael, Andrew J.; Ellsworth, William L.; McGarr, Arthur F.; Holland, Austin A.; Anderson, John G.
2015-01-01
The U.S. Geological Survey National Seismic Hazard Model for the conterminous United States was updated in 2014 to account for new methods, input models, and data necessary for assessing the seismic ground shaking hazard from natural (tectonic) earthquakes. The U.S. Geological Survey National Seismic Hazard Model project uses probabilistic seismic hazard analysis to quantify the rate of exceedance for earthquake ground shaking (ground motion). For the 2014 National Seismic Hazard Model assessment, the seismic hazard from potentially induced earthquakes was intentionally not considered because we had not determined how to properly treat these earthquakes for the seismic hazard analysis. The phrases “potentially induced” and “induced” are used interchangeably in this report; however, it is acknowledged that this classification is based on circumstantial evidence and scientific judgment. For the 2014 National Seismic Hazard Model update, the potentially induced earthquakes were removed from the NSHM’s earthquake catalog, and the documentation states that we would consider alternative models for including induced seismicity in a future version of the National Seismic Hazard Model. As part of the process of incorporating induced seismicity into the seismic hazard model, we evaluate the sensitivity of the seismic hazard from induced seismicity to five parts of the hazard model: (1) the earthquake catalog, (2) earthquake rates, (3) earthquake locations, (4) earthquake Mmax (maximum magnitude), and (5) earthquake ground motions. We describe alternative input models for each of the five parts that represent differences in scientific opinions on induced seismicity characteristics. In this report, however, we do not weight these input models to come up with a preferred final model. Instead, we present a sensitivity study showing uniform seismic hazard maps obtained by applying the alternative input models for induced seismicity. The final model will be released after
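The "rate of exceedance" quantified by probabilistic seismic hazard analysis converts to a probability over a design life under the usual Poisson occurrence assumption: P = 1 - exp(-lambda * t). A short sketch of that standard relation (not USGS code) shows why the familiar "10% in 50 years" hazard level corresponds to roughly a 475-year return period:

```python
import math

def prob_exceedance(annual_rate, years):
    """Poisson probability of at least one exceedance in the given window."""
    return 1.0 - math.exp(-annual_rate * years)

def rate_from_prob(prob, years):
    """Annual exceedance rate implied by a probability over a window."""
    return -math.log(1.0 - prob) / years

lam = rate_from_prob(0.10, 50)       # the standard "10% in 50 years" level
return_period = 1.0 / lam
print(round(return_period))  # 475
```

Whether induced earthquakes belong in the catalog matters precisely because they change the rate `lam` feeding this conversion, and hence the mapped ground motions.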
Contribution of physical modelling to climate-driven landslide hazard mapping: an alpine test site
NASA Astrophysics Data System (ADS)
Vandromme, R.; Desramaut, N.; Baills, A.; Hohmann, A.; Grandjean, G.; Sedan, O.; Mallet, J. P.
2012-04-01
The aim of this work is to develop a methodology for integrating climate change scenarios, and especially their precipitation component, into quantitative hazard assessment. The effects of climate change will differ depending on both the location of the site and the type of landslide considered, since mass movements can be triggered by different factors. This paper describes a methodology to address this issue and shows an application to an alpine test site. Mechanical approaches represent a solution for quantitative landslide susceptibility and hazard modeling. However, as the quantity and quality of data are generally very heterogeneous at a regional scale, it is necessary to take the uncertainty into account in the analysis. In this perspective, a new hazard modeling method was developed and integrated into a program named ALICE. This program integrates mechanical stability analysis through GIS software, taking data uncertainty into account. The method proposes a quantitative classification of landslide hazard and offers a useful tool to gain time and efficiency in hazard mapping. However, expert judgment is still necessary to finalize the maps, as it is the only way to take into account some factors influencing slope stability, such as the heterogeneity of the geological formations or the effects of anthropic interventions. To go further, the alpine test site (Barcelonnette area, France) is being used to integrate climate change scenarios into the ALICE program, and especially their precipitation component, with the help of a hydrological model (GARDENIA) and the regional climate model REMO (Jacob, 2001). From a DEM, a land-cover map, geology, geotechnical data, and so forth, the program classifies hazard zones depending on geotechnics and on different hydrological contexts varying in time. This communication, realized within the framework of the Safeland project, is supported by the European Commission under the 7th Framework Programme for Research and Technological
NASA Technical Reports Server (NTRS)
Weisskopf, M. C.
1979-01-01
An Uhuru class Ar-CO2 gas filled proportional counter sealed with a 1.5 mil beryllium window and sensitive to X-rays in the energy bandwidth from 1.5 to 22 keV is presented. This device is coaligned with the X-ray telescope aboard the Einstein Observatory and takes data as a normal part of the Observatory operations.
Fitting additive hazards models for case-cohort studies: a multiple imputation approach.
Jung, Jinhyouk; Harel, Ofer; Kang, Sangwook
2016-07-30
In this paper, we consider fitting semiparametric additive hazards models for case-cohort studies using a multiple imputation approach. In a case-cohort study, main exposure variables are measured only on some selected subjects, but other covariates are often available for the whole cohort. We consider this as a special case of a covariate missing by design. We propose to employ a popular incomplete-data method, multiple imputation, for estimating the regression parameters in additive hazards models. For imputation models, an imputation modeling procedure based on rejection sampling is developed. A simple imputation model that can naturally be applied to a general missing-at-random situation is also considered and compared with the rejection sampling method via extensive simulation studies. In addition, misspecification of the imputation model is investigated. The proposed procedures are illustrated using a cancer data example. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26194861
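Whatever the imputation model, multiple imputation always ends with the same combining step: fit the analysis model on each of the m completed datasets and pool with Rubin's rules. The sketch below shows that standard pooling step only (not the additive hazards fit or the rejection-sampling imputation); the numbers are illustrative.

```python
import statistics

def rubins_rules(estimates, variances):
    """Pool m point estimates and their variances from m imputed datasets
    (Rubin, 1987).  Returns the pooled estimate and its total variance:
    within-imputation variance plus (1 + 1/m) times between-imputation
    variance."""
    m = len(estimates)
    q_bar = statistics.mean(estimates)
    w_bar = statistics.mean(variances)     # within-imputation variance
    b = statistics.variance(estimates)     # between-imputation variance
    total = w_bar + (1.0 + 1.0 / m) * b
    return q_bar, total

# Five imputed-data fits of one regression coefficient (made-up values):
est, var = rubins_rules([0.50, 0.55, 0.48, 0.52, 0.51], [0.04] * 5)
```

The between-imputation term `b` is what inflates the pooled variance to reflect uncertainty about the missing exposure values; a single imputation would understate it.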
Modeling downwind hazards after an accidental release of chlorine trifluoride
Lombardi, D.A.; Cheng, Meng-Dawn
1996-05-01
A module simulating ClF3 chemical reactions with water vapor and thermodynamic processes in the atmosphere after an accidental release has been developed. This module was linked to HGSYSTEM. Initial model runs simulate the rapid formation of HF and ClO2 after an atmospheric release of ClF3. At distances beyond the first several meters from the release point, HF and ClO2 concentrations pose a greater threat to human health than do ClF3 concentrations. For most of the simulations, ClF3 concentrations rapidly fall below the IDLH. For releases occurring in ambient conditions with low relative humidity and/or low ambient temperature, ClF3 concentrations exceed the IDLH up to almost 500 m. The performance of this model needs to be determined for the potential release scenarios that will be considered; these release scenarios are currently being developed.
Large area application of a corn hazard model. [Soviet Union
NASA Technical Reports Server (NTRS)
Ashburn, P.; Taylor, T. W. (Principal Investigator)
1981-01-01
An application test of the crop calendar portion of a corn (maize) stress indicator model developed by the early warning, crop condition assessment component of AgRISTARS was performed over the corn for grain producing regions of the U.S.S.R. during the 1980 crop year using real data. Performance of the crop calendar submodel was favorable; efficiency gains in meteorological data analysis time were on a magnitude of 85 to 90 percent.
NASA Astrophysics Data System (ADS)
Komjathy, A.; Yang, Y. M.; Meng, X.; Verkhoglyadova, O. P.; Mannucci, A. J.; Langley, R. B.
2015-12-01
Natural hazards, including earthquakes, volcanic eruptions, and tsunamis, have been significant threats to humans throughout recorded history. The Global Positioning System (GPS) satellites have become primary sensors for measuring signatures associated with such natural hazards. These signatures typically include GPS-derived seismic deformation measurements, co-seismic vertical displacements, and real-time GPS-derived ocean buoy positioning estimates. Another way to use GPS observables is to compute the ionospheric total electron content (TEC) to measure and monitor post-seismic ionospheric disturbances caused by earthquakes, volcanic eruptions, and tsunamis. Research at the University of New Brunswick (UNB) laid the foundations for modeling the three-dimensional ionosphere at NASA's Jet Propulsion Laboratory by ingesting ground- and space-based GPS measurements into the state-of-the-art Global Assimilative Ionosphere Modeling (GAIM) software. As an outcome of the UNB and NASA research, new and innovative GPS applications have been invented, including the use of ionospheric measurements to detect tiny fluctuations in the GPS signals between the spacecraft and GPS receivers caused by natural hazards occurring on or near the Earth's surface. We will show examples of early detection of ionospheric signatures generated by natural hazards using ground-based and space-borne GPS receivers. We will also discuss recent results from the U.S. Real-time Earthquake Analysis for Disaster Mitigation Network (READI) exercises utilizing our algorithms. By studying the propagation properties of ionospheric perturbations generated by natural hazards, along with applying sophisticated first-principles physics-based modeling, we are on track to develop new technologies that can potentially save human lives and minimize property damage. It is also expected that ionospheric monitoring of TEC perturbations might become an integral part of existing natural hazard warning systems.
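Computing TEC from GPS observables rests on the ionosphere being dispersive: the L1 and L2 signals are delayed differently, and the difference is proportional to the electron content along the path. The sketch below is the textbook dual-frequency pseudorange relation (ignoring inter-frequency biases and noise), not the GAIM implementation; the 3 m divergence in the example is an invented value.

```python
# Slant TEC from dual-frequency GPS pseudoranges.  The ionospheric group
# delay scales as 40.3 * TEC / f^2 (SI units), so differencing the two
# pseudoranges isolates TEC.  Hardware biases are ignored here.
F1 = 1575.42e6  # GPS L1 carrier frequency, Hz
F2 = 1227.60e6  # GPS L2 carrier frequency, Hz

def slant_tec(p1_m, p2_m):
    """Slant TEC (in TEC units, 1 TECU = 1e16 el/m^2) from L1/L2
    pseudoranges in metres."""
    tec_el_m2 = (F1**2 * F2**2) / (40.3 * (F1**2 - F2**2)) * (p2_m - p1_m)
    return tec_el_m2 / 1e16

# A 3 m ionospheric divergence between the L2 and L1 pseudoranges:
tec = slant_tec(p1_m=22000000.0, p2_m=22000003.0)
```

Hazard detection then works on the time series of such TEC values: a tsunami- or quake-driven acoustic-gravity wave shows up as a small oscillatory perturbation riding on the slowly varying background TEC.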
Modelling clustering of natural hazard phenomena and the effect on re/insurance loss perspectives
NASA Astrophysics Data System (ADS)
Khare, S.; Bonazzi, A.; Mitas, C.; Jewson, S.
2015-06-01
In this paper, we present a conceptual framework for modelling clustered natural hazards that makes use of historical event data as a starting point. We review a methodology for modelling clustered natural hazard processes called Poisson mixtures. This methodology is suited to the application we have in mind as it naturally models processes that yield cross-event correlation (unlike homogeneous Poisson models), has a high degree of tunability to the problem at hand and is analytically tractable. Using European windstorm data as an example, we provide evidence that the historical data show strong evidence of clustering. We then develop Poisson and Clustered simulation models for the data, demonstrating clearly the superiority of the Clustered model which we have implemented using the Poisson mixture approach. We then discuss the implications of including clustering in models of prices of catXL contracts, one of the most commonly used mechanisms for transferring risk between primary insurers and reinsurers. This paper provides a number of unique insights into the impact clustering has on modelled catXL contract prices. The simple modelling example in this paper provides a clear and insightful starting point for practitioners tackling more complex natural hazard risk problems.
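The key property of the Poisson mixture approach above is that letting each period's event rate be random induces cross-event correlation, visible as overdispersion: the variance of annual counts exceeds the mean, unlike a homogeneous Poisson process. A minimal sketch with a gamma mixing distribution (which makes the counts negative binomial); all rates are illustrative, not fitted windstorm values:

```python
import random
import statistics

random.seed(42)

def poisson(lam):
    """Knuth's algorithm for a Poisson draw (the stdlib has no Poisson
    sampler)."""
    L, k, p = pow(2.718281828459045, -lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# Clustered counts: each year's storm rate is itself gamma-distributed
# (a Poisson mixture), versus a plain Poisson with the same mean of 4.
mixed = [poisson(random.gammavariate(2.0, 2.0)) for _ in range(5000)]
plain = [poisson(4.0) for _ in range(5000)]

disp_mixed = statistics.variance(mixed) / statistics.mean(mixed)  # ~3
disp_plain = statistics.variance(plain) / statistics.mean(plain)  # ~1
```

In a catXL pricing context, that extra variance is what matters: a clustered model puts more probability on multi-event years, which drives both expected reinstatement income and tail loadings.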
DEVELOPMENT AND ANALYSIS OF AIR QUALITY MODELING SIMULATIONS FOR HAZARDOUS AIR POLLUTANTS
The concentrations of five hazardous air pollutants were simulated using the Community Multi Scale Air Quality (CMAQ) modeling system. Annual simulations were performed over the continental United States for the entire year of 2001 to support human exposure estimates. Results a...
NASA Astrophysics Data System (ADS)
Gochis, E. E.; Lechner, H. N.; Brill, K. A.; Lerner, G.; Ramos, E.
2014-12-01
Graduate students at Michigan Technological University developed the "Landslides!" activity to engage middle & high school students participating in summer engineering programs in a hands-on exploration of geologic engineering and STEM (Science, Technology, Engineering and Math) principles. The inquiry-based lesson plan is aligned to the Next Generation Science Standards and is appropriate for 6th-12th grade classrooms. During the activity, students focus on the factors contributing to landslide development and the engineering practices used to mitigate slope stability hazards. Students begin by comparing different soil types and by developing predictions of how sediment type may contribute to differences in slope stability. Working in groups, students then build tabletop hill-slope models from the various materials in order to engage in evidence-based reasoning and test their predictions by adding groundwater until each group's modeled slope fails. Lastly, students elaborate on their understanding of landslides by designing 'engineering solutions' to mitigate the hazards observed in each model. Post-evaluations from students demonstrate that they enjoyed the hands-on nature of the activity and the application of engineering principles to mitigate a modeled natural hazard.
Prediction of earthquake hazard by hidden Markov model (around Bilecik, NW Turkey)
NASA Astrophysics Data System (ADS)
Can, Ceren; Ergun, Gul; Gokceoglu, Candan
2014-09-01
Earthquakes are one of the most important natural hazards to be evaluated carefully in engineering projects, due to their severely damaging effects on human life and human-made structures. The hazard of an earthquake is defined by several approaches, from which earthquake parameters such as the peak ground acceleration occurring in the focus area can be determined. In an earthquake-prone area, the identification of seismicity patterns is an important task in assessing seismic activity and evaluating the risk of damage and loss accompanying an earthquake occurrence. As a powerful and flexible framework to characterize temporal seismicity changes and reveal unexpected patterns, the Poisson hidden Markov model provides a better understanding of the nature of earthquakes. In this paper, a Poisson hidden Markov model is used to predict the earthquake hazard around Bilecik (NW Turkey), an area chosen for its important geographic location. Bilecik is in close proximity to the North Anatolian Fault Zone and situated between Ankara and Istanbul, the two biggest cities of Turkey. Consequently, major highways and railroads pass through the area, and many engineering structures are being constructed there. The annual frequencies of earthquakes that occurred within a radius of 100 km centered on Bilecik, from January 1900 to December 2012, with magnitudes (M) of at least 4.0, are modeled using a Poisson-HMM. The hazards for the next 35 years, from 2013 to 2047, around the area are obtained from the model by forecasting the annual frequencies of M ≥ 4 earthquakes.
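A minimal sketch of a Poisson hidden Markov model of annual earthquake counts, assuming two hypothetical seismicity regimes; the transition probabilities and per-state rates below are invented for illustration and are not the fitted Bilecik values.

```python
import random

random.seed(1)

# Hypothetical 2-state Poisson-HMM: a "quiet" and an "active" regime.
P = [[0.9, 0.1],
     [0.3, 0.7]]
rates = [3.0, 10.0]   # expected M>=4 events per year in each regime

def poisson_sample(lam):
    # Knuth's algorithm for a Poisson draw.
    L, k, p = 2.718281828459045 ** (-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# Simulate annual counts over 113 years (cf. the 1900-2012 catalogue window).
state, counts = 0, []
for _ in range(113):
    counts.append(poisson_sample(rates[state]))
    state = 0 if random.random() < P[state][0] else 1

# Long-run forecast: the stationary distribution pi of the Markov chain
# solves pi = pi P; for a 2-state chain it has a closed form.
p01, p10 = P[0][1], P[1][0]
pi0 = p10 / (p01 + p10)
expected_annual = pi0 * rates[0] + (1 - pi0) * rates[1]
print(round(expected_annual, 2))
```

Forecasting future annual frequencies, as in the abstract, amounts to propagating the state distribution forward and mixing the per-state Poisson rates.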
A Time-Dependent Probabilistic Seismic Hazard Model For The Central Apennines (Italy)
NASA Astrophysics Data System (ADS)
Akinci, A.; Galadini, F.; Pantosti, D.; Petersen, M.; Malagnini, L.
2004-12-01
Earthquake hazard in the Central Apennines, Italy, has been investigated using time-independent (simple Poissonian) and time-dependent (renewal) probabilistic models. We developed a hazard model that defines the sources for potential earthquakes and the earthquake recurrence relations. Both the characteristic and the floating earthquake hypotheses are used in the model for the Central Apennines faults (M>5.9). The models for each fault segment are developed based on recent geological and geophysical studies, as well as historical earthquakes. Historical seismicity, the active faulting framework and the inferred seismogenic behavior (expressed in terms of slip rates, recurrence intervals and elapsed times) constitute the main quantitative information used in the model assignment. We calculate the background hazard from Mw 4.6-5.9 earthquakes using the historical CPTI04 catalog (Working Group, 2004) and obtain the a-value distribution over the study area; these earthquakes occur in areas where they cannot be assigned to a particular fault, so their recurrence is accounted for through the historic occurrence of earthquakes by calculating magnitude-frequency distributions. We found good agreement between the expected earthquake rates from the historical earthquake catalog and the earthquake source model. The probabilities are obtained from time-dependent models characterized by a Brownian Passage Time function on the recurrence interval, with an aperiodicity of 0.5. Earthquake hazard is quantified in terms of peak ground acceleration and spectral accelerations for natural periods of 0.2 and 1.0 seconds. The ground motions are determined for rock conditions. We have used the attenuation relationships obtained for the Apennines by Malagnini et al. (2000) together with the relationships of Sabetta and Pugliese (1996) and Ambraseys et al. (1996) for the Italian and European regions, respectively. Generally, the time-dependent hazard is increased and the peaks appear to shift to the ESE
Francq, Bernard G; Cartiaux, Olivier
2016-09-10
Resecting bone tumors requires good cutting accuracy to reduce the occurrence of local recurrence. This issue is considerably reduced with navigated technology. The estimation of extreme proportions is challenging, especially with small or moderate sample sizes. When no success is observed, the commonly used binomial proportion confidence interval is not suitable, while the rule of three provides a simple solution. Unfortunately, these approaches are unable to differentiate between different unobserved events. Different delta methods and bootstrap procedures are compared in univariate and linear mixed models with simulations and real data under a normality assumption. The delta method on the z-score and the parametric bootstrap provide similar results, but the delta method requires the estimation of the covariance matrix of the estimates. In mixed models, the observed Fisher information matrix with unbounded variance components should be preferred. The parametric bootstrap, easier to apply, outperforms the delta method for larger sample sizes but may be time-consuming. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26990871
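For the "rule of three" mentioned above, here is a small sketch (standard textbook formulas, not the paper's delta-method or bootstrap code) comparing the approximate 95% upper bound 3/n with the exact one-sided bound obtained by solving (1 - p)^n = alpha:

```python
# Rule of three: with 0 events observed in n trials, an approximate 95%
# upper confidence bound on the event probability is 3/n. The exact
# one-sided bound solves (1 - p)^n = alpha, i.e. p = 1 - alpha**(1/n).
def rule_of_three(n):
    return 3.0 / n

def exact_upper_bound(n, alpha=0.05):
    return 1.0 - alpha ** (1.0 / n)

for n in (30, 100, 300):
    print(n, round(rule_of_three(n), 4), round(exact_upper_bound(n), 4))
```

The two bounds agree closely even for moderate n, which is why the rule of three is a convenient stand-in when no failures are observed.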
NASA Astrophysics Data System (ADS)
Visini, F.; Meletti, C.; D'Amico, V.; Rovida, A.; Stucchi, M.
2014-12-01
The recent release of the probabilistic seismic hazard assessment (PSHA) model for Europe by the SHARE project (Giardini et al., 2013, www.share-eu.org) raises questions about the comparison between its results for Italy and the official Italian seismic hazard model (MPS04; Stucchi et al., 2011) adopted by the building code. The goal of such a comparison is to identify the main input elements that produce the differences between the two models. It is worth remarking that each PSHA is realized with the data and knowledge available at the time of its release. Therefore, even if a new model provides estimates significantly different from previous ones, that does not mean that the old models are wrong, but rather that the current knowledge has changed and improved substantially. Looking at the hazard maps with a 10% probability of exceedance in 50 years (adopted as the standard input in the Italian building code), the SHARE model shows increased expected values with respect to the MPS04 model, up to 70% for PGA. However, looking in detail at all the output parameters of both models, we observe a different behaviour for other spectral accelerations. In fact, for spectral periods greater than 0.3 s, the current reference PSHA for Italy proposes higher values than the SHARE model over many large areas. This observation suggests that this behaviour may not be due to a different definition of seismic sources and the relevant seismicity rates; it mainly seems to result from the adoption of recent ground-motion prediction equations (GMPEs) that estimate higher values for PGA and for accelerations with periods below 0.3 s, and lower values for longer periods, with respect to older GMPEs. Another important set of tests consisted of analysing separately the PSHA results obtained with the three source models adopted in SHARE (i.e., area sources, fault sources with background, and a refined smoothed seismicity model), whereas MPS04 only uses area sources. Results seem to confirm the
Global river flood hazard maps: hydraulic modelling methods and appropriate uses
NASA Astrophysics Data System (ADS)
Townend, Samuel; Smith, Helen; Molloy, James
2014-05-01
Flood hazard is not well understood or documented in many parts of the world. Consequently, the (re-)insurance sector now needs to better understand where the potential for considerable river flooding aligns with significant exposure. For example, international manufacturing companies are often attracted to countries with emerging economies, meaning that events such as the 2011 Thailand floods have resulted in many multinational businesses with assets in these regions incurring large, unexpected losses. This contribution addresses and critically evaluates the hydraulic methods employed to develop a consistent, global-scale set of river flood hazard maps, used to fill the knowledge gap outlined above. The basis of the modelling approach is an innovative, bespoke 1D/2D hydraulic model (RFlow), which has been used to model a global river network of over 5.3 million kilometres. Estimated flood peaks at each of these model nodes are determined using an empirically based rainfall-runoff approach linking design rainfall to design river flood magnitudes. The hydraulic model is used to determine extents and depths of floodplain inundation following river bank overflow. From this, deterministic flood hazard maps are calculated for several design return periods between 20 and 1,500 years. Firstly, we will discuss the rationale behind the hydraulic modelling methods and inputs chosen to produce a consistent global-scale river flood hazard map. This will highlight how a model designed to work with global datasets can be more favourable for hydraulic modelling at the global scale, and why innovative techniques customised for broad-scale use are preferable to modifying existing hydraulic models. Similarly, the advantages and disadvantages of both 1D and 2D modelling will be explored and balanced against the time, computer and human resources available, particularly when using a Digital Surface Model at 30 m resolution. Finally, we will suggest some
Rainfall Hazards Prevention based on a Local Model Forecasting System
NASA Astrophysics Data System (ADS)
Buendia, F.; Ojeda, B.; Buendia Moya, G.; Tarquis, A. M.; Andina, D.
2009-04-01
Rainfall is one of the most important events in human life and society. Some rainfall phenomena, like floods or hailstorms, are a threat to agriculture, business and even life. Although meteorological observatories have methods to detect and warn about these kinds of events, the prediction techniques based on synoptic measurements still need to be improved to achieve feasible medium-term forecasts. Any deviation in the measurements or in the model description makes the forecast diverge in time from the real evolution of the atmosphere. In this paper, advances in a local rainfall forecasting system based on time-series estimation with General Regression Neural Networks are presented. The system is introduced, explaining the measurements, the methodology and the current state of development. The aim of the work is to provide a criterion complementary to current forecast systems, based on daily observation and tracking of the atmosphere over a certain place.
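For time-series estimation, a General Regression Neural Network reduces to Nadaraya-Watson kernel regression over lagged inputs. The toy series and bandwidth below are invented for illustration; this is a sketch of the technique, not the authors' system.

```python
import math

# Minimal GRNN (Nadaraya-Watson kernel regression) for one-step-ahead
# time-series estimation: training pairs map a lag vector to the next
# value, and sigma is the Gaussian kernel bandwidth.
def grnn_predict(X_train, y_train, x, sigma=0.5):
    num = den = 0.0
    for xi, yi in zip(X_train, y_train):
        d2 = sum((a - b) ** 2 for a, b in zip(xi, x))
        w = math.exp(-d2 / (2.0 * sigma ** 2))
        num += w * yi
        den += w
    return num / den

# Toy 'rainfall' series: embed with 2 lags and predict the next value.
series = [0.0, 1.0, 2.0, 1.0, 0.0, 1.0, 2.0, 1.0, 0.0, 1.0]
lags = 2
X = [series[i:i + lags] for i in range(len(series) - lags)]
y = [series[i + lags] for i in range(len(series) - lags)]

pred = grnn_predict(X, y, series[-lags:])  # forecast the next point
print(round(pred, 3))
```

Because the toy series is periodic, the lag pattern [0, 1] has always been followed by a value near 2, and the kernel average reproduces that.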
Building a risk-targeted regional seismic hazard model for South-East Asia
NASA Astrophysics Data System (ADS)
Woessner, J.; Nyst, M.; Seyhan, E.
2015-12-01
The last decade has tragically shown the social and economic vulnerability of countries in South-East Asia to earthquake hazard and risk. While many disaster mitigation programs and initiatives to improve societal earthquake resilience are under way with the focus on saving lives and livelihoods, the risk management sector is challenged to develop appropriate models to cope with the economic consequences and the impact on the insurance business. We present the source and ground motion model components suitable for a South-East Asia earthquake risk model covering Indonesia, Malaysia, the Philippines and the countries of Indochina. The source model builds upon refined modelling approaches to characterize 1) seismic activity on crustal faults from geologic and geodetic data, 2) seismicity along the interface of subduction zones and within the slabs, and 3) earthquakes not occurring on mapped fault structures. We elaborate on building a self-consistent rate model for the hazardous crustal fault systems (e.g. the Sumatra fault zone and the Philippine fault zone) as well as the subduction zones, and showcase some characteristics and sensitivities due to existing uncertainties in the rate and hazard space using a well-selected suite of ground motion prediction equations. Finally, we analyze the source model by quantifying the contribution by source type (e.g., subduction zone, crustal fault) to typical risk metrics (e.g., return-period losses, average annual loss) and reviewing their relative impact on various lines of business.
Neotectonic deformation models for probabilistic seismic hazard: a study in the External Dinarides
NASA Astrophysics Data System (ADS)
Kastelic, Vanja; Carafa, Michele M. C.; Visini, Francesco
2016-06-01
In Europe, common input data types for seismic hazard evaluation include earthquake catalogues, seismic zonation models and ground motion models, all with well-constrained epistemic uncertainties. In contrast, neotectonic deformation models and their related uncertainties are rarely considered in earthquake forecasting and seismic hazard studies. In this study, for the first time in Europe, we developed a seismic hazard model based exclusively on active fault and geodynamic deformation models. We applied it to the External Dinarides, a slow-deforming fold-and-thrust belt in the Central Mediterranean. The two deformation models furnish consistent long-term earthquake rates above the Mw 4.7 threshold on a latitude/longitude grid with 0.2° spacing. Results suggest that the use of deformation models is a valid alternative to empirical-statistical approaches in earthquake forecasting in slow-deforming regions of Europe. Furthermore, we show that the variability of different deformation models has a comparable effect on the peak ground motion acceleration uncertainty as do the ground motion prediction equations.
How do new fault data and models affect seismic hazard results? Examples from southeast Spain
NASA Astrophysics Data System (ADS)
Gaspar-Escribano, Jorge M.; Belén Benito, M.; Staller, Alejandra; Ruiz Barajas, Sandra; Quirós, Ligia E.
2016-04-01
In this work, we study the impact of different approaches to incorporating faults in a seismic hazard assessment analysis. Firstly, we consider two different methods to distribute the seismicity of the study area between faults and area sources, based on magnitude partitioning and on moment rate distribution. We use two recurrence models to characterize fault activity: the characteristic earthquake model and the modified Gutenberg-Richter exponential frequency-magnitude distribution. An application of the work is developed in the region of Murcia (southeastern Spain), due to the availability of fault data and because it is one of the areas of Spain with the highest seismic hazard. The parameters used to model fault sources are derived from paleoseismological and field studies obtained from the literature and online repositories. Additionally, for some significant faults only, geodetically derived slip rates are used to compute recurrence periods. The results of all the seismic hazard computations carried out using different models and data are represented in maps of expected peak ground acceleration for a return period of 475 years. Maps of coefficients of variation are presented to constrain the variability of the end results under different input models and values. Additionally, the different hazard maps obtained in this study are compared with the seismic hazard maps obtained in previous work for the entire Spanish territory and, more specifically, for the region of Murcia. This work is developed in the context of the MERISUR project (ref. CGL2013-40492-R), with funding from the Spanish Ministry of Economy and Competitiveness.
NASA Astrophysics Data System (ADS)
Chan, C. H.; Wang, Y.; Thant, M.; Maung Maung, P.; Sieh, K.
2015-12-01
We have constructed an earthquake and fault database, conducted a series of ground-shaking scenarios, and proposed seismic hazard maps for all of Myanmar and hazard curves for selected cities. Our earthquake database integrates the ISC, ISC-GEM and global ANSS Comprehensive Catalogues, and includes harmonized magnitude scales without duplicate events. Our active fault database includes active fault data from previous studies. Using the parameters from these updated databases (i.e., the Gutenberg-Richter relationship, slip rate, maximum magnitude and the elapsed time since the last events), we have determined the earthquake recurrence models of the seismogenic sources. To evaluate ground-shaking behaviour in different tectonic regimes, we conducted a series of tests by matching the modelled ground motions to the felt intensities of earthquakes. Through the case of the 1975 Bagan earthquake, we determined that the ground motion prediction equation (GMPE) of Atkinson and Boore (2003) best fits the behaviour of subduction events. Also, the 2011 Tarlay and 2012 Thabeikkyin events suggested that the GMPE of Akkar and Cagnan (2010) best fits crustal earthquakes. We thus incorporated the best-fitting GMPEs and site conditions based on Vs30 (the average shear-wave velocity down to 30 m depth) from analysis of topographic slope and microtremor array measurements to assess seismic hazard. The hazard is highest in regions close to the Sagaing Fault and along the western coast of Myanmar, as the seismic sources there produce earthquakes at short intervals and/or their last events occurred long ago. The hazard curves for the cities of Bago, Mandalay, Sagaing, Taungoo and Yangon show higher hazard for sites close to an active fault or with a low Vs30, e.g., downtown Sagaing and the Shwemawdaw Pagoda in Bago.
Proportional counter radiation camera
Borkowski, C.J.; Kopp, M.K.
1974-01-15
A gas-filled proportional counter camera that images photon emitting sources is described. A two-dimensional, position-sensitive proportional multiwire counter is provided as the detector. The counter consists of a high-voltage anode screen sandwiched between orthogonally disposed planar arrays of multiple parallel strung, resistively coupled cathode wires. Two terminals from each of the cathode arrays are connected to separate timing circuitry to obtain separate X and Y coordinate signal values from pulse shape measurements to define the position of an event within the counter arrays which may be recorded by various means for data display. The counter is further provided with a linear drift field which effectively enlarges the active gas volume of the counter and constrains the recoil electrons produced from ionizing radiation entering the counter to drift perpendicularly toward the planar detection arrays. A collimator is interposed between a subject to be imaged and the counter to transmit only the radiation from the subject which has a perpendicular trajectory with respect to the planar cathode arrays of the detector. (Official Gazette)
Mass movement hazard assessment model in the slope profile
NASA Astrophysics Data System (ADS)
Colangelo, A. C.
2003-04-01
The central aim of this work is to assess the spatial behaviour of critical depths for slope stability, and the behaviour of their correlated variables, at the soil-regolith transition along slope profiles over granite, migmatite and mica-schist parent materials in a humid tropical environment. To this end, we measured the shear strength of residual soils and regolith materials with a "Cohron Sheargraph" apparatus and evaluated the shear stress behaviour at the soil-regolith boundary along slope profiles in each referred lithology. In the limit equilibrium approach applied here, we adapt the infinite slope model to the analysis of the whole slope profile by means of a finite element solution, as in the Fellenius or Bishop methods. In our case, we assume that the potential rupture surface occurs at the soil-regolith or soil-rock boundary in the slope material. For each slice, the factor of safety was calculated considering the shear strength (cohesion and friction) of the material, the soil-regolith boundary depth, the soil moisture content, the slope gradient, the gradient of the top of the subsurface flow, and the apparent soil bulk density. The correlations showed the relative weight of the cohesion, internal friction angle, apparent bulk density and slope gradient variables with respect to the behaviour of the critical depth for different simulated soil moisture levels at the slope profile scale. Some important results concern the central role of the soil bulk density variable along the slope profile, during soil evolution and at the present day, because of the intense clay production, mainly kaolinite and gibbsite in the B and C horizons, in the humid tropical environment. An increase in soil clay content produces a fall in the friction angle and bulk density of the material, especially when some montmorillonite or illite clay is present. We have also observed, at threshold conditions, that a slight change in the soil bulk density value may drastically disturb the equilibrium of
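The per-slice factor of safety described above can be sketched with the standard infinite-slope formula; all soil parameter values below are illustrative, not values from the study.

```python
import math

# Infinite-slope factor of safety at the soil-regolith boundary:
# FS = [c' + (gamma*z*cos^2(beta) - u) * tan(phi')]
#      / [gamma*z*sin(beta)*cos(beta)]
# where u is the pore pressure and m the relative saturation of the
# failure depth z. All parameter values here are illustrative only.
def factor_of_safety(c, phi_deg, gamma, z, beta_deg, m, gamma_w=9.81):
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    u = m * gamma_w * z * math.cos(beta) ** 2        # pore pressure (kPa)
    resisting = c + (gamma * z * math.cos(beta) ** 2 - u) * math.tan(phi)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

dry = factor_of_safety(c=5.0, phi_deg=30.0, gamma=18.0, z=2.0, beta_deg=25.0, m=0.0)
wet = factor_of_safety(c=5.0, phi_deg=30.0, gamma=18.0, z=2.0, beta_deg=25.0, m=1.0)
print(round(dry, 2), round(wet, 2))  # saturation drives FS below 1 (failure)
```

Raising the simulated moisture content m lowers the effective normal stress and hence the factor of safety, which is the mechanism the profile-scale analysis explores.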
Advances in National Capabilities for Consequence Assessment Modeling of Airborne Hazards
Nasstrom, J; Sugiyama, G; Foster, K; Larsen, S; Kosovic, B; Eme, B; Walker, H; Goldstein, P; Lundquist, J; Pobanz, B; Fulton, J
2007-11-26
This paper describes ongoing advancement of airborne hazard modeling capabilities in support of multiple agencies through the National Atmospheric Release Advisory Center (NARAC) and the Interagency Atmospheric Modeling and Atmospheric Assessment Center (IMAAC). A suite of software tools developed by Lawrence Livermore National Laboratory (LLNL) and collaborating organizations includes simple stand-alone, local-scale plume modeling tools for end user's computers, Web- and Internet-based software to access advanced 3-D flow and atmospheric dispersion modeling tools and expert analysis from the national center at LLNL, and state-of-the-science high-resolution urban models and event reconstruction capabilities.
NASA Technical Reports Server (NTRS)
Wolpert, David
2004-01-01
Masked proportional routing is an improved procedure for choosing links between adjacent nodes of a network for the purpose of transporting an entity from a source node ("A") to a destination node ("B"). The entity could be, for example, a physical object to be shipped, in which case the nodes would represent waypoints and the links would represent roads or other paths between waypoints. For another example, the entity could be a message or packet of data to be transmitted from A to B, in which case the nodes could be computer-controlled switching stations and the links could be communication channels between the stations. In yet another example, an entity could represent a workpiece while links and nodes could represent, respectively, manufacturing processes and stages in the progress of the workpiece towards a finished product. More generally, the nodes could represent states of an entity and the links could represent allowed transitions of the entity. The purpose of masked proportional routing and of related prior routing procedures is to schedule transitions of entities from their initial states ("A") to their final states ("B") in such a manner as to minimize a cost or to attain some other measure of optimality or efficiency. Masked proportional routing follows a distributed (in the sense of decentralized) approach to probabilistically or deterministically choosing the links. It was developed to satisfy a need for a routing procedure that 1. Does not always choose the same link(s), even for two instances characterized by identical estimated values of associated cost functions; 2. Enables a graceful transition from one set of links to another set of links as the circumstances of operation of the network change over time; 3. Is preferably amenable to separate optimization of different portions of the network; 4. Is preferably usable in a network in which some of the routing decisions are made by one or more other procedure(s); 5. Preferably does not cause an
Schunior, A.; Zengel, A.E.; Mullenix, P.J.; Tarbell, N.J.; Howes, A.; Tassinari, M.S.
1990-10-15
Many long term survivors of childhood acute lymphoblastic leukemia have short stature, as well as craniofacial and dental abnormalities, as side effects of central nervous system prophylactic therapy. An animal model is presented to assess these adverse effects on growth. Cranial irradiation (1000 cGy) with and without prednisolone (18 mg/kg i.p.) and methotrexate (2 mg/kg i.p.) was administered to 17- and 18-day-old Sprague-Dawley male and female rats. Animals were weighed 3 times/week. Final body weight and body length were measured at 150 days of age. Femur length and craniofacial dimensions were measured directly from the bones, using calipers. For all exposed groups there was a permanent suppression of weight gain with no catch-up growth or normal adolescent growth spurt. Body length was reduced for all treated groups, as were the ratios of body weight to body length and cranial length to body length. Animals subjected to cranial irradiation exhibited microcephaly, whereas those who received a combination of radiation and chemotherapy demonstrated altered craniofacial proportions in addition to microcephaly. Changes in growth patterns and skeletal proportions exhibited sexually dimorphic characteristics. The results indicate that cranial irradiation is a major factor in the growth failure in exposed rats, but chemotherapeutic agents contribute significantly to the outcome of growth and craniofacial dimensions.
New Elements To Consider When Modeling the Hazards Associated with Botulinum Neurotoxin in Food
Mura, Ivan; Malakar, Pradeep K.; Walshaw, John; Peck, Michael W.; Barker, G. C.
2015-01-01
Botulinum neurotoxins (BoNTs) produced by the anaerobic bacterium Clostridium botulinum are the most potent biological substances known to mankind. BoNTs are the agents responsible for botulism, a rare condition affecting the neuromuscular junction and causing a spectrum of diseases ranging from mild cranial nerve palsies to acute respiratory failure and death. BoNTs are a potential biowarfare threat and a public health hazard, since outbreaks of foodborne botulism are caused by the ingestion of preformed BoNTs in food. Currently, mathematical models relating to the hazards associated with C. botulinum, which are largely empirical, make major contributions to botulinum risk assessment. Evaluated using statistical techniques, these models simulate the response of the bacterium to environmental conditions. Though empirical models have been successfully incorporated into risk assessments to support food safety decision making, this process includes significant uncertainties so that relevant decision making is frequently conservative and inflexible. Progression involves encoding into the models cellular processes at a molecular level, especially the details of the genetic and molecular machinery. This addition drives the connection between biological mechanisms and botulism risk assessment and hazard management strategies. This review brings together elements currently described in the literature that will be useful in building quantitative models of C. botulinum neurotoxin production. Subsequently, it outlines how the established form of modeling could be extended to include these new elements. Ultimately, this can offer further contributions to risk assessments to support food safety decision making. PMID:26350137
Lee, Saro; Park, Inhye
2013-09-30
Subsidence of ground caused by underground mines poses hazards to human life and property. This study analyzed ground-subsidence hazard using factors that can affect ground subsidence and a decision tree approach in a geographic information system (GIS). The study area was Taebaek, Gangwon-do, Korea, where many abandoned underground coal mines exist. Spatial data, topography, geology, and various ground-engineering data for the subsidence area were collected and compiled in a database for mapping ground-subsidence hazard (GSH). The subsidence area was randomly split 50/50 for training and validation of the models. A data-mining classification technique was applied to the GSH mapping, and decision trees were constructed using the chi-squared automatic interaction detector (CHAID) and the quick, unbiased, and efficient statistical tree (QUEST) algorithms. The frequency ratio model was also applied to the GSH mapping for comparison with the probabilistic models. The resulting GSH maps were validated using area-under-the-curve (AUC) analysis with the subsidence area data that had not been used for training the model. The highest accuracy was achieved by the decision tree model using the CHAID algorithm (94.01%), compared with the QUEST algorithm (90.37%) and the frequency ratio model (86.70%). These accuracies are higher than previously reported results for decision trees. Decision tree methods can therefore be used efficiently for GSH analysis and might be widely used for prediction of various spatial events. PMID:23702378
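The frequency ratio step and the AUC validation used in this study are simple enough to sketch. The snippet below is an illustrative simplification (the function names and toy data are ours, not the study's): the frequency ratio of a factor class is the fraction of subsidence cells in that class divided by the fraction of all cells in that class, and the AUC is evaluated with the rank-based (Mann-Whitney) formulation.

```python
def frequency_ratio(classes, labels):
    # classes: factor class id per grid cell; labels: 1 if subsidence observed
    total = len(classes)
    events = sum(labels)
    fr = {}
    for c in set(classes):
        in_class = [l for cl, l in zip(classes, labels) if cl == c]
        class_frac = len(in_class) / total          # share of all cells in class c
        event_frac = (sum(in_class) / events) if events else 0.0
        fr[c] = event_frac / class_frac if class_frac else 0.0
    return fr

def auc(scores, labels):
    # area under the ROC curve via the Mann-Whitney rank formulation
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Summing the frequency ratios of each cell's factor classes gives a hazard index, which the held-out subsidence cells then score with `auc`.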
A probabilistic tornado wind hazard model for the continental United States
Hossain, Q; Kimball, J; Mensing, R; Savy, J
1999-04-19
A probabilistic tornado wind hazard model for the continental United States (CONUS) is described. The model incorporates both aleatory (random) and epistemic uncertainties associated with quantifying the tornado wind hazard parameters. The temporal occurrence of tornadoes within the CONUS is assumed to follow a Poisson process. A spatial distribution of tornado touchdown locations is developed empirically based on the observed historical events within the CONUS. The hazard model is an areal probability model that takes into consideration the size and orientation of the facility, the length and width of the tornado damage area (idealized as a rectangle and dependent on the tornado intensity scale), wind speed variation within the damage area, tornado intensity classification errors (i.e., errors in assigning a Fujita intensity scale based on surveyed damage), and the tornado path direction. Epistemic uncertainties in describing the distributions of the aleatory variables are accounted for by using more than one distribution model to describe aleatory variations. The epistemic uncertainties are based on inputs from a panel of experts. A computer program, TORNADO, has been developed incorporating this model; features of this program are also presented.
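The Poisson occurrence assumption has a compact computational form. The sketch below (our own illustration, with made-up rates, not output of the TORNADO program) gives the probability of at least one strike in an exposure window and a way to sample touchdown counts from exponential inter-arrival times.

```python
import math
import random

def strike_probability(rate_per_year, years=1.0):
    # Poisson occurrence model: P(at least one tornado strike in the window)
    return 1.0 - math.exp(-rate_per_year * years)

def sample_touchdown_count(rate_per_year, years, rng):
    # draw a Poisson count by accumulating exponential inter-arrival times
    t, count = rng.expovariate(rate_per_year), 0
    while t <= years:
        count += 1
        t += rng.expovariate(rate_per_year)
    return count
```

For small annual rates the strike probability is close to the rate itself, which is why point-strike tornado hazard is often quoted directly as an annual frequency.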
Gated strip proportional detector
Morris, Christopher L.; Idzorek, George C.; Atencio, Leroy G.
1987-01-01
A gated strip proportional detector includes a gas tight chamber which encloses a solid ground plane, a wire anode plane, a wire gating plane, and a multiconductor cathode plane. The anode plane amplifies the amount of charge deposited in the chamber by a factor of up to 10^6. The gating plane allows only charge within a narrow strip to reach the cathode. The cathode plane collects the charge allowed to pass through the gating plane on a set of conductors perpendicular to the open-gated region. By scanning the open-gated region across the chamber and reading out the charge collected on the cathode conductors after a suitable integration time for each location of the gate, a two-dimensional image of the intensity of the ionizing radiation incident on the detector can be made.
Gated strip proportional detector
Morris, C.L.; Idzorek, G.C.; Atencio, L.G.
1985-02-19
A gated strip proportional detector includes a gas tight chamber which encloses a solid ground plane, a wire anode plane, a wire gating plane, and a multiconductor cathode plane. The anode plane amplifies the amount of charge deposited in the chamber by a factor of up to 10^6. The gating plane allows only charge within a narrow strip to reach the cathode. The cathode plane collects the charge allowed to pass through the gating plane on a set of conductors perpendicular to the open-gated region. By scanning the open-gated region across the chamber and reading out the charge collected on the cathode conductors after a suitable integration time for each location of the gate, a two-dimensional image of the intensity of the ionizing radiation incident on the detector can be made.
Load proportional safety brake
NASA Technical Reports Server (NTRS)
Cacciola, M. J.
1979-01-01
This brake is a self-energizing mechanical friction brake and is intended for use in a rotary drive system. It incorporates a torque sensor which cuts power to the power unit on any overload condition. The brake is capable of driving against an opposing load, or driving (paying out) an aiding load, in either direction of rotation. The brake also acts as a no-back device when torque is applied to the output shaft. The advantages of using this type of device are: (1) low frictional drag when driving; (2) smooth paying-out of an aiding load with no runaway danger; (3) energy absorption proportional to load; (4) no-back activates within a few degrees of output shaft rotation and resets automatically; and (5) built-in overload protection.
NASA Technical Reports Server (NTRS)
Wolpert, David H. (Inventor)
2003-01-01
Distributed approach for determining a path connecting adjacent network nodes, for probabilistically or deterministically transporting an entity with entity characteristic mu from a source node to a destination node. Each node i is directly connected to an arbitrary number J(mu) of nodes, labeled j = j1, j2, ..., jJ(mu). In a deterministic version, a J(mu)-component baseline proportion vector p(i;mu) is associated with node i. A J(mu)-component applied proportion vector p*(i;mu) is determined from p(i;mu) to preclude an entity visiting a node more than once. Third and fourth J(mu)-component vectors are computed, with components iteratively determined by Target(i;n(mu);mu)_j = alpha(mu)·Target(i;n(mu)-1;mu)_j + beta(mu)·p*(i;mu)_j and Actual(i;n(mu);mu)_j = alpha(mu)·Actual(i;n(mu)-1;mu)_j + beta(mu)·Sent(i;j'(mu);n(mu)-1;mu)_j, where n(mu) is an entity sequence index and alpha(mu) and beta(mu) are selected numbers. In one embodiment, at each node i, the node j = j'(mu) with the largest vector component difference, Target(i;n(mu);mu)_j' - Actual(i;n(mu);mu)_j', is chosen for the next link for entity transport, except in special gap circumstances, where the same link is optionally used for transporting consecutively arriving entities. The network nodes may be computer-controlled routers that switch collections of packets, frames, cells or other information units. Alternatively, the nodes may be waypoints for movement of physical items in a network or for transformation of a physical item. The nodes may be states of an entity undergoing state transitions, where allowed transitions are specified by the network and/or the destination node.
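Read as an algorithm, the Target/Actual recurrences implement deficit-based proportional routing: each arriving entity is sent down the link whose smoothed target share most exceeds the smoothed share actually sent. The sketch below is our own simplification of a single node (we assume alpha + beta = 1 so the recurrences act as exponential smoothing; the patent's gap handling and loop avoidance are omitted).

```python
def route_entities(p_star, n_entities, alpha=0.9):
    # p_star: baseline proportion vector for this node's outgoing links
    beta = 1.0 - alpha
    k = len(p_star)
    target = [0.0] * k   # smoothed desired share per link
    actual = [0.0] * k   # smoothed share actually sent per link
    counts = [0] * k
    for _ in range(n_entities):
        target = [alpha * t + beta * p for t, p in zip(target, p_star)]
        # pick the link with the largest Target - Actual deficit
        j = max(range(k), key=lambda i: target[i] - actual[i])
        sent = [1.0 if i == j else 0.0 for i in range(k)]
        actual = [alpha * a + beta * s for a, s in zip(actual, sent)]
        counts[j] += 1
    return counts
```

The deficit rule is self-correcting: a link that falls behind its target share accumulates a larger deficit and is chosen more often, so long-run link frequencies track p*.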
NASA Astrophysics Data System (ADS)
Chartier, Thomas; Scotti, Oona; Boiselet, Aurelien; Lyon-Caen, Hélène
2016-04-01
Including faults in probabilistic seismic hazard assessment tends to increase the degree of uncertainty in the results due to the intrinsically uncertain nature of the fault data. This is especially the case in the low-to-moderate seismicity regions of Europe, where slow-slipping faults are difficult to characterize. In order to better understand the key parameters that control the uncertainty in fault-related hazard computations, we propose to build an analytic tool that provides a clear link between the different components of the fault-related hazard computations and their impact on the results. This will make it possible to identify the important parameters that need to be better constrained in order to reduce the resulting uncertainty in hazard, and also to provide a more hazard-oriented strategy for collecting relevant fault parameters in the field. The tool will be illustrated through the example of the West Corinth rift fault models. Recent work performed in the gulf has shown the complexity of the normal faulting system that is accommodating the extensional deformation of the rift. A logic-tree approach is proposed to account for this complexity and the multiplicity of scientifically defendable interpretations. At the nodes of the logic tree, different options that could be considered at each step of the fault-related seismic hazard assessment will be considered. The first nodes represent the uncertainty in the geometries of the faults and their slip rates, which can derive from different data and methodologies. The subsequent node explores, for a given geometry/slip rate of faults, different earthquake rupture scenarios that may occur in the complex network of faults. The idea is to allow for the possibility of several fault segments breaking together in a single rupture scenario. To build these multiple-fault-segment scenarios, two approaches are considered: one based on simple rules (i.e. minimum distance between faults) and a second one that relies on physically
Developments in EPA's air dispersion modeling for hazardous/toxic releases
Touma, J.S.
1995-12-31
Title III of the 1990 Clean Air Act Amendments (CAAA) lists many chemicals as hazardous air pollutants and requires establishing regulations to prevent their accidental release and to minimize the consequences if any such releases occur. With the large number of potential release scenarios associated with these chemicals, there is a need for a systematic approach to applying air dispersion models to estimate impacts. Because some chemicals may form dense gas clouds upon release, and dispersion models that can simulate these releases are complex, EPA has paid particular attention to the development of modeling tools and guidance on the use of models that can address these types of releases.
Jones, Jeanne M.; Ng, Peter; Wood, Nathan J.
2014-01-01
Recent disasters such as the 2011 Tohoku, Japan, earthquake and tsunami; the 2013 Colorado floods; and the 2014 Oso, Washington, mudslide have raised awareness of catastrophic, sudden-onset hazards that arrive within minutes of the events that trigger them, such as local earthquakes or landslides. Due to the limited amount of time between generation and arrival of sudden-onset hazards, evacuations are typically self-initiated, on foot, and across the landscape (Wood and Schmidtlein, 2012). Although evacuation to naturally occurring high ground may be feasible in some vulnerable communities, evacuation modeling has demonstrated that other communities may require vertical-evacuation structures within a hazard zone, such as berms or buildings, if at-risk individuals are to survive some types of sudden-onset hazards (Wood and Schmidtlein, 2013). Researchers use both static least-cost-distance (LCD) and dynamic agent-based models to assess the pedestrian evacuation potential of vulnerable communities. Although both types of models help to understand the evacuation landscape, LCD models provide a more general overview that is independent of population distributions, which may be difficult to quantify given the dynamic spatial and temporal nature of populations (Wood and Schmidtlein, 2012). Recent LCD efforts related to local tsunami threats have focused on an anisotropic (directionally dependent) path distance modeling approach that incorporates travel directionality, multiple travel speed assumptions, and cost surfaces that reflect variations in slope and land cover (Wood and Schmidtlein, 2012, 2013). The Pedestrian Evacuation Analyst software implements this anisotropic path-distance approach for pedestrian evacuation from sudden-onset hazards, with a particular focus at this time on local tsunami threats. The model estimates evacuation potential based on elevation, direction of movement, land cover, and travel speed and creates a map showing travel times to safety (a
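The core of a least-cost-distance evacuation map is a shortest-time search from safety outward. The sketch below is a deliberate simplification (the Pedestrian Evacuation Analyst is anisotropic and works on real elevation and land-cover surfaces; the 4-connected unit-cell grid and function names here are our assumptions): Dijkstra's algorithm seeded at all safe cells yields the minimum travel time from every cell to safety.

```python
import heapq

def travel_times(grid_speed, safe_cells):
    # grid_speed[r][c]: effective walking speed (m/s) in a cell (0 = impassable);
    # cell size taken as 1 m; returns minimum time (s) from each cell to safety
    rows, cols = len(grid_speed), len(grid_speed[0])
    dist = {cell: 0.0 for cell in safe_cells}
    pq = [(0.0, cell) for cell in safe_cells]
    heapq.heapify(pq)
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid_speed[nr][nc] > 0:
                # simplification: cost to enter a cell is 1 / that cell's speed
                nd = d + 1.0 / grid_speed[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return dist
```

Slope and land-cover effects enter by scaling `grid_speed` per cell before the search, which is where the anisotropic refinements of the real tool would plug in.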
NASA Astrophysics Data System (ADS)
Darmawan, Herlan; Wibowo, Totok; Suryanto, Wiwit; Setiawan, Muhammad
2014-05-01
The Merapi eruption in 2010 was a tremendous natural disaster in Indonesia. The pyroclastic materials moved ~15 km from the summit of Merapi and destroyed many monitoring instruments. This emergency situation led the local government to evacuate more than 200,000 people who lived within 20 km of the summit of Merapi. The pyroclastic hazard map was not appropriate for this eruption scenario, because the map was based only on delineation of pyroclastic deposits from previous eruptions. Here, we propose a method to predict the pyroclastic distribution in future eruptions based on a mathematical approach. We used the Titan2D software to reproduce the pyroclastic flow of the 2010 eruption and to predict the pyroclastic distribution after the 2010 eruption. The method consists of parameterization, validation, and prediction. At least 39 models were produced to obtain the best input parameters for the 2010 eruption. Validation was done by integrating the seismic refraction method with remote sensing interpretation. The seismic refraction method provides information on pyroclastic deposit thickness, while remote sensing interpretation gives information on pyroclastic distribution. The best model shows good agreement with observations. Analysis of the bed friction parameter and construction of an eruption scenario are the most essential parts of the prediction. The bed friction analysis compared the bed friction of the 2006 and 2010 eruptions, while the eruption scenario was built by studying the historical eruptions. The results show that three villages are located in a high pyroclastic hazard area, six villages in a moderate pyroclastic hazard area, and three villages in a low pyroclastic hazard area. Those three villages are KepuhHarjo, GlagahHarjo, and Balerante. Therefore, the local government should pay particular attention to these three villages in the next eruption. This information can help the local government to make an evacuation plan for the
Data Model for Multi Hazard Risk Assessment Spatial Support Decision System
NASA Astrophysics Data System (ADS)
Andrejchenko, Vera; Bakker, Wim; van Westen, Cees
2014-05-01
The goal of the CHANGES Spatial Decision Support System is to support end-users in making decisions related to risk reduction measures for areas at risk from multiple hydro-meteorological hazards. The crucial parts in the design of the system are the user requirements, the data model, the data storage and management, and the relationships between the objects in the system. The implementation of the data model is carried out entirely with an open source database management system with a spatial extension. The web application is implemented using open source geospatial technologies, with PostGIS as the database, Python for scripting, and GeoServer and JavaScript libraries for visualization and the client-side user interface. The model can handle information from different study areas (currently, study areas from France, Romania, Italy and Poland are considered). Furthermore, the data model handles information about administrative units, projects accessible by different types of users, user-defined hazard types (floods, snow avalanches, debris flows, etc.), hazard intensity maps of different return periods, spatial probability maps, elements-at-risk maps (buildings, land parcels, linear features etc.), and economic and population vulnerability information dependent on the hazard type and the type of the element at risk, in the form of vulnerability curves. The system has an inbuilt database of vulnerability curves, but users can also add their own. Included in the model is the management of a combination of different scenarios (e.g. related to climate change, land use change or population change) and alternatives (possible risk-reduction measures), as well as data structures for saving the calculated economic or population loss or exposure per element at risk, aggregation of the loss and exposure using the administrative unit maps, and finally, producing the risk maps. The risk data can be used for cost-benefit analysis (CBA) and spatial multi-criteria evaluation (SMCE). The
Earthquake Rate Models for Evolving Induced Seismicity Hazard in the Central and Eastern US
NASA Astrophysics Data System (ADS)
Llenos, A. L.; Ellsworth, W. L.; Michael, A. J.
2015-12-01
Injection-induced earthquake rates can vary rapidly in space and time, which presents significant challenges to traditional probabilistic seismic hazard assessment methodologies that are based on a time-independent model of mainshock occurrence. To help society cope with rapidly evolving seismicity, the USGS is developing one-year hazard models for areas of induced seismicity in the central and eastern US to forecast the shaking due to all earthquakes, including aftershocks which are generally omitted from hazards assessments (Petersen et al., 2015). However, the spatial and temporal variability of the earthquake rates make them difficult to forecast even on time-scales as short as one year. An initial approach is to use the previous year's seismicity rate to forecast the next year's seismicity rate. However, in places such as northern Oklahoma the rates vary so rapidly over time that a simple linear extrapolation does not accurately forecast the future, even when the variability in the rates is modeled with simulations based on an Epidemic-Type Aftershock Sequence (ETAS) model (Ogata, JASA, 1988) to account for earthquake clustering. Instead of relying on a fixed time period for rate estimation, we explore another way to determine when the earthquake rate should be updated. This approach could also objectively identify new areas where the induced seismicity hazard model should be applied. We will estimate the background seismicity rate by optimizing a single set of ETAS aftershock triggering parameters across the most active induced seismicity zones -- Oklahoma, Guy-Greenbrier, the Raton Basin, and the Azle-Dallas-Fort Worth area -- with individual background rate parameters in each zone. The full seismicity rate, with uncertainties, can then be estimated using ETAS simulations and changes in rate can be detected by applying change point analysis in ETAS transformed time with methods already developed for Poisson processes.
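The ETAS conditional intensity referenced above (Ogata, 1988) has a compact closed form: a background rate plus an Omori-law aftershock contribution from every past event, scaled by its magnitude. The sketch below is our own minimal rendering (parameter values would come from the fitting step, which is not shown):

```python
import math

def etas_intensity(t, events, mu, k0, c, p, alpha, m0):
    # Ogata (1988) temporal ETAS:
    #   lambda(t) = mu + sum_i k0 * exp(alpha * (m_i - m0)) / (t - t_i + c)**p
    # events: list of (t_i, m_i) pairs; m0 is the reference (cutoff) magnitude
    rate = mu
    for t_i, m_i in events:
        if t_i < t:
            rate += k0 * math.exp(alpha * (m_i - m0)) / (t - t_i + c) ** p
    return rate
```

Simulating catalogs by thinning this intensity, and re-estimating the background term `mu` per zone, is the standard route to the rate forecasts with uncertainties described above.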
Hazard Ranking System and toxicological risk assessment models yield different results
Dehghani, T.; Sells, G. (CERCLA Site Assessment Div.)
1993-09-01
A major goal of the Superfund Site Assessment program is identifying hazardous waste sites that pose unacceptable risks to human health and the environment. To accomplish this, EPA developed the Hazard Ranking System (HRS), a mathematical model used to assess the relative risks associated with actual or potential releases of hazardous wastes from a site. HRS is a scoring system based on factors grouped into three categories--likelihood of release, waste characteristics and targets. Values for the factor categories are multiplied, then normalized to 100 points to obtain a pathway score. Four pathways--groundwater, surface water, air migration and soil exposure--are evaluated and scored. The final HRS score is obtained by combining pathway scores using a root-mean-square method. HRS is intended to be a screening tool for measuring relative, rather than absolute, risk. The Superfund site assessment program usually requires at least two studies of a potential hazardous waste site before it is proposed for listing on the NPL. The initial study, or preliminary assessment (PA), is a limited-scope evaluation based on available historical information and data that can be gathered readily during a site reconnaissance.
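The HRS arithmetic described above (multiply the factor-category values, normalize to 100 points, combine pathways by root-mean-square) can be sketched directly. This is a schematic illustration only, not the regulatory HRS: the real rule fixes specific factor values and maxima per pathway, so `max_product` below is a stand-in parameter of our own.

```python
import math

def pathway_score(likelihood, waste, targets, max_product):
    # factor-category values are multiplied, then normalized to 100 points
    return min(100.0, 100.0 * (likelihood * waste * targets) / max_product)

def hrs_site_score(pathway_scores):
    # root-mean-square combination of the four pathway scores
    return math.sqrt(sum(s * s for s in pathway_scores) / len(pathway_scores))
```

The RMS combination means a single high-scoring pathway can drive the site score: one pathway at 100 with three at zero still yields a site score of 50.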
Seismic hazard assessment of Sub-Saharan Africa using geodetic strain rate models
NASA Astrophysics Data System (ADS)
Poggi, Valerio; Pagani, Marco; Weatherill, Graeme; Garcia, Julio; Durrheim, Raymond J.; Mavonga Tuluka, Georges
2016-04-01
The East African Rift System (EARS) is the major active tectonic feature of the Sub-Saharan Africa (SSA) region. Although the seismicity level of such a divergent plate boundary can be described as moderate, several earthquakes have been reported in historical times causing a non-negligible level of damage, albeit mostly due to the high vulnerability of the local buildings and structures. Formulation and enforcement of national seismic codes is therefore an essential future risk mitigation strategy. Nonetheless, a reliable risk assessment cannot be done without the calibration of an updated seismic hazard model for the region. Unfortunately, the major issue in assessing seismic hazard in Sub-Saharan Africa is the lack of basic information needed to construct source and ground motion models. The historical earthquake record is largely incomplete, while the instrumental catalogue is complete down to sufficient magnitude only for a relatively short time span. In addition, mapping of seismogenically active faults is still an ongoing program. Recent studies have identified major seismogenic lineaments, but there is a substantial lack of kinematic information for intermediate-to-small-scale tectonic features, information that is essential for the proper calibration of earthquake recurrence models. To compensate for this lack of information, we experiment with the use of a strain rate model recently developed by Stamps et al. (2015) in the framework of an earthquake hazard and risk project along the EARS supported by USAID and jointly carried out by GEM and AfricaArray. We use the inferred geodetic strain rates to derive estimates of total scalar moment release, subsequently used to constrain earthquake recurrence relationships for both area (distributed seismicity) and fault source models. The rates obtained indirectly from strain rates and those derived more classically from the available seismic catalogues are then compared and combined into a unique mixed earthquake recurrence model.
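The step from a scalar moment rate to a recurrence relationship can be illustrated with a simple moment-balancing scheme (our own simplification, not the paper's procedure): discretize the magnitude range, assign relative Gutenberg-Richter frequencies, then scale them so the model catalogue releases the geodetically inferred moment rate. The Hanks and Kanamori (1979) magnitude-moment relation is standard.

```python
import math

def moment_from_magnitude(mw):
    # Hanks & Kanamori (1979): M0 [N*m] = 10 ** (1.5 * Mw + 9.05)
    return 10.0 ** (1.5 * mw + 9.05)

def gr_rates_from_moment_rate(moment_rate, b, m_min, m_max, dm=0.1):
    # Scale relative G-R frequencies 10**(-b*m) so that the summed
    # moment release matches the target moment rate [N*m / yr]
    n = int(round((m_max - m_min) / dm))
    mags = [m_min + dm * (i + 0.5) for i in range(n)]
    rel = [10.0 ** (-b * m) for m in mags]
    per_unit = sum(r * moment_from_magnitude(m) for r, m in zip(rel, mags))
    scale = moment_rate / per_unit
    return mags, [scale * r for r in rel]
```

The choice of m_max matters strongly here: because moment grows faster with magnitude than G-R frequency decays (for b < 1.5), most of the budget is spent near the upper bound.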
Accelerated Hazards Model based on Parametric Families Generalized with Bernstein Polynomials
Chen, Yuhui; Hanson, Timothy; Zhang, Jiajia
2015-01-01
A transformed Bernstein polynomial that is centered at standard parametric families, such as Weibull or log-logistic, is proposed for use in the accelerated hazards model. This class provides a convenient way toward creating a Bayesian nonparametric prior for smooth densities, blending the merits of parametric and nonparametric methods, that is amenable to standard estimation approaches. For example, optimization methods in SAS or R can yield the posterior mode and asymptotic covariance matrix. This novel nonparametric prior is employed in the accelerated hazards model, which is further generalized to time-dependent covariates. The proposed approach fares considerably better than previous approaches in simulations; data on the effectiveness of biodegradable carmustine polymers on recurrent malignant brain gliomas are investigated. PMID:24261450
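A transformed Bernstein polynomial density has a standard form worth writing out (a sketch under our assumptions, not the authors' code): push the time through the centering family's CDF, then evaluate a weighted mixture of Beta(k, J-k+1) densities. With equal weights the mixture is exactly uniform, so the density collapses back to the centering parametric density.

```python
import math

def bernstein_density(t, weights, base_cdf, base_pdf):
    # f(t) = base_pdf(t) * sum_k w_k * Beta(base_cdf(t); k, J-k+1)
    j_max = len(weights)
    u = base_cdf(t)
    mixture = 0.0
    for k, w in enumerate(weights, start=1):
        # 1 / B(k, J-k+1) = C(J, k) * k
        coef = math.comb(j_max, k) * k
        mixture += w * coef * u ** (k - 1) * (1.0 - u) ** (j_max - k)
    return base_pdf(t) * mixture
```

Placing a prior on `weights` that concentrates near the uniform vector is what "centering at the parametric family" means operationally: deviations of the weights bend the density away from, e.g., the Weibull shape.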
Using the Averaging-Based Factorization to Assess CyberShake Hazard Models
NASA Astrophysics Data System (ADS)
Wang, F.; Jordan, T. H.; Callaghan, S.; Graves, R. W.; Olsen, K. B.; Maechling, P. J.
2013-12-01
The CyberShake project of the Southern California Earthquake Center (SCEC) combines stochastic models of finite-fault ruptures with 3D ground motion simulations to compute seismic hazards at low frequencies (< 0.5 Hz) in Southern California. The first CyberShake hazard model (Graves et al., 2011) was based on the Graves & Pitarka (2004) rupture model (GP-04) and the Kohler et al. (2004) community velocity model (CVM-S). We have recently extended the CyberShake calculations to include the Graves & Pitarka (2010) rupture model (GP-10), which substantially increases the rupture complexity relative to GP-04, and the Shaw et al. (2011) community velocity model (CVM-H), which features different sedimentary basin structures than CVM-S. Here we apply the averaging-based factorization (ABF) technique of Wang & Jordan (2013) to compare CyberShake models and assess their consistency with the hazards predicted by the Next Generation Attenuation (NGA) models (Power et al., 2008). ABF uses a hierarchical averaging scheme to separate the shaking intensities for large ensembles of earthquakes into relative (dimensionless) excitation fields representing site, path, directivity, and source-complexity effects, and it provides quantitative, map-based comparisons between models with completely different formulations. The CyberShake directivity effects are generally larger than predicted by the Spudich & Chiou (2008) NGA directivity factor, but those calculated from the GP-10 sources are smaller than those of GP-04, owing to the greater incoherence of the wavefields from the more complex rupture models. Substituting GP-10 for GP-04 reduces the CyberShake-NGA directivity-effect discrepancy by a factor of two, from +36% to +18%. The CyberShake basin effects are generally larger than those from the three NGA models that provide basin-effect factors. However, the basin excitations calculated from CVM-H are smaller than from CVM-S, and they show a stronger frequency dependence, primarily because
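The simplest instance of a hierarchical averaging scheme of this kind is a two-way decomposition of log shaking intensities into a grand mean, a site effect, a source effect, and a residual (full ABF uses more levels and weighted averages; this sketch and its names are our simplification, not the Wang & Jordan formulation).

```python
def factorize(x):
    # x[i][j]: log shaking intensity at site i from source j
    n_site, n_src = len(x), len(x[0])
    grand = sum(map(sum, x)) / (n_site * n_src)
    site = [sum(row) / n_src - grand for row in x]                 # row effects
    source = [sum(col) / n_site - grand for col in zip(*x)]        # column effects
    resid = [[x[i][j] - grand - site[i] - source[j]
              for j in range(n_src)] for i in range(n_site)]
    return grand, site, source, resid
```

Because the effects are defined by successive averaging, they are orthogonal by construction, which is what lets two models with completely different formulations be compared map by map, one effect at a time.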
NASA Astrophysics Data System (ADS)
Hakimhashemi, Amir Hossein; Yoon, Jeoung Seok; Heidbach, Oliver; Zang, Arno; Grünthal, Gottfried
2014-07-01
The Mw 3.2 induced seismic event in 2006, caused by fluid injection at the Basel geothermal site in Switzerland, was the starting point for an ongoing discussion in Europe on the potential risk of hydraulic stimulation in general. In particular, further development of mitigation strategies for induced seismic events of economic concern became a hot topic in geosciences and geoengineering. Here, we present a workflow to assess the hazard of induced seismicity in terms of the occurrence rate of induced seismic events. The workflow is called Forward Induced Seismic Hazard Assessment (FISHA), as it combines the results of forward hydromechanical-numerical models with methods of time-dependent probabilistic seismic hazard assessment. To exemplify FISHA, we use simulations of four different fluid injection types with various injection parameters, i.e. injection rate, duration and style of injection. The hydromechanical-numerical model applied in this study represents a geothermal reservoir with preexisting fractures in which viscous fluid flow in porous media is implemented; flow- and pressure-driven failures of the rock matrix and of preexisting fractures are simulated, and the corresponding seismic moment magnitudes are computed. The resulting synthetic catalogues of induced seismicity, including event location, occurrence time and magnitude, are used to calibrate the magnitude of completeness Mc and the parameters a and b of the frequency-magnitude relation. These are used to estimate the time-dependent occurrence rate of induced seismic events for each fluid injection scenario. In contrast to other mitigation strategies that rely on real-time data or already obtained catalogues, we can perform various synthetic experiments with the same initial conditions. Thus, the advantage of FISHA is that it can quantify hazard from numerical experiments and recommend a priori a stimulation type that lowers the occurrence rate of induced seismic events. The FISHA workflow is rather
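Calibrating a and b from a catalogue above the completeness magnitude is usually done with the Aki (1965) maximum-likelihood estimator; the sketch below shows that standard method (we are not asserting it is the authors' exact implementation).

```python
import math

def aki_b_value(mags, m_c, dm=0.0):
    # Aki (1965) maximum-likelihood b-value for events with m >= m_c
    # (dm is the magnitude binning width; 0 for unbinned magnitudes)
    above = [m for m in mags if m >= m_c]
    mean_m = sum(above) / len(above)
    return math.log10(math.e) / (mean_m - (m_c - dm / 2.0))

def a_value(mags, m_c, b):
    # a such that log10 N(m >= m_c) = a - b * m_c
    n = sum(1 for m in mags if m >= m_c)
    return math.log10(n) + b * m_c
```

With a and b in hand, the occurrence rate of events above any magnitude of concern follows directly from 10 ** (a - b * m), per unit catalogue duration.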
A new approach for deriving Flood hazard maps from SAR data and global hydrodynamic models
NASA Astrophysics Data System (ADS)
Matgen, P.; Hostache, R.; Chini, M.; Giustarini, L.; Pappenberger, F.; Bally, P.
2014-12-01
With flood consequences likely to amplify because of the growing population and the ongoing accumulation of assets in flood-prone areas, global flood hazard and risk maps are needed to improve flood preparedness at large scale. At the same time, with the rapidly growing archives of SAR images of floods, there is high potential in using these images for global and regional flood management. In this framework, an original method that integrates global flood inundation modeling and microwave remote sensing is presented. It takes advantage of the combination of the time and space continuity of a global inundation model with the high spatial resolution of satellite observations. The availability of model simulations over a long time period offers opportunities for estimating flood non-exceedance probabilities in a robust way. These probabilities can be attributed to historical satellite observations. Time series of SAR-derived flood extent maps and associated non-exceedance probabilities can then be combined to generate flood hazard maps with a spatial resolution equal to that of the satellite images, which is most of the time higher than that of a global inundation model. In principle, this can be done for any area of interest in the world, provided that a sufficient number of relevant remote sensing images are available. As a test case we applied the method to the Severn River (UK) and the Zambezi River (Mozambique), where large archives of Envisat flood images can be exploited. The global ECMWF flood inundation model is used for computing the statistics of extreme events. A comparison with flood hazard maps estimated from in situ measured discharge is carried out. The first results confirm the potential of the method. However, further developments on two aspects are required to improve the quality of the hazard map and to ensure the acceptability of the product by potential end-user organizations. On the one hand, it is of paramount importance to
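The key estimation step, attributing a non-exceedance probability to each observed flood, reduces to an empirical ranking of the satellite-observed extent within the long model-simulated series. A minimal sketch (our own illustration; the Weibull plotting position is a common convention, not necessarily the one used in the paper):

```python
def non_exceedance_probability(simulated, observed):
    # fraction of simulated flood extents that do not exceed the observed one,
    # using the Weibull plotting position r / (n + 1) to avoid exact 0 and 1
    n = len(simulated)
    rank = sum(1 for v in simulated if v <= observed)
    return rank / (n + 1.0)
```

Stamping each SAR-derived flood extent map with this probability, then taking per-pixel envelopes across the time series, yields a hazard map at the satellite resolution.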
Interpretation of laser/multi-sensor data for short range terrain modeling and hazard detection
NASA Technical Reports Server (NTRS)
Messing, B. S.
1980-01-01
A terrain modeling algorithm is described that reconstructs the sensed ground images formed by the triangulation scheme and classifies as unsafe any terrain feature that would pose a hazard to a roving vehicle. This modeler greatly reduces quantization errors inherent in a laser/sensing system through the use of a thinning algorithm. Dual filters are employed to separate terrain steps from the general landscape, simplifying the analysis of terrain features. A crosspath analysis is utilized to detect and avoid obstacles that would adversely affect the roll of the vehicle. Computer simulations of the rover on various terrains examine the performance of the modeler.
NASA Astrophysics Data System (ADS)
Dietterich, H. R.; Lev, E.; Chen, J.; Cashman, K. V.; Honor, C.
2015-12-01
Recent eruptions in Hawai'i, Iceland, and Cape Verde highlight the need for improved lava flow models for forecasting and hazard assessment. Existing models used for lava flow simulation range in assumptions, complexity, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess the capabilities of existing models and test the development of new codes, we conduct a benchmarking study of computational fluid dynamics models for lava flows, including VolcFlow, OpenFOAM, Flow3D, and COMSOL. Using new benchmark scenarios defined in Cordonnier et al. (2015) as a guide, we model Newtonian, Herschel-Bulkley and cooling flows over inclined planes, obstacles, and digital elevation models with a wide range of source conditions. Results are compared to analytical theory, analogue and molten basalt experiments, and measurements from natural lava flows. Our study highlights the strengths and weaknesses of each code, including accuracy and computational costs, and provides insights regarding code selection. We apply the best-fit codes to simulate the lava flows in Harrat Rahat, a predominantly mafic volcanic field in Saudi Arabia. Input parameters are assembled from rheology and volume measurements of past flows using geochemistry, crystallinity, and present-day lidar and photogrammetric digital elevation models. With these data, we use our verified models to reconstruct historic and prehistoric events, in order to assess the hazards posed by lava flows for Harrat Rahat.
Fuzzy multi-objective chance-constrained programming model for hazardous materials transportation
NASA Astrophysics Data System (ADS)
Du, Jiaoman; Yu, Lean; Li, Xiang
2016-04-01
Hazardous materials transportation is an important public safety issue. Based on the shortest path model, this paper presents a fuzzy multi-objective programming model that minimizes the transportation risk to life, travel time, and fuel consumption. First, we present the risk model, travel time model, and fuel consumption model. Furthermore, we formulate a chance-constrained programming model within the framework of credibility theory, in which the lengths of arcs in the transportation network are assumed to be fuzzy variables. A hybrid intelligent algorithm integrating fuzzy simulation and a genetic algorithm is designed for finding a satisfactory solution. Finally, some numerical examples are given to demonstrate the efficiency of the proposed model and algorithm.
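The deterministic backbone of such shortest-path formulations can be sketched as a Dijkstra search over a scalarized arc cost. The network, the (risk, time, fuel) triples, and the weight vector below are hypothetical, and the sketch deliberately omits the fuzzy arc lengths and the genetic algorithm described in the abstract:

```python
import heapq

def shortest_path(graph, source, target, weights=(0.5, 0.3, 0.2)):
    """Dijkstra over a scalarized cost: weighted sum of (risk, time, fuel) per arc."""
    dist = {source: 0.0}
    prev = {}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, (risk, time, fuel) in graph.get(u, {}).items():
            cost = sum(w * c for w, c in zip(weights, (risk, time, fuel)))
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Reconstruct the route by walking predecessors back to the source.
    path, node = [], target
    while node != source:
        path.append(node)
        node = prev[node]
    path.append(source)
    return path[::-1], dist[target]

# Hypothetical network: each arc carries a (risk, time, fuel) triple.
net = {
    "A": {"B": (5.0, 2.0, 1.0), "C": (1.0, 4.0, 2.0)},
    "B": {"D": (1.0, 1.0, 1.0)},
    "C": {"D": (1.0, 1.0, 1.0)},
}
route, cost = shortest_path(net, "A", "D")
```

In the paper's setting, the scalar arc costs above would be replaced by fuzzy variables with credibility-based chance constraints, which is what motivates the hybrid fuzzy-simulation/genetic-algorithm solver.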
Clark, Renee M; Besterfield-Sacre, Mary E
2009-03-01
We take a novel approach to analyzing hazardous materials transportation risk in this research. Previous studies analyzed this risk from an operations research (OR) or quantitative risk assessment (QRA) perspective by minimizing or calculating risk along a transport route. Further, even though the majority of incidents occur when containers are unloaded, the research has not focused on transportation-related activities, including container loading and unloading. In this work, we developed a decision model of a hazardous materials release during unloading using actual data and an exploratory data modeling approach. Previous studies have had a theoretical perspective in terms of identifying and advancing the key variables related to this risk, and there has not been a focus on probability and statistics-based approaches for doing this. Our decision model empirically identifies the critical variables using an exploratory methodology for a large, highly categorical database involving latent class analysis (LCA), loglinear modeling, and Bayesian networking. Our model identified the most influential variables and countermeasures for two consequences of a hazmat incident, dollar loss and release quantity, and is one of the first models to do this. The most influential variables were found to be related to the failure of the container. In addition to analyzing hazmat risk, our methodology can be used to develop data-driven models for strategic decision making in other domains involving risk. PMID:19087232
NASA Astrophysics Data System (ADS)
Sampson, Christopher; Smith, Andrew; Bates, Paul; Neal, Jeffrey; Trigg, Mark
2015-12-01
Global flood hazard models have recently become a reality thanks to the release of open access global digital elevation models, the development of simplified and highly efficient flow algorithms, and the steady increase in computational power. In this commentary we argue that although the availability of open access global terrain data has been critical in enabling the development of such models, the relatively poor resolution and precision of these data now limit significantly our ability to estimate flood inundation and risk for the majority of the planet's surface. The difficulty of deriving an accurate 'bare-earth' terrain model due to the interaction of vegetation and urban structures with the satellite-based remote sensors means that global terrain data are often poorest in the areas where people, property (and thus vulnerability) are most concentrated. Furthermore, the current generation of open access global terrain models are over a decade old and many large floodplains, particularly those in developing countries, have undergone significant change in this time. There is therefore a pressing need for a new generation of high resolution and high vertical precision open access global digital elevation models to allow significantly improved global flood hazard models to be developed.
CyberShake: A Physics-Based Seismic Hazard Model for Southern California
Graves, R.; Jordan, T.H.; Callaghan, S.; Deelman, E.; Field, E.; Juve, G.; Kesselman, C.; Maechling, P.; Mehta, G.; Milner, K.; Okaya, D.; Small, P.; Vahi, K.
2011-01-01
CyberShake, as part of the Southern California Earthquake Center's (SCEC) Community Modeling Environment, is developing a methodology that explicitly incorporates deterministic source and wave propagation effects within seismic hazard calculations through the use of physics-based 3D ground motion simulations. To calculate a waveform-based seismic hazard estimate for a site of interest, we begin with Uniform California Earthquake Rupture Forecast, Version 2.0 (UCERF2.0) and identify all ruptures within 200 km of the site of interest. We convert the UCERF2.0 rupture definition into multiple rupture variations with differing hypocenter locations and slip distributions, resulting in about 415,000 rupture variations per site. Strain Green Tensors are calculated for the site of interest using the SCEC Community Velocity Model, Version 4 (CVM4), and then, using reciprocity, we calculate synthetic seismograms for each rupture variation. Peak intensity measures are then extracted from these synthetics and combined with the original rupture probabilities to produce probabilistic seismic hazard curves for the site. Being explicitly site-based, CyberShake directly samples the ground motion variability at that site over many earthquake cycles (i.e., rupture scenarios) and alleviates the need for the ergodic assumption that is implicitly included in traditional empirically based calculations. Thus far, we have simulated ruptures at over 200 sites in the Los Angeles region for ground shaking periods of 2 s and longer, providing the basis for the first generation CyberShake hazard maps. Our results indicate that the combination of rupture directivity and basin response effects can lead to an increase in the hazard level for some sites, relative to that given by a conventional Ground Motion Prediction Equation (GMPE). Additionally, and perhaps more importantly, we find that the physics-based hazard results are much more sensitive to the assumed magnitude-area relations and
Detailed Flood Modeling and Hazard Assessment from Storm Tides, Rainfall and Sea Level Rise
NASA Astrophysics Data System (ADS)
Orton, P. M.; Hall, T. M.; Georgas, N.; Conticello, F.; Cioffi, F.; Lall, U.; Vinogradov, S. V.; Blumberg, A. F.
2014-12-01
A flood hazard assessment has been conducted for the Hudson River from New York City to Troy at the head of tide, using a three-dimensional hydrodynamic model and merging hydrologic inputs and storm tides from tropical and extra-tropical cyclones, as well as spring freshet floods. Our recent work showed that neglecting freshwater flows leads to underestimation of peak water levels at up-river sites and neglecting stratification (typical with two-dimensional modeling) leads to underestimation all along the Hudson. The hazard assessment framework utilizes a representative climatology of over 1000 synthetic tropical cyclones (TCs) derived from a statistical-stochastic TC model, and historical extra-tropical cyclones and freshets from 1950-present. Hydrodynamic modeling is applied with seasonal variations in mean sea level and ocean and estuary stratification. The model is the Stevens ECOM model and is separately used for operational ocean forecasts on the NYHOPS domain (http://stevens.edu/NYHOPS). For the synthetic TCs, an Artificial Neural Network/ Bayesian multivariate approach is used for rainfall-driven freshwater inputs to the Hudson, translating the TC attributes (e.g. track, SST, wind speed) directly into tributary stream flows (see separate presentation by Cioffi for details). Rainfall intensity has been rising in recent decades in this region, and here we will also examine the sensitivity of Hudson flooding to future climate warming-driven increases in storm precipitation. The hazard assessment is being repeated for several values of sea level, as projected for future decades by the New York City Panel on Climate Change. Recent studies have given widely varying estimates of the present-day 100-year flood at New York City, from 2.0 m to 3.5 m, and special emphasis will be placed on quantifying our study's uncertainty.
Seismic source models for probabilistic hazard analysis of Georgia (Southern Caucasus)
NASA Astrophysics Data System (ADS)
Javakhishvili, Z.; Godoladze, T.; Gamkrelidze, E.; Sokhadze, G.
2014-12-01
A seismic source model is one of the main components of probabilistic seismic-hazard analysis. Active faults and the tectonics of Georgia (Southern Caucasus) have been investigated in numerous scientific studies. The Caucasus consists of different geological structures with complex interactions. The major structures trend WNW-ESE, and focal mechanisms indicate primarily thrust faults striking parallel to the mountains. The region is part of the Alpine-Himalayan collision belt and is well known for its high seismicity. Although the geodynamic activity of the region, caused by the convergence of the Arabian and Eurasian plates at a rate of several cm/year, is well known, different tectonic models have been proposed to explain the seismic process in the region. The recent seismic source model for the Caucasus derives from seismotectonic studies performed in Georgia in the framework of different international projects. We have analyzed previous studies and recent investigations on the basis of new seismic data (spatial distribution, moment tensor solutions, etc.), GPS, and other data. As a result, a database of seismic source models was compiled. Seismic sources are modeled as lines representing the surface projection of active faults or as wide areas (source zones), where earthquakes can occur randomly. Each structure or zone was quantified on the basis of different parameters. Recent experience in harmonizing cross-border structures was used. As a result, a new seismic source model of Georgia (Southern Caucasus) for hazard analysis was created.
Seismic hazard assessment in central Ionian Islands area (Greece) based on stress release models
NASA Astrophysics Data System (ADS)
Votsi, Irene; Tsaklidis, George; Papadimitriou, Eleftheria
2011-08-01
The long-term probabilistic seismic hazard of the central Ionian Islands (Greece) is studied through the application of stress release models. In order to identify statistically distinct regions, the study area is divided into two subareas, namely Kefalonia and Lefkada, on the basis of seismotectonic properties. Previous results evidenced the existence of stress transfer and interaction between the Kefalonia and Lefkada fault segments. To account for stress transfer and interaction, the linked stress release model is applied. A new model is proposed, where the hazard rate function in terms of X(t) has the form of the Weibull distribution. The fitted models are evaluated through residual analysis, and the best of them is selected through the Akaike information criterion (AIC). Based on the AIC, the results demonstrate that the simple stress release model fits the Ionian data better than the non-homogeneous Poisson and the Weibull models. Finally, the thinning simulation method is applied in order to produce simulated data and proceed to forecasting.
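The model-selection step described above can be illustrated with a short sketch: a Weibull hazard rate function plus an AIC comparison of candidate fits. The log-likelihood values and parameter counts below are invented for illustration, not estimates from the Ionian data:

```python
import math

def weibull_hazard(t, shape, scale):
    """Weibull hazard rate h(t) = (k/s) * (t/s)**(k-1); increasing in t when k > 1."""
    return (shape / scale) * (t / scale) ** (shape - 1)

def aic(log_likelihood, n_params):
    """Akaike information criterion: 2k - 2*lnL; lower is better."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical maximized log-likelihoods and parameter counts for the
# three candidate models named in the abstract.
candidates = {
    "simple stress release": (-210.4, 3),
    "non-homogeneous Poisson": (-214.9, 2),
    "Weibull": (-213.1, 2),
}
best = min(candidates, key=lambda name: aic(*candidates[name]))
```

With these made-up likelihoods, AIC favors the simple stress release model, mirroring the qualitative conclusion of the abstract without reproducing its actual numbers.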
NASA Technical Reports Server (NTRS)
Dumbauld, R. K.; Bjorklund, J. R.; Bowers, J. F.
1973-01-01
The NASA/MSFC multilayer diffusion models are described, which are used in applying meteorological information to the estimation of toxic fuel hazards resulting from the launch of rocket vehicles and from accidental cold spills and leaks of toxic fuels. Background information, definitions of terms, and a description of the multilayer concept are presented, along with formulas for determining the buoyant rise of hot exhaust clouds or plumes from conflagrations, and descriptions of the multilayer diffusion models. A brief description of the computer program is given, and sample problems and their solutions are included. Derivations of the cloud rise formulas, user instructions, and computer program output lists are also included.
Nowcast model for hazardous material spill prevention and response, San Francisco Bay, California
Cheng, Ralph T.; Wilmot, Wayne L.; Galt, Jerry A.
1997-01-01
The National Oceanic and Atmospheric Administration (NOAA) installed the Physical Oceanographic Real-time System (PORTS) in San Francisco Bay, California, to provide real-time observations of tides, tidal currents, and meteorological conditions to, among other purposes, guide hazardous material spill prevention and response. The emerging technologies used in PORTS for real-time data collection, integrated with nowcast modeling techniques and with dissemination of the real-time data and nowcast results through the Internet on the World Wide Web, form a nowcast modeling system. Users can download tides and tidal current distributions in San Francisco Bay for their specific applications and/or for further analysis.
Rumynin, V.G.; Mironenko, V.A.; Konosavsky, P.K.; Pereverzeva, S.A.
1994-07-01
This paper introduces some modeling approaches for predicting the influence of hazardous accidents at nuclear reactors on groundwater quality. Possible pathways for radioactive releases from nuclear power plants were considered in order to conceptualize boundary conditions for solving the subsurface radionuclide transport problems. Some approaches for incorporating physical and chemical interactions into transport simulators have been developed. The hydrogeological forecasts were based on numerical and semi-analytical scale-dependent models. They have been applied to assess the possible impact of the nuclear power plants designed in Russia on groundwater reservoirs.
Multiple Landslide-Hazard Scenarios Modeled for the Oakland-Berkeley Area, Northern California
Pike, Richard J.; Graymer, Russell W.
2008-01-01
With the exception of Los Angeles, perhaps no urban area in the United States is more at risk from landsliding, triggered by either precipitation or earthquake, than the San Francisco Bay region of northern California. By January each year, seasonal winter storms usually bring moisture levels of San Francisco Bay region hillsides to the point of saturation, after which additional heavy rainfall may induce landslides of various types and levels of severity. In addition, movement at any time along one of several active faults in the area may generate an earthquake large enough to trigger landslides. The danger to life and property rises each year as local populations continue to expand and more hillsides are graded for development of residential housing and its supporting infrastructure. The chapters in the text consist of:
*Introduction by Russell W. Graymer
*Chapter 1 Rainfall Thresholds for Landslide Activity, San Francisco Bay Region, Northern California by Raymond C. Wilson
*Chapter 2 Susceptibility to Deep-Seated Landsliding Modeled for the Oakland-Berkeley Area, Northern California by Richard J. Pike and Steven Sobieszczyk
*Chapter 3 Susceptibility to Shallow Landsliding Modeled for the Oakland-Berkeley Area, Northern California by Kevin M. Schmidt and Steven Sobieszczyk
*Chapter 4 Landslide Hazard Modeled for the Cities of Oakland, Piedmont, and Berkeley, Northern California, from a M=7.1 Scenario Earthquake on the Hayward Fault Zone by Scott B. Miles and David K. Keefer
*Chapter 5 Synthesis of Landslide-Hazard Scenarios Modeled for the Oakland-Berkeley Area, Northern California by Richard J. Pike
The plates consist of:
*Plate 1 Susceptibility to Deep-Seated Landsliding Modeled for the Oakland-Berkeley Area, Northern California by Richard J. Pike, Russell W. Graymer, Sebastian Roberts, Naomi B. Kalman, and Steven Sobieszczyk
*Plate 2 Susceptibility to Shallow Landsliding Modeled for the Oakland-Berkeley Area, Northern California by Kevin M. Schmidt and Steven
NASA Astrophysics Data System (ADS)
Chang, W.; Tsai, W.; Lin, F.; Lin, S.; Lien, H.; Chung, T.; Huang, L.; Lee, K.; Chang, C.
2008-12-01
During a typhoon or a heavy storm event, various forecasting models can be used to predict rainfall intensity, water level variation in rivers, and flood conditions in urban areas, and their technical capability has been demonstrated. In practice, however, two issues tend to restrain the further application of these models as a decision support system (DSS) for hazard mitigation. The first is the difficulty of integrating heterogeneous models: one has to take into consideration the differing formats of the models, such as input files, output files, computational requirements, and so on. The second is that, because of this heterogeneity of models and systems, the development of a DSS requires a friendly user interface or platform that hides the complexity of the various tools from users. Users are expected to be governmental officials rather than professional experts, so a complicated DSS interface is not acceptable. Based on the above considerations, in the present study we develop an open system for the integration of several simulation models for flood forecasting by adopting the FEWS (Flood Early Warning System) platform developed by WL | Delft Hydraulics. It allows us to link heterogeneous models effectively and provides suitable display modules. In addition, FEWS has been adopted by the Water Resource Agency (WRA), Taiwan as the standard operational system for river flooding management, which means this work can easily be integrated with practical cases. In the present study, based on the FEWS platform, the basin rainfall-runoff model, the SOBEK channel-routing model, and an estuary tide forecasting model are linked and integrated through the physical connection of model initial and boundary definitions. The work flow of the integrated processes of the models is shown in Fig. 1. This differs from the typical single-model linking used in FEWS, which aims only at data exchange without much physical consideration. So it really
Buckley, Patrick Henry; Takahashi, Akio; Anderson, Amy
2015-07-01
In the last half century former international adversaries have become cooperators through networking and knowledge sharing for decision making aimed at improving quality of life and sustainability; nowhere has this been more striking than at the urban level, where such activity is seen as a key component in building "learning cities" through the development of social capital. Although mega-cities have been leaders in such efforts, mid-sized cities with lesser resource endowments have striven to follow by focusing on more frugal sister city type exchanges. The underlying thesis of our research is that great value can be derived from city-to-city exchanges through social capital development. However, such a study must differentiate between necessary and sufficient conditions. Past studies assumed necessary conditions were met and immediately jumped to demonstrating the existence of structural relationships by measuring networking, while further assuming that the existence of such demonstrated a parallel development of cognitive social capital. Our research addresses this lacuna by stepping back and critically examining these assumptions. To accomplish this goal we use a Proportional Odds Modeling with a Cumulative Logit Link approach to demonstrate the existence of a common latent structure, hence asserting that necessary conditions are met. PMID:26114245
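The proportional-odds (cumulative logit) model named above can be sketched directly from its defining equation P(Y <= j | x) = logistic(theta_j - beta*x). The thresholds and coefficient below are hypothetical, not estimates from the study:

```python
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def category_probs(x, thresholds, beta):
    """Proportional-odds model: P(Y <= j | x) = logistic(theta_j - beta*x).
    Returns the probability of each ordered category (thresholds must be increasing)."""
    cum = [logistic(t - beta * x) for t in thresholds] + [1.0]
    # Differencing the cumulative probabilities yields per-category probabilities.
    probs = [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]
    return probs

# Hypothetical thresholds for a 4-level ordinal response and a single covariate.
p = category_probs(x=1.0, thresholds=[-1.0, 0.5, 2.0], beta=0.8)
```

The "proportional odds" property is visible in the single shared beta: the covariate shifts every cumulative logit by the same amount, which is exactly the assumption such studies test before asserting a common latent structure.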
Visual Manipulatives for Proportional Reasoning.
ERIC Educational Resources Information Center
Moore, Joyce L.; Schwartz, Daniel L.
The use of a visual representation in learning about proportional relations was studied, examining students' understandings of the invariance of a multiplicative relation on both sides of a proportion equation and the invariance of the structural relations that exist in different semantic types of proportion problems. Subjects were 49 high-ability…
Patil, M P; Sonolikar, R L
2008-10-01
This paper presents a detailed computational fluid dynamics (CFD) based approach for modeling the thermal destruction of hazardous wastes in a circulating fluidized bed (CFB) incinerator. The model is based on an Eulerian-Lagrangian approach in which the gas phase (continuous phase) is treated in an Eulerian reference frame, whereas the waste particulate (dispersed phase) is treated in a Lagrangian reference frame. The reaction chemistry has been modeled through a mixture fraction/PDF approach. The conservation equations for mass, momentum, energy, and mixture fraction, along with other closure equations, have been solved using the general-purpose CFD code FLUENT 4.5. A finite volume method on a structured grid has been used for the solution of the governing equations. The model provides detailed information on the hydrodynamics (gas velocity, particulate trajectories), gas composition (CO, CO2, O2), and temperature inside the riser. The model also allows different operating scenarios to be examined in an efficient manner. PMID:19697764
A spatio-temporal model for probabilistic seismic hazard zonation of Tehran
NASA Astrophysics Data System (ADS)
Hashemi, Mahdi; Alesheikh, Ali Asghar; Zolfaghari, Mohammad Reza
2013-08-01
A precondition for all disaster management steps, building damage prediction, and construction code development is a hazard assessment that shows the exceedance probabilities of different ground motion levels at a site, considering different near- and far-field earthquake sources. The seismic sources are usually categorized as time-independent area sources and time-dependent fault sources. While the former incorporates small and medium events, the latter takes into account only the large characteristic earthquakes. In this article, a probabilistic approach is proposed to aggregate the effects of time-dependent and time-independent sources on seismic hazard. The methodology is then applied to generate three probabilistic seismic hazard maps of Tehran for 10%, 5%, and 2% exceedance probabilities in 50 years. The results indicate an increase in peak ground acceleration (PGA) values toward the southeastern part of the study area, and the PGA variations are mostly controlled by the shear wave velocities across the city. In addition, the implementation of the methodology takes advantage of GIS capabilities, especially raster-based analyses and representations. During the estimation of the PGA exceedance rates, the emphasis has been placed on incorporating the effects of different attenuation relationships and seismic source models by using a logic tree.
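Under the usual Poisson occurrence assumption, the three map levels above correspond to fixed return periods; a minimal sketch of that standard conversion (not code from the article) is:

```python
import math

def annual_rate(p_exceed, years):
    """Annual exceedance rate implied by a Poisson occurrence model:
    P = 1 - exp(-rate * T)  =>  rate = -ln(1 - P) / T."""
    return -math.log(1.0 - p_exceed) / years

def return_period(p_exceed, years):
    """Mean recurrence interval of the ground motion level (1 / annual rate)."""
    return 1.0 / annual_rate(p_exceed, years)

# The three standard hazard-map levels used in the study (rounded to years).
levels = {p: round(return_period(p, 50)) for p in (0.10, 0.05, 0.02)}
```

This reproduces the familiar correspondences: 10% in 50 years is roughly a 475-year ground motion, 5% roughly 975 years, and 2% roughly 2475 years.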
NASA Astrophysics Data System (ADS)
Sapunov, Valentin; Dikinis, Alexandr; Voronov, Nikolai
2014-05-01
The Russian Federation, with its giant area, has a low concentration of land meteorological checkpoints. The monitoring network is not sufficient for effective forecasting and prediction of weather dynamics and extreme situations. With the increase in extreme situations and incidents such as hurricanes (a twofold increase since the beginning of the XXI century), reconstruction ("perestroika") of the monitoring network is needed. The basis of such progress is distant monitoring using aircraft and satellites, supplementing the land-based contact monitoring carried out by the existing points and stations. The interaction of contact and distant observations may make hydrometeorological data and predictions more refined and significant. Traditional physical methods must be supplemented by new biological methods. According to our research, animals are able to predict extreme hazards of natural and anthropogenic origin, possibly through interaction between biological matter and a physical field that is still under preliminary study. For example, it was animals that predicted the fall of the Chelyabinsk meteorite in 2013. Supplementing complex meteorological data with biological indication may increase the significance of hazard prediction. Uniting all of these data and approaches may become the basis of the proposed mathematical hydrometeorological weather models. Introducing the reported complex of methods into practice may decrease losses from hydrometeorological risks and hazards and increase the stability of the country's economy.
Doubly Robust and Efficient Estimation of Marginal Structural Models for the Hazard Function.
Zheng, Wenjing; Petersen, Maya; van der Laan, Mark J
2016-05-01
In social and health sciences, many research questions involve understanding the causal effect of a longitudinal treatment on mortality (or time-to-event outcomes in general). Often, treatment status may change in response to past covariates that are risk factors for mortality, and in turn, treatment status may also affect such subsequent covariates. In these situations, Marginal Structural Models (MSMs), introduced by Robins (1997. Marginal structural models. Proceedings of the American Statistical Association, Section on Bayesian Statistical Science, 1-10), are well-established and widely used tools to account for time-varying confounding. In particular, a MSM can be used to specify the intervention-specific counterfactual hazard function, i.e. the hazard for the outcome of a subject in an ideal experiment where he/she was assigned to follow a given intervention on their treatment variables. The parameters of this hazard MSM are traditionally estimated using Inverse Probability of Treatment Weighted (IPTW) estimation (Robins 1999. Marginal structural models versus structural nested models as tools for causal inference. In: Statistical models in epidemiology: the environment and clinical trials. Springer-Verlag, 1999:95-134; Robins et al. 2000; van der Laan and Petersen 2007. Causal effect models for realistic individualized treatment and intention to treat rules. Int J Biostat 2007;3:Article 3; Robins et al. 2008. Estimation and extrapolation of optimal treatment and testing strategies. Statistics in Medicine 2008;27(23):4678-721). This estimator is easy to implement and admits Wald-type confidence intervals. However, its consistency hinges on the correct specification of the treatment allocation probabilities, and the estimates are generally sensitive to large treatment weights (especially in the presence of strong confounding), which are difficult to stabilize for dynamic treatment regimes. In this paper, we present a pooled targeted maximum likelihood
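For a single time point, the stabilized IPTW weights referred to above reduce to the ratio of the marginal treatment probability to the fitted propensity score. The treatment indicators and propensity values below are made up for illustration, and the longitudinal (time-varying) case of the paper multiplies such ratios over time:

```python
def iptw_weights(treatment, propensity, marginal):
    """Stabilized inverse-probability-of-treatment weights for one time point:
    w_i = P(A = a_i) / P(A = a_i | covariates_i)."""
    weights = []
    for a, p in zip(treatment, propensity):
        p_a = p if a == 1 else 1.0 - p        # fitted probability of the observed arm
        m_a = marginal if a == 1 else 1.0 - marginal  # marginal probability of that arm
        weights.append(m_a / p_a)
    return weights

# Hypothetical data: treatment indicators and propensity scores
# from some fitted treatment model.
a = [1, 0, 1, 0]
ps = [0.8, 0.3, 0.6, 0.5]
m = sum(a) / len(a)  # empirical marginal treatment probability
w = iptw_weights(a, ps, m)
```

The sensitivity the abstract mentions is visible here: a subject whose observed arm has a small fitted probability receives a large weight, and stabilizing by the marginal probability only partially tames this.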
SCEC/CME CyberShake: Probabilistic Seismic Hazard Analysis Using 3D Seismic Waveform Modeling
NASA Astrophysics Data System (ADS)
Callaghan, S.; Maechling, P. J.; Cui, Y.; Faerman, M.; Field, E.; Graves, R.; Gupta, N.; Gupta, V.; Jordan, T. H.; Kesselman, C.; Mehta, G.; Okaya, D.; Vahi, K.; Zhao, L.
2005-12-01
Researchers on the SCEC Community Modeling Environment (SCEC/CME) Project are calculating Probabilistic Seismic Hazard Curves for several sites in the Los Angeles area. The hazard curves calculated in this study use Intensity Measure Relationships (IMRs) based on 3D ground motion simulations rather than on attenuation relationships. State-of-the-art Probabilistic Seismic Hazard Analysis (PSHA) is currently conducted using IMRs that use empirically-based attenuation relationships. These attenuation relationships represent relatively simple analytical models based on the regression of observed data. However, it is widely believed that significant improvements in SHA will rely on the use of more physics-based, waveform modeling. In fact, a more physics-based approach to PSHA was endorsed in a recent assessment of earthquake science by the National Research Council (2003). In order to introduce the use of 3D seismic waveform modeling into PSHA hazard curve calculations, the SCEC/CME CyberShake group is integrating state-of-the-art PSHA software tools (OpenSHA), SCEC-developed geophysical models (SCEC CVM3.0), validated anelastic wave modeling (AWM) software, and state-of-the-art computational technologies including high performance computing and grid-based scientific workflows in an effort to develop an OpenSHA-compatible 3D waveform-based IMR component. This will allow researchers to combine a new class of waveform-based IMRs with the large number of existing PSHA components, such as Earthquake Rupture Forecasts (ERFs), that are currently implemented in the OpenSHA system. To calculate a probabilistic hazard curve for a site of interest, we use the OpenSHA implementation of the NSHMP-2002 ERF and identify all ruptures within 200 km of the site of interest. For each of these ruptures, we convert the NSHMP-2002 rupture definition into one, or more, Ruptures with Slip Time History (Rupture Variations) using newly developed Rupture Generator software. Strain Green Tensors are
Farajzadeh, Manuchehr; Egbal, Mahbobeh Nik
2007-08-15
In this study, the MEDALUS model, along with GIS mapping techniques, is used to assess desertification hazard for a province of Iran. After creating a desertification database including 20 parameters, the first step consisted of developing maps of the four indices of the MEDALUS model: climate, soil, vegetation, and land use. Since these parameters had mostly been presented for the Mediterranean region in the past, the next step included the addition of other indicators such as ground water and wind erosion. Then all of the layers, weighted by the environmental conditions present in the area, were used (following the same MEDALUS framework) before a desertification map was prepared. The comparison of the two maps based on the original and modified MEDALUS models indicates that the addition of more regionally specific parameters into the model allows for a more accurate representation of desertification processes across the Iyzad Khast plain. The major factors affecting desertification in the area are climate, wind erosion, poor land quality management, vegetation degradation, and the salinization of soil and water resources. PMID:19070073
Gilbert, P. B.
2014-01-01
In randomized placebo-controlled preventive HIV vaccine efficacy trials, an objective is to evaluate the relationship between vaccine efficacy to prevent infection and genetic distances of the exposing HIV strains to the multiple HIV sequences included in the vaccine construct, where the set of genetic distances is considered as the continuous multivariate ‘mark’ observed in infected subjects only. This research develops a multivariate mark-specific hazard ratio model in the competing risks failure time analysis framework for the assessment of mark-specific vaccine efficacy. It allows improved efficiency of estimation by employing the semiparametric method of maximum profile likelihood estimation in the vaccine-to-placebo mark density ratio model. The model also enables the use of a more efficient estimation method for the overall log hazard ratio in the Cox model. Additionally, we propose testing procedures to evaluate two relevant hypotheses concerning mark-specific vaccine efficacy. The asymptotic properties and finite-sample performance of the inferential procedures are investigated. Finally, we apply the proposed methods to data collected in the Thai RV144 HIV vaccine efficacy trial. PMID:23421613
Impact of Three-Parameter Weibull Models in Probabilistic Assessment of Earthquake Hazards
NASA Astrophysics Data System (ADS)
Pasari, Sumanta; Dikshit, Onkar
2014-07-01
This paper investigates the suitability of a three-parameter (scale, shape, and location) Weibull distribution in probabilistic assessment of earthquake hazards. Its performance is also compared with two other popular models from the same Weibull family, namely the two-parameter Weibull model and the inverse Weibull model. A complete and homogeneous earthquake catalog ( Yadav et al. in Pure Appl Geophys 167:1331-1342, 2010) of 20 events ( M ≥ 7.0), spanning the period 1846 to 1995 from north-east India and its surrounding region (20°-32°N and 87°-100°E), is used to perform this study. The model parameters are initially estimated from graphical plots and later confirmed by statistical estimation methods such as maximum likelihood estimation (MLE) and the method of moments (MoM). The asymptotic variance-covariance matrix of the MLE-estimated parameters is further calculated on the basis of the Fisher information matrix (FIM). Model suitability is appraised using different statistical goodness-of-fit tests. For the study area, the estimated conditional probability of an earthquake within a decade comes out to be very high (≥0.90) for an elapsed time of 18 years (i.e., 2013). The study also reveals that the location parameter provides more flexibility to the three-parameter Weibull model in comparison with the two-parameter Weibull model. It is therefore suggested that the three-parameter Weibull model is of high value in empirical modeling of earthquake recurrence and seismic hazard assessment.
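The workflow above can be sketched with SciPy; the inter-event times below are synthetic (the Yadav et al. catalog values are not reproduced here), and the true shape/location/scale are arbitrary assumptions:

```python
import numpy as np
from scipy import stats

# Synthetic inter-event times (years) standing in for a 20-event catalog
rng = np.random.default_rng(42)
t = stats.weibull_min.rvs(c=1.6, loc=2.0, scale=8.0, size=20, random_state=rng)

# MLE of the three parameters (shape c, location loc, scale)
c_hat, loc_hat, scale_hat = stats.weibull_min.fit(t)

# Conditional probability of an event within the next 10 years,
# given an elapsed time of 18 years since the last event
F = lambda x: stats.weibull_min.cdf(x, c_hat, loc_hat, scale_hat)
p_cond = (F(18.0 + 10.0) - F(18.0)) / (1.0 - F(18.0))
```

The conditional probability is the hazard-style quantity reported in the abstract; with real catalog data its value depends entirely on the fitted parameters.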
Web-based Services for Earth Observing and Model Data in National Applications and Hazards
NASA Astrophysics Data System (ADS)
Kafatos, M.; Boybeyi, Z.; Cervone, G.; di, L.; Sun, D.; Yang, C.; Yang, R.
2005-12-01
The ever-growing large volumes of Earth system science data, collected by Earth observing platforms, in situ stations and as model output data, are increasingly being used by discipline scientists and by wider classes of users. In particular, applications of Earth system science data to environmental and hazards applications, as well as other national applications, require tailored or specialized data, as well as web-based tools and infrastructure. The latter are driven by applications and usage drivers which include ease of access, visualization of complex data, ease of producing value-added data, GIS and open source analysis usage, metadata, etc. Here we present different aspects of such web-based services and access, and discuss several applications in the hazards and environmental areas, including earthquake signatures and observations and model runs of hurricanes. Examples and lessons learned from the Mid-Atlantic Geospatial Information Consortium will be presented. We discuss a NASA-funded, open source on-line data analysis system that is being applied to climate studies for the ESIP Federation. Since enhanced, this project and the next-generation Metadata Integrated Data Analysis System allow users not only to identify data but also to generate new data products on-the-fly. The functionalities extend from limited predefined functions to sophisticated functions described by general-purpose GrADS (Grid Analysis and Display System) commands. The Federation system also allows third-party data products to be combined with local data. Software components are available for converting the output from MIDAS (OPeNDAP) into OGC-compatible software. The on-going Grid efforts at CEOSR and LAITS in the School of Computational Sciences (SCS) include enhancing the functions of Globus to provide support for a geospatial system so the system can share the computing power to handle problems with different peak access times and improve the stability and flexibility of a rapid
TRENT2D WG: a smart web infrastructure for debris-flow modelling and hazard assessment
NASA Astrophysics Data System (ADS)
Zorzi, Nadia; Rosatti, Giorgio; Zugliani, Daniel; Rizzi, Alessandro; Piffer, Stefano
2016-04-01
Mountain regions are naturally exposed to geomorphic flows, which involve large amounts of sediment and induce significant morphological modifications. The physical complexity of this class of phenomena represents a challenging issue for modelling, leading to elaborate theoretical frameworks and sophisticated numerical techniques. In general, geomorphic-flow models have proved to be valid tools in hazard assessment and management. However, model complexity seems to represent one of the main obstacles to the diffusion of advanced modelling tools among practitioners and stakeholders, although the EU Flood Directive (2007/60/EC) requires risk management and assessment to be based on "best practices and best available technologies". Furthermore, several cutting-edge models are not particularly user-friendly, and multiple stand-alone software tools are needed to pre- and post-process modelling data. For all these reasons, users often resort to quicker and rougher approaches, possibly leading to unreliable results. Therefore, some effort seems necessary to overcome these drawbacks, with the purpose of supporting and encouraging a widespread diffusion of the most reliable, although sophisticated, modelling tools. With this aim, this work presents TRENT2D WG, a new smart modelling solution for the state-of-the-art model TRENT2D (Armanini et al., 2009; Rosatti and Begnudelli, 2013), which simulates debris flows and hyperconcentrated flows adopting a two-phase description over a mobile bed. TRENT2D WG is a web infrastructure joining the advantages offered by the SaaS (Software as a Service) delivery model and by WebGIS technology, and hosting a complete and user-friendly working environment for modelling. In order to develop TRENT2D WG, the model TRENT2D was converted into a service and exposed on a cloud server, transferring computational burdens from the user hardware to a high-performing server and reducing computational time. Then, the system was equipped with an
NASA Astrophysics Data System (ADS)
Ismail-Zadeh, A.; Sokolov, V. Y.
2013-12-01
Ground shaking due to recent catastrophic earthquakes is estimated to be significantly higher than that predicted by probabilistic seismic hazard analysis (PSHA). One reason is that extreme (large-magnitude and rare) seismic events are in most cases not accounted for in PSHA, due to the lack of information and the unknown recurrence times of the extremes. We present a new approach to the assessment of regional seismic hazard, which incorporates observed (recorded and historic) seismicity and modeled extreme events. We apply this approach to PSHA of the Tibet-Himalayan region. The large-magnitude events simulated for several thousand years in models of lithospheric block-and-fault dynamics, consistent with the regional geophysical and geodetic data, are employed together with the observed earthquakes in a Monte-Carlo PSHA. Earthquake scenarios are generated stochastically to sample the magnitude and spatial distribution of seismicity (observed and modeled) as well as the distribution of ground motion for each seismic event. The peak ground acceleration (PGA) values (that is, ground shaking at a site) expected to be exceeded at least once in 50 years with a probability of 10% are mapped and compared to those PGA values observed and predicted earlier. The results show that the PGA values predicted by our assessment fit the observed ground shaking due to the 2008 Wenchuan earthquake much better than those predicted by conventional PSHA. Our approach to seismic hazard assessment provides a better understanding of ground shaking due to possible large-magnitude events and could be useful for risk assessment, earthquake engineering purposes, and emergency planning.
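A note on the hazard level used above: under a Poisson occurrence model, a "10% probability of exceedance in 50 years" corresponds to a mean return period of roughly 475 years, which can be checked in two lines:

```python
import math

# PGA level with a 10% chance of being exceeded at least once in 50 years:
# P = 1 - exp(-t/T)  =>  T = -t / ln(1 - P)
p_exceed, t_years = 0.10, 50.0
return_period = -t_years / math.log(1.0 - p_exceed)  # ~475 years
```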
Predicting the Survival Time for Bladder Cancer Using an Additive Hazards Model in Microarray Data
TAPAK, Leili; MAHJUB, Hossein; SADEGHIFAR, Majid; SAIDIJAM, Massoud; POOROLAJAL, Jalal
2016-01-01
Background: One substantial part of microarray studies is to predict patients' survival based on their gene expression profiles. Variable selection techniques are powerful tools for handling high dimensionality in the analysis of microarray data. However, these techniques have not been investigated in the competing risks setting. This study aimed to investigate the performance of four sparse variable selection methods in estimating the survival time. Methods: The data included 1381 gene expression measurements and clinical information from 301 patients with bladder cancer operated on in the years 1987 to 2000 in hospitals in Denmark, Sweden, Spain, France, and England. Four methods, the least absolute shrinkage and selection operator, smoothly clipped absolute deviation, the smooth integration of counting and absolute deviation, and the elastic net, were utilized for simultaneous variable selection and estimation under an additive hazards model. The area under the ROC curve, the Brier score, and the c-index were used to compare the methods. Results: The median follow-up time for all patients was 47 months. The elastic net approach outperformed the other methods. It had the lowest integrated Brier score (0.137±0.07) and the greatest median over-time AUC and c-index (0.803±0.06 and 0.779±0.13, respectively). Five of the 19 genes selected by the elastic net were significant (P<0.05) under an additive hazards model. It was indicated that the expression of RTN4, SON, IGF1R and CDC20 decreases the survival time, while the expression of SMARCAD1 increases it. Conclusion: The elastic net had higher capability than the other methods for predicting the survival time of patients with bladder cancer in the presence of competing risks, based on the additive hazards model. PMID:27114989
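For readers unfamiliar with the c-index reported above, a minimal pairwise implementation (with toy data, not the bladder-cancer cohort) might look like:

```python
def concordance_index(time, event, risk):
    """Harrell's c-index: among comparable pairs (the earlier observed
    time is an event), the fraction where the subject failing earlier
    has the higher predicted risk; risk ties count one half."""
    concordant = comparable = 0.0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if time[i] < time[j] and event[i] == 1:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

# Perfectly concordant toy data: shorter survival <-> higher risk score
c = concordance_index([1, 2, 3, 4], [1, 1, 1, 1], [0.9, 0.7, 0.4, 0.1])  # 1.0
```

A value of 0.5 corresponds to random predictions and 1.0 to perfect ranking, which puts the reported 0.779 in context.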
Testing seismic hazard models with Be-10 exposure ages for precariously balanced rocks
NASA Astrophysics Data System (ADS)
Rood, D. H.; Anooshehpoor, R.; Balco, G.; Brune, J.; Brune, R.; Ludwig, L. Grant; Kendrick, K.; Purvance, M.; Saleeby, I.
2012-04-01
Currently, the only empirical tool available to test maximum earthquake ground motions spanning timescales of 10 ky-1 My is the use of fragile geologic features, including precariously balanced rocks (PBRs). The ages of PBRs together with their areal distribution and mechanical stability ("fragility") constrain probabilistic seismic hazard analysis (PSHA) over long timescales; pertinent applications include the USGS National Seismic Hazard Maps (NSHM) and tests for ground motion models (e.g., Cybershake). Until recently, age constraints for PBRs were limited to varnish microlamination (VML) dating techniques and sparse cosmogenic nuclide data; however, VML methods yield minimum limiting ages for individual rock surfaces, and the interpretations of cosmogenic nuclide data were ambiguous because they did not account for the exhumation history of the PBRs or the complex shielding of cosmic rays. We have recently published a robust method for the exposure dating of PBRs combining Be-10 profiles, a numerical model, and a three-dimensional model for each PBR constructed using photogrammetry (Balco et al., 2011, Quaternary Geochronology). Here, we use this method to calculate new exposure ages and fragilities for 6 PBRs in southern California (USA) near the San Andreas, San Jacinto, and Elsinore faults at the Lovejoy Buttes, Round Top, Pacifico, Beaumont South, Perris, and Benton Road sites (in addition to the recently published age of 18.7 +/- 2.8 ka for a PBR at the Grass Valley site). We combine our ages and fragilities for each PBR, and use these data to test the USGS 2008 NSHM PGA with 2% in 50 year probability, USGS 2008 PSHA deaggregations, and basic hazard curves from USGS 2002 NSHM data.
Testing seismic hazard models with Be-10 exposure ages for precariously balanced rocks
NASA Astrophysics Data System (ADS)
Rood, D. H.; Anooshehpoor, R.; Balco, G.; Biasi, G. P.; Brune, J. N.; Brune, R.; Grant Ludwig, L.; Kendrick, K. J.; Purvance, M.; Saleeby, I.
2012-12-01
Currently, the only empirical tool available to test maximum earthquake ground motions spanning timescales of 10 ky-1 My is the use of fragile geologic features, including precariously balanced rocks (PBRs). The ages of PBRs together with their areal distribution and mechanical stability ("fragility") constrain probabilistic seismic hazard analysis (PSHA) over long timescales; pertinent applications include the USGS National Seismic Hazard Maps (NSHM) and tests for ground motion models (e.g., Cybershake). Until recently, age constraints for PBRs were limited to varnish microlamination (VML) dating techniques and sparse cosmogenic nuclide data; however, VML methods yield minimum limiting ages for individual rock surfaces, and the interpretations of cosmogenic nuclide data were ambiguous because they did not account for the exhumation history of the PBRs or the complex shielding of cosmic rays. We have recently published a robust method for the exposure dating of PBRs combining Be-10 profiles, a numerical model, and a three-dimensional shape model for each PBR constructed using photogrammetry (Balco et al., 2011, Quaternary Geochronology). Here, we use our published method to calculate new exposure ages for PBRs at 6 sites in southern California near the San Andreas, San Jacinto, and Elsinore faults, including: Lovejoy Buttes (9 +/- 1 ka), Round Top (35 +/- 1 ka), Pacifico (19 +/- 1 ka, but with a poor fit to data), Beaumont South (17 +/- 2 ka), Perris (24 +/- 2 ka), and Benton Road (40 +/- 1 ka), in addition to the recently published age of 18.5 +/- 2.0 ka for a PBR at the Grass Valley site. We combine our ages and fragilities for each PBR, and use these data to test the USGS 2008 NSHM PGA with 2% in 50 year probability, USGS 2008 PSHA deaggregations, and basic hazard curves from USGS 2002 NSHM data.
Atmospheric Electrical Modeling in Support of the NASA F-106 Storm Hazards Project
NASA Technical Reports Server (NTRS)
Helsdon, John H., Jr.
1988-01-01
A recently developed storm electrification model (SEM) is used to investigate the operating environment of the F-106 airplane during the NASA Storm Hazards Project. The model is two-dimensional, time dependent, and uses a bulk-water microphysical parameterization scheme. Electric charges and fields are included, and the model is fully coupled dynamically, microphysically and electrically. Simulation of one flight showed that a high electric field developed at the aircraft's operating altitude (28 kft) and that a strong electric field would also be found below 20 kft; however, this low-altitude, high-field region was associated with the presence of small hail, posing a hazard to the aircraft. An operational procedure to increase the frequency of low-altitude lightning strikes was suggested. To further the understanding of lightning within the cloud environment, a parameterization of the lightning process was included in the SEM. It accounted for the initiation, propagation, termination, and charge redistribution associated with an intracloud discharge. Finally, a randomized lightning propagation scheme was developed, and the effects of cloud particles on the initiation of lightning were investigated.
Podlaski, Rafał; Roesch, Francis A
2014-03-01
In recent years finite-mixture models have been employed to approximate and model empirical diameter at breast height (DBH) distributions. We used two-component mixtures of either the Weibull distribution or the gamma distribution for describing the DBH distributions of mixed-species, two-cohort forest stands, to analyse the relationships between the DBH components, age cohorts and dominant species, and to assess the significance of differences between the mixture distributions and the kernel density estimates. The data consisted of plots from the Świętokrzyski National Park (Central Poland) and areas close to and including the North Carolina section of the Great Smoky Mountains National Park (USA; southern Appalachians). The fit of the mixture Weibull model to empirical DBH distributions had a precision similar to that of the mixture gamma model; a slightly less accurate estimate was obtained with the kernel density estimator. Generally, in the two-cohort, two-storied, multi-species stands in the southern Appalachians, the two-component DBH structure was associated with age cohort and dominant species. The first DBH component of the mixture model was associated with the first dominant species (sp1) occurring in the young age cohort (e.g., sweetgum, eastern hemlock); to a lesser degree, the second DBH component was associated with the second dominant species (sp2) occurring in the old age cohort (e.g., loblolly pine, red maple). In the two-cohort, partly multilayered stands in the Świętokrzyski National Park, the DBH structure was usually associated only with age cohorts (two dominant species often occurred in both the young and old age cohorts). When empirical DBH distributions representing stands of complex structure are approximated using mixture models, the convergence of the estimation process is often significantly dependent on the starting strategies. Depending on the number of DBHs measured, three methods for choosing the initial values are recommended: min.k/max.k, 0.5/1.5/mean
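The two-component mixture above can be sketched directly; the mixing weight and Weibull parameters below are illustrative choices (a "young" cohort near 13 cm and an "old" cohort near 40 cm), not values fitted to the Polish or Appalachian plots:

```python
import numpy as np
from scipy import stats

def weibull_mixture_pdf(x, w, c1, scale1, c2, scale2):
    """Two-component Weibull mixture density for DBH; w is the mixing
    weight of the first (young-cohort) component."""
    return (w * stats.weibull_min.pdf(x, c1, scale=scale1)
            + (1.0 - w) * stats.weibull_min.pdf(x, c2, scale=scale2))

x = np.linspace(0.0, 120.0, 4001)  # DBH grid in cm
pdf = weibull_mixture_pdf(x, w=0.6, c1=2.0, scale1=15.0, c2=3.5, scale2=45.0)
area = np.sum((pdf[1:] + pdf[:-1]) * np.diff(x)) / 2.0  # trapezoid rule, ~1
```

Fitting such a mixture by maximum likelihood (e.g., via EM) is where the starting-value strategies mentioned in the abstract matter.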
Topography-based modeling of large rockfalls and application to hazard assessment
NASA Astrophysics Data System (ADS)
Hergarten, S.
2012-07-01
Rockfalls are among the most important natural hazards in mountainous regions. Similarly to earthquakes and wildfires, their sizes follow a power-law distribution covering an enormous range. This paper presents presumably the first modeling approach that explains this power-law distribution quantitatively. Applied to the European Alps, the Himalayas and the Rocky Mountains, the model suggests that a power-law exponent of 1.35 with respect to the detached volume is a universal property of rockfalls. Beyond reproducing and explaining existing statistical data, the model allows an estimate of the size and frequency of the largest possible rockfalls in a region, which so far cannot be derived from available rockfall inventories.
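To make the exponent concrete: if the 1.35 is read as the exponent of the cumulative (survivor) distribution of detached volume, the corresponding density exponent is 2.35, and it can be recovered from synthetic volumes with the continuous maximum-likelihood (Hill) estimator. A sketch under those assumptions (the minimum volume and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
v_min, alpha_pdf = 100.0, 2.35  # m^3; density exponent (cumulative exponent 1.35)
n = 100_000

# Inverse-CDF sampling from p(v) ~ v**(-alpha_pdf) for v >= v_min
u = rng.random(n)
v = v_min * (1.0 - u) ** (-1.0 / (alpha_pdf - 1.0))

# Continuous maximum-likelihood (Hill) estimator of the density exponent
alpha_hat = 1.0 + n / np.sum(np.log(v / v_min))
```

One practical consequence of such a heavy tail is that the largest event in an inventory dominates the total detached volume, which is why extrapolation beyond the inventory requires a model.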
A statistical model for seismic hazard assessment of hydraulic-fracturing-induced seismicity
NASA Astrophysics Data System (ADS)
Hajati, T.; Langenbruch, C.; Shapiro, S. A.
2015-12-01
We analyze the interevent time distribution of hydraulic-fracturing-induced seismicity collected during 18 stages in four different regions. We identify a universal statistical process describing the distribution of hydraulic-fracturing-induced events in time. The distribution of waiting times between subsequently occurring events is given by the exponential probability density function of the homogeneous Poisson process. Our findings suggest that hydraulic-fracturing-induced seismicity is directly triggered by the relaxation of the stress and pore pressure perturbation initially created by the injection. Compared to this relaxation, the stress transfer caused by the occurrence of preceding seismic events is therefore largely insignificant for the seismogenesis of subsequently occurring events. We develop a statistical model to compute the occurrence probability of hydraulic-fracturing-induced seismicity. This model can be used to assess the seismic hazard associated with hydraulic fracturing operations. No aftershock triggering needs to be included in the statistical model.
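The Poisson waiting-time result can be sketched numerically; the event rate and forecast window below are assumptions for illustration, not values from the four regions studied:

```python
import numpy as np

rng = np.random.default_rng(1)
rate = 0.5  # hypothetical induced events per minute during a stage
waits = rng.exponential(scale=1.0 / rate, size=20_000)

# For a homogeneous Poisson process the rate is the reciprocal mean wait
rate_hat = 1.0 / waits.mean()

# Occurrence probability: at least one induced event in the next w minutes
w = 3.0
p_occ = 1.0 - np.exp(-rate_hat * w)
```

The absence of aftershock triggering is exactly what makes this one-parameter exponential description sufficient.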
Dukić, Vanja; Dignam, James
2011-01-01
The multiresolution estimator, developed originally in engineering applications as a wavelet-based method for density estimation, has been recently extended and adapted for estimation of hazard functions (Bouman et al. 2005, 2007). Using the multiresolution hazard (MRH) estimator in the Bayesian framework, we are able to incorporate any a priori desired shape and amount of smoothness in the hazard function. The MRH method’s main appeal is in its relatively simple estimation and inference procedures, making it possible to obtain simultaneous confidence bands on the hazard function over the entire time span of interest. Moreover, these confidence bands properly reflect the multiple sources of uncertainty, such as multiple centers or heterogeneity in the patient population. Also, rather than the commonly employed approach of estimating covariate effects and the hazard function separately, the Bayesian MRH method estimates all of these parameters jointly, thus resulting in properly adjusted inference about any of the quantities. In this paper, we extend the previously proposed MRH methods (Bouman et al. 2005, 2007) into the hierarchical multiresolution hazard setting (HMRH), to accommodate the case of separate hazard rate functions within each of several strata as well as some common covariate effects across all strata while accounting for within-stratum correlation. We apply this method to examine patterns of tumor recurrence after treatment for early stage breast cancer, using data from two large-scale randomized clinical trials that have substantially influenced breast cancer treatment standards. We implement the proposed model to estimate the recurrence hazard and explore how the shape differs between patients grouped by a key tumor characteristic (estrogen receptor status) and treatment types, after adjusting for other important patient characteristics such as age, tumor size and progesterone level. We also comment on whether the hazards exhibit nonmonotonic
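As context for the MRH estimator, its simplest non-Bayesian relative is a piecewise-constant hazard estimate (events per unit person-time in each interval). A sketch on synthetic exponential data, with invented bin edges:

```python
import numpy as np

def piecewise_hazard(times, events, edges):
    """Events per unit person-time in each interval [lo, hi): a crude,
    non-Bayesian stand-in for one resolution level of an MRH fit."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    rates = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        person_time = (np.clip(times, lo, hi) - lo).sum()
        d = np.sum((times >= lo) & (times < hi) & (events == 1))
        rates.append(d / person_time)
    return rates

# Exponential(rate=1) failure times: the hazard should be ~1 in every bin
rng = np.random.default_rng(3)
t = rng.exponential(1.0, size=50_000)
rates = piecewise_hazard(t, np.ones(t.size), [0.0, 0.5, 1.0, 1.5])
```

The MRH approach effectively smooths and links such bin-level rates across resolution levels while carrying full posterior uncertainty.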
"Developing a multi hazard air quality forecasting model for Santiago, Chile"
NASA Astrophysics Data System (ADS)
Mena, M. A.; Delgado, R.; Hernandez, R.; Saide, P. E.; Cienfuegos, R.; Pinochet, J. I.; Molina, L. T.; Carmichael, G. R.
2013-05-01
Santiago, Chile has reduced annual particulate matter from 69 µg/m3 (in 1989) to 25 µg/m3 (in 2012), mostly by forcing industry, the transport sector, and the residential heating sector to adopt stringent emission standards in order to be allowed to operate on bad air days. Statistical forecasting has been used to predict bad air days, and to trigger pollution control measures, in Santiago for almost two decades. Recently an operational deterministic PM2.5 model has been implemented using WRF-Chem. The model was developed by the University of Iowa and is run at the Chilean Meteorological Office. The model configuration includes high-resolution (2 km) emissions gridding and an updated population distribution using 2008 data from LANDSCAN. The model is run with a 2-day spinup and a 5-day forecast. This model has allowed a preventive approach to pollution control measures, as episodes are the result of multiple days of bad dispersion. Decreeing air pollution control measures in advance of bad air days resulted in a reduction of 40% in alert days (80 µg/m3 mean 24-h PM2.5) and 66% in "preemergency" days (110 µg/m3 mean 24-h PM2.5) from 2011 to 2012, despite similar meteorological conditions. The model will be deployed under a recently funded Center for Natural Disaster Management, and will be extended to other meteorological hazards such as flooding, high temperatures, storm waves, landslides and UV radiation, among other parameters. This paper will present the results of operational air quality forecasting, and the methodology that will be used to transform WRF-Chem into a multi-hazard forecasting system.
A "mental models" approach to the communication of subsurface hydrology and hazards
NASA Astrophysics Data System (ADS)
Gibson, Hazel; Stewart, Iain S.; Pahl, Sabine; Stokes, Alison
2016-05-01
Communicating information about geological and hydrological hazards relies on appropriately worded communications targeted at the needs of the audience. But what are these needs, and how does the geoscientist discern them? This paper adopts a psychological "mental models" approach to assess the public perception of the geological subsurface, presenting the results of attitudinal studies and surveys in three communities in the south-west of England. The findings reveal important preconceptions and misconceptions regarding the impact of hydrological systems and hazards on the geological subsurface, notably in terms of the persistent conceptualisation of underground rivers and the inferred relations between flooding and human activity. The study demonstrates how such mental models can provide geoscientists with empirical, detailed and generalised data on perceptions surrounding an issue, as well as reveal unexpected outliers in perception that they may not have considered relevant, but which nevertheless may locally influence communication. Using this approach, geoscientists can develop information messages that more directly engage local concerns and create open engagement pathways based on dialogue, which in turn allow both geoscience "experts" and local "non-experts" to come together and understand each other more effectively.
NASA Astrophysics Data System (ADS)
Pradhan, Biswajeet; Lee, Saro; Shattri, Mansor
This paper deals with landslide hazard analysis and cross-application using Geographic Information System (GIS) and remote sensing data for Cameron Highland, Penang Island and Selangor in Malaysia. The aim of this study was to cross-apply and verify a spatial probabilistic model for landslide hazard analysis. Landslide locations were identified in the study area from interpretation of aerial photographs and field surveys. Topographical/geological data and satellite images were collected and processed using GIS and image processing tools. Ten landslide-inducing parameters were considered for the landslide hazard analysis: topographic slope, aspect, curvature and distance from drainage, all derived from the topographic database; geology and distance from lineament, derived from the geologic database; landuse from Landsat satellite images; soil from the soil database; precipitation amount, derived from the rainfall database; and the vegetation index value from SPOT satellite images. These factors were analyzed using an artificial neural network model to generate the landslide hazard map. Each factor's weight was determined by the back-propagation training method. The landslide hazard indices were then calculated using the trained back-propagation weights, and finally the landslide hazard map was generated using GIS tools. Landslide hazard maps were drawn for these three areas using the artificial neural network model trained not only on the data for each area itself but also using the parameter weights calculated from each of the other two areas (nine maps in all), as a cross-check of the validity of the method. For verification, the results of the analyses were compared, in each study area, with actual landslide locations. The verification results showed sufficient agreement between the presumptive hazard maps and the existing data on landslide areas.
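A toy version of the back-propagation step described above, on synthetic data (not the Malaysian inventories); the ten inputs stand in for the ten factors, and the network size, learning rate and iteration count are arbitrary choices:

```python
import numpy as np

# Synthetic stand-in: ten susceptibility factors -> landslide (1) / no (0)
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
w_true = rng.normal(size=10)
y = (X @ w_true + 0.1 * rng.normal(size=500) > 0).astype(float)

# One hidden layer of 5 tanh units, sigmoid output
W1, b1 = 0.1 * rng.normal(size=(10, 5)), np.zeros(5)
W2, b2 = 0.1 * rng.normal(size=5), 0.0
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(3000):
    H = np.tanh(X @ W1 + b1)             # forward pass
    p = sigmoid(H @ W2 + b2)
    g = (p - y) / len(y)                 # dLoss/dlogit for cross-entropy
    dW2, db2 = H.T @ g, g.sum()
    dH = np.outer(g, W2) * (1.0 - H**2)  # back-propagate through tanh
    dW1, db1 = X.T @ dH, dH.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# Per-cell hazard index from the trained weights
hazard_index = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2)
accuracy = np.mean((hazard_index > 0.5) == (y == 1))
```

In the GIS workflow, the trained weights would then be applied to every raster cell of the ten factor layers to produce the hazard map.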
Socio-economic vulnerability to natural hazards - proposal for an indicator-based model
NASA Astrophysics Data System (ADS)
Eidsvig, U.; McLean, A.; Vangelsten, B. V.; Kalsnes, B.; Ciurean, R. L.; Argyroudis, S.; Winter, M.; Corominas, J.; Mavrouli, O. C.; Fotopoulou, S.; Pitilakis, K.; Baills, A.; Malet, J. P.
2012-04-01
Vulnerability assessment, with respect to natural hazards, is a complex process that must consider multiple dimensions of vulnerability, including both physical and social factors. Physical vulnerability refers to conditions of physical assets, and may be modeled by the intensity and magnitude of the hazard, the degree of physical protection provided by the natural and built environment, and the physical robustness of the exposed elements. Social vulnerability refers to the underlying factors leading to the inability of people, organizations, and societies to withstand impacts from natural hazards. Social vulnerability models can be used in combination with physical vulnerability models to estimate both direct losses, i.e. losses that occur during and immediately after the impact, and indirect losses, i.e. long-term effects of the event. The direct impact of a landslide typically includes casualties and damage to buildings and infrastructure, while indirect losses may include, for example, business closures or limitations in public services. The direct losses are often assessed using physical vulnerability indicators (e.g. construction material, height of buildings), while indirect losses are mainly assessed using social indicators (e.g. economic resources, demographic conditions). Within the EC-FP7 SafeLand research project, an indicator-based method was proposed to assess relative socio-economic vulnerability to landslides. The indicators represent the underlying factors which influence a community's ability to prepare for, deal with, and recover from the damage associated with landslides. The proposed model includes indicators representing demographic, economic and social characteristics as well as indicators representing the degree of preparedness and recovery capacity. Although the model focuses primarily on the indirect losses, it could easily be extended to include more physical indicators which account for the direct losses. Each indicator is individually
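A minimal weighted-aggregation sketch of such an indicator model; the indicator names, values and weights are invented for illustration and do not reproduce the SafeLand scheme:

```python
# Hypothetical indicator values normalized to [0, 1] (1 = most vulnerable)
indicators = {"economic_resources": 0.7, "elderly_share": 0.4,
              "preparedness": 0.2, "recovery_capacity": 0.5}

# Hypothetical expert weights reflecting each indicator's importance
weights = {"economic_resources": 3, "elderly_share": 2,
           "preparedness": 2, "recovery_capacity": 3}

# Relative socio-economic vulnerability as a weighted mean in [0, 1]
vulnerability = (sum(weights[k] * indicators[k] for k in indicators)
                 / sum(weights.values()))
```

Because the output is a relative index, it is mainly useful for ranking communities or scenarios rather than for predicting absolute losses.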
Suzette Payne
2007-08-01
This report summarizes how the effects of the sedimentary interbedded basalt stratigraphy were modeled in the probabilistic seismic hazard analysis (PSHA) of the Idaho National Laboratory (INL). Drill holes indicate that the bedrock beneath INL facilities is composed of about 1.1 km of alternating layers of basalt rock and loosely consolidated sediments. Alternating layers of hard rock and "soft" loose sediments tend to attenuate seismic energy more than uniform rock does, due to scattering and damping. The INL PSHA incorporated the effects of the sedimentary interbedded basalt stratigraphy by developing site-specific shear (S) wave velocity profiles. The profiles were used in the PSHA to model the near-surface site response by developing site-specific stochastic attenuation relationships.
[Protection of workers' health against occupational psycho-social hazards--theoretical models].
Dudek, B; Waszkowska, M
1996-01-01
Occupational Health Services are not yet equipped with tools that would permit them to include protection of workers' health against occupational psycho-social hazards in their prophylactic activities. The authors present a model of such a system, its objectives, and the conditions which should be satisfied in order to put the system into operation. The model discussed is somewhat of an ideal solution which does not necessarily adhere to reality, but it sets tasks and identifies lines of activities to be carried out at the Department of Occupational Psychology, the Nofer Institute of Occupational Medicine, Lodz. These activities are aimed at monitoring and evaluating the health risk generated by psycho-social factors. PMID:8656994
Multiple Ways to Solve Proportions
ERIC Educational Resources Information Center
Ercole, Leslie K.; Frantz, Marny; Ashline, George
2011-01-01
When solving problems involving proportions, students may intuitively draw on strategies that connect to their understanding of fractions, decimals, and percents. These two statements--"Instruction in solving proportions should include methods that have a strong intuitive basis" and "Teachers should begin instruction with more intuitive…
A seismic source zone model for the seismic hazard assessment of the Italian territory
NASA Astrophysics Data System (ADS)
Meletti, Carlo; Galadini, Fabrizio; Valensise, Gianluca; Stucchi, Massimiliano; Basili, Roberto; Barba, Salvatore; Vannucci, Gianfranco; Boschi, Enzo
2008-04-01
We designed a new seismic source model for Italy to be used as an input for country-wide probabilistic seismic hazard assessment (PSHA) in the frame of the compilation of a new national reference map. We started off by reviewing existing models available for Italy and for other European countries, then discussed the main open issues in the current practice of seismogenic zoning. The new model, termed ZS9, is largely based on data collected in the past 10 years, including historical earthquakes and instrumental seismicity, active faults and their seismogenic potential, and seismotectonic evidence from recent earthquakes. This information allowed us to propose new interpretations for poorly understood areas where the new data are in conflict with assumptions made in designing the previous and widely used model ZS4. ZS9 comprises 36 zones where earthquakes with Mw ≥ 5 are expected. It also assumes that earthquakes with Mw up to 5 may occur anywhere outside the seismogenic zones, although the associated probability is rather low. Special care was taken to ensure that each zone sampled a large enough number of earthquakes so that we could compute reliable earthquake production rates. Although it was drawn following criteria that are standard practice in PSHA, ZS9 is also innovative in that every zone is further characterised by its mean seismogenic depth (the depth of the crustal volume that will presumably release future earthquakes) and predominant focal mechanism (the most likely rupture mechanism). These properties were determined using instrumental data, and only in a limited number of cases did we resort to geologic constraints and expert judgment to cope with a lack of data or conflicting indications. These attributes allow ZS9 to be used with more accurate regionalized, depth-dependent attenuation relations, and are ultimately expected to significantly increase the reliability of seismic hazard estimates.
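Zone-by-zone earthquake production rates of the kind computed for ZS9 are typically derived from a truncated Gutenberg-Richter relation fitted to each zone's catalogue. A hedged sketch with invented a-, b- and Mmax values (not ZS9 parameters):

```python
def gr_annual_rate(a: float, b: float, m: float) -> float:
    """Annual rate of earthquakes with magnitude >= m from the
    Gutenberg-Richter relation log10 N(m) = a - b*m."""
    return 10.0 ** (a - b * m)

def truncated_gr_rate(a: float, b: float, m: float, m_max: float) -> float:
    """Rate of events in [m, m_max] for a truncated G-R model:
    events above the zone's maximum magnitude are excluded."""
    if m >= m_max:
        return 0.0
    return gr_annual_rate(a, b, m) - gr_annual_rate(a, b, m_max)

# Hypothetical zone: a = 3.5, b = 1.0, Mmax = 6.8 (illustrative values only)
rate_m5 = truncated_gr_rate(3.5, 1.0, 5.0, 6.8)  # annual rate of Mw >= 5
```

The truncation term is what encodes the assumption that each zone has a finite maximum magnitude.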
NASA Astrophysics Data System (ADS)
Álvarez-Gómez, J. A.; Aniel-Quiroga, Í.; Gutiérrez-Gutiérrez, O. Q.; Larreynaga, J.; González, M.; Castro, M.; Gavidia, F.; Aguirre-Ayerbe, I.; González-Riancho, P.; Carreño, E.
2013-11-01
El Salvador is the smallest and most densely populated country in Central America; its coast is approximately 320 km long and comprises 29 municipalities with more than 700 000 inhabitants. In El Salvador there were 15 recorded tsunamis between 1859 and 2012, 3 of them causing damage and hundreds of victims. Hazard assessment is commonly based on numerical propagation models for earthquake-generated tsunamis and can be approached through both probabilistic and deterministic methods. A deterministic approximation has been applied in this study as it provides essential information for coastal planning and management. The objective of the research was twofold: on the one hand, the characterization of the threat over the entire coast of El Salvador, and on the other, the computation of flooding maps for the three main localities of the Salvadorian coast. For the latter we developed high-resolution flooding models. For the former, owing to the extent of the coastal area, we computed maximum elevation maps, and from the elevation in the near shore we computed an estimation of the run-up and the flooded area using empirical relations. We have considered local sources located in the Middle America Trench, characterized seismotectonically, and distant sources in the rest of the Pacific Basin, using historical and recent earthquakes and tsunamis. We used a hybrid finite differences-finite volumes numerical model in this work, based on the linear and non-linear shallow water equations, to simulate a total of 24 earthquake-generated tsunami scenarios. Our results show that on the western Salvadorian coast, run-up values higher than 5 m are common, while in the eastern area, approximately from La Libertad to the Gulf of Fonseca, the run-up values are lower. The areas most exposed to flooding are the lowlands in the Lempa River delta and the Barra de Santiago Western Plains. The results of the empirical approximation used for the whole country are similar to the results
Development of models to inform a national Daily Landslide Hazard Assessment for Great Britain
NASA Astrophysics Data System (ADS)
Dijkstra, Tom A.; Reeves, Helen J.; Dashwood, Claire; Pennington, Catherine; Freeborough, Katy; Mackay, Jonathan D.; Uhlemann, Sebastian S.; Chambers, Jonathan E.; Wilkinson, Paul B.
2015-04-01
were combined with records of observed landslide events to establish which antecedent effective precipitation (AEP) signatures of different durations could be used as a pragmatic proxy for the occurrence of landslides. It was established that 1-, 7- and 90-day AEP provided the most significant correlations, and these were used to calculate the probability of at least one landslide occurring. The method was then extended over the period 2006 to 2014 and the results were evaluated against observed occurrences. It is recognised that AEP is a relatively poor proxy for simulating effective stress conditions along potential slip surfaces. However, the temporal pattern of landslide probability compares well with the observed occurrences and provides a potential benefit to assist with the DLHA. Further work is continuing to fine-tune the model for landslide type, better spatial resolution of effective precipitation input, and cross-reference to models that capture changes in water balance and conditions along slip surfaces. The latter is facilitated by intensive research at several field laboratories, such as the Hollin Hill site in Yorkshire, England. At this site, a decade of activity has generated a broad range of research and a wealth of data. This paper reports on one example of recent work: the characterisation of near-surface hydrology using infiltration experiments in which hydrological pathways are captured, among other methods, by electrical resistivity tomography. This research, which has further developed our understanding of soil moisture movement in a heterogeneous landslide complex, has highlighted the importance of establishing detailed ground models to enable determination of landslide potential at high resolution. In turn, the knowledge gained through this research is used to enhance the expertise for the daily landslide hazard assessments at a national scale.
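An AEP-based proxy of the kind described above can be sketched as trailing precipitation sums feeding a logistic probability of at least one landslide. The rainfall series and coefficients below are invented for illustration; the actual model form and calibration are not reproduced here:

```python
import math

def antecedent_precip(daily_mm, window):
    """Antecedent effective precipitation: sum over the trailing window (days)."""
    return sum(daily_mm[-window:])

def landslide_probability(aep_1, aep_7, aep_90, coeffs):
    """Probability of at least one landslide from a logistic model on
    1-, 7- and 90-day AEP. Coefficients are hypothetical."""
    b0, b1, b7, b90 = coeffs
    z = b0 + b1 * aep_1 + b7 * aep_7 + b90 * aep_90
    return 1.0 / (1.0 + math.exp(-z))

# 90 days of invented effective precipitation (mm), wet spell near the end
rain = [0.0] * 83 + [2.0, 5.0, 12.0, 30.0, 8.0, 0.0, 1.0]
p = landslide_probability(
    antecedent_precip(rain, 1),
    antecedent_precip(rain, 7),
    antecedent_precip(rain, 90),
    coeffs=(-6.0, 0.05, 0.03, 0.01),  # hypothetical coefficients
)
```

The three windows let short intense bursts and long-term wetness contribute separately, which is why multiple AEP durations outperform any single one.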
NASA Astrophysics Data System (ADS)
Zolfaghari, Mohammad R.
2009-07-01
Recent achievements in computer and information technology have provided the necessary tools to extend the application of probabilistic seismic hazard mapping from its traditional engineering use to many other applications, such as risk mitigation, disaster management, post-disaster recovery planning, and catastrophe loss estimation and risk management. Because of gaps in knowledge of the factors controlling seismic hazards, uncertainties are associated with every step involved in developing and using seismic hazard models. While some of these uncertainties can be controlled by more accurate and reliable input data, most of the data and assumptions used in seismic hazard studies remain highly uncertain and contribute to the uncertainty of the final results. This paper describes a new methodology for the assessment of seismic hazard. The proposed approach provides a practical means of better capturing spatial variations of seismological and tectonic characteristics, which allows better treatment of their uncertainties. In the proposed approach, GIS raster-based data models are used to represent geographical features in a cell-based system. The cell-based source model proposed in this paper provides a framework for implementing many geographically referenced seismotectonic factors in seismic hazard modelling, such as seismic source boundaries, rupture geometry, seismic activity rate, focal depth and the choice of attenuation functions. The proposed methodology improves several aspects of the standard analytical tools currently used for the assessment and mapping of regional seismic hazard, makes the best use of recent advances in computer software and hardware, and is well structured for implementation with conventional GIS tools.
On tsunami inundation modeling for hazard estimation at three coastal areas in Indonesia
NASA Astrophysics Data System (ADS)
Leschka, Stefan; Zosseder, Kai; Post, Joachim; Larsen, Ole
2010-05-01
Numerical modeling deals with physical phenomena, of which a model can only consider a selection. A specific event, as well as the description of the terrain, has to be represented adequately by model input data. Furthermore, model results are averaged quantities that are supposed to be representative of an area. They have to be interpreted with respect to the data used and the model simplifications, and compared with physical data, leading to a validation of the model, before they can be used for their purpose. In tsunami modeling, applying these steps is very difficult because of the limited availability of the required data. Furthermore, some phenomena, e.g. a tsunamigenic earthquake and inundation, are still not adequately understood. These points introduce an uncertainty that accompanies every inundation result. Especially when modeling results are to be applied in hazard assessment, a critical discussion of the uncertainties is required. In this study, a non-linear shallow water model with finite volume discretization has been used to calculate wave propagation from the source region to the shoreline, and inundation. Terrain roughness has been implemented using the quadratic friction law. Drawing on experience from on-site surveys in the areas of Cilacap, Kuta and Padang (Indonesia), sensitivity tests have been performed varying bathymetry data and Manning values. One hypothetical tsunamigenic earthquake has been applied to all areas, using similar source parameters and distances between the epicenter and the particular area. The Manning values have been generated on the basis of land use data, considering energy losses due to drag and inertia. The results have been analyzed with respect to flow depths and flow velocities onshore. In all three areas, inundation depths show low sensitivity to Manning values, while onshore flow velocities show high sensitivity. The fluxes have been determined from flow depths and velocities and classified using stability
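The quadratic friction law referred to above is usually Manning's, in which the friction slope scales with n² u|u| / h^(4/3). A minimal sketch, with an illustrative roughness value for built-up terrain (the study's land-use-based Manning values are not reproduced):

```python
def friction_slope(n: float, u: float, h: float) -> float:
    """Manning friction slope S_f = n^2 * u*|u| / h^(4/3),
    the quadratic friction law used in shallow-water models."""
    return n * n * u * abs(u) / h ** (4.0 / 3.0)

def bed_shear_stress(n, u, h, rho=1000.0, g=9.81):
    """Bed shear stress tau = rho * g * h * S_f, in N/m^2."""
    return rho * g * h * friction_slope(n, u, h)

# Illustrative onshore flow: n = 0.06 (rough, built-up area),
# u = 3 m/s, h = 1.5 m -- all values hypothetical
tau = bed_shear_stress(0.06, 3.0, 1.5)
```

Because the friction term scales with u|u| but only weakly with depth, it damps velocities much more strongly than it changes inundation depths, consistent with the sensitivity pattern the abstract reports.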
Petersen, Mark D.; Zeng, Yuehua; Haller, Kathleen M.; McCaffrey, Robert; Hammond, William C.; Bird, Peter; Moschetti, Morgan; Shen, Zhengkang; Bormann, Jayne; Thatcher, Wayne
2014-01-01
The 2014 National Seismic Hazard Maps for the conterminous United States incorporate additional uncertainty in the fault slip-rate parameters that control earthquake-activity rates, beyond what was applied in previous versions of the hazard maps. This additional uncertainty is accounted for by new geodesy- and geology-based slip-rate models for the Western United States. The models considered include an updated geologic model based on expert opinion and four combined inversion models informed by both geologic and geodetic input. The two block models considered indicate significantly higher slip rates than the expert-opinion model and the two fault-based combined inversion models. For the hazard maps, we apply 20 percent weight to each model, with equal weighting for the two fault-based models. Off-fault geodetic-based models were not considered in this version of the maps. The resulting changes to the hazard maps are generally less than 0.05 g (acceleration of gravity). Future research will improve the maps and interpret differences between the new models.
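Logic-tree-style weighting of alternative slip-rate models, as described above, amounts to a weighted mean per fault. A sketch for a single hypothetical fault; the equal 20 percent weights follow the abstract, but the individual rate values are invented:

```python
def combine_slip_rates(models, weights):
    """Weighted mean slip rate (mm/yr) across alternative models,
    mirroring logic-tree branch weighting."""
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    return sum(m * w for m, w in zip(models, weights))

# Hypothetical fault: geologic model plus four combined inversion models (mm/yr)
rates = [1.2, 1.0, 1.1, 1.6, 1.7]      # invented values; last two "block-like"
weights = [0.2, 0.2, 0.2, 0.2, 0.2]    # 20 percent each, per the abstract
mean_rate = combine_slip_rates(rates, weights)
```

With equal weights the higher block-model rates pull the mean up only modestly, which is consistent with the small (< 0.05 g) changes reported in the resulting maps.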
NASA Astrophysics Data System (ADS)
Hayes, P.; Trigg, J. L.; Stauffer, D.; Hunter, G.; McQueen, J.
2006-05-01
Consequence assessment (CA) operations are those processes that attempt to mitigate negative impacts of incidents involving hazardous materials such as chemical, biological, radiological, nuclear, and high explosive (CBRNE) agents, facilities, weapons, or transportation. Incident types range from accidental spillage of chemicals at/en route to/from a manufacturing plant, to the deliberate use of radiological or chemical material as a weapon in a crowded city. The impacts of these incidents are highly variable, from little or no impact to catastrophic loss of life and property. Local and regional scale atmospheric conditions strongly influence atmospheric transport and dispersion processes in the boundary layer, and the extent and scope of the spread of dangerous materials in the lower levels of the atmosphere. Therefore, CA personnel charged with managing the consequences of CBRNE incidents must have detailed knowledge of current and future weather conditions to accurately model potential effects. A meteorology team was established at the U.S. Defense Threat Reduction Agency (DTRA) to provide weather support to CA personnel operating DTRA's CA tools, such as the Hazard Prediction and Assessment Capability (HPAC) tool. The meteorology team performs three main functions: 1) regular provision of meteorological data for use by personnel using HPAC, 2) determination of the best performing medium-range model forecast for the 12 - 48 hour timeframe and 3) provision of real-time help-desk support to users regarding acquisition and use of weather in HPAC CA applications. The normal meteorology team operations were expanded during a recent modeling project which took place during the 2006 Winter Olympic Games. The meteorology team took advantage of special weather observation datasets available in the domain of the Winter Olympic venues and undertook a project to improve weather modeling at high resolution. The varied and complex terrain provided a special challenge to the
NASA Technical Reports Server (NTRS)
Butler, David R.; Walsh, Stephen J.; Brown, Daniel G.
1991-01-01
Methods are described for using Landsat Thematic Mapper digital data and digital elevation models for the display of natural hazard sites in a mountainous region of northwestern Montana, USA. Hazard zones can be easily identified on the three-dimensional images. Proximity of facilities such as highways and building locations to hazard sites can also be easily displayed. A temporal sequence of Landsat TM (or similar) satellite data sets could also be used to display landscape changes associated with dynamic natural hazard processes.
NASA Astrophysics Data System (ADS)
Mergili, Martin; Schneider, Demian; Andres, Norina; Worni, Raphael; Gruber, Fabian; Schneider, Jean F.
2010-05-01
Lake outburst floods can evolve from complex process chains like avalanches of rock or ice that produce flood waves in a lake, which may overtop and eventually breach glacial, morainic, landslide, or artificial dams. Rising lake levels can lead to progressive incision and destabilization of a dam, to enhanced ground water flow (piping), or even to hydrostatic failure of ice dams, which can cause sudden outflow of accumulated water. These events often have a highly destructive potential because a large amount of water is released in a short time, with a high capacity to erode loose debris, leading to a powerful debris flow with a long travel distance. The best-known example of a lake outburst flood is the Vajont event (Northern Italy, 1963), where a landslide rushed into an artificial lake which spilled over and caused a flood leading to almost 2000 fatalities. Hazards from the failure of landslide dams are often (not always) fairly manageable: most breaches occur in the first few days or weeks after the landslide event, and the rapid construction of a spillway, though problematic, has resolved some hazardous situations (e.g. in the case of the Hattian landslide in 2005 in Pakistan). Older dams, like Usoi dam (Lake Sarez) in Tajikistan, are usually fairly stable, though landslides into the lakes may create flood waves overtopping and eventually weakening the dams. The analysis and mitigation of glacial lake outburst flood (GLOF) hazard remain a challenge. A number of GLOFs resulting in fatalities and severe damage have occurred during the previous decades, particularly in the Himalayas and in the mountains of Central Asia (Pamir, Tien Shan). The source area is usually far away from the area of impact and events occur at very long intervals or as singularities, so that the population at risk is usually not prepared. Even though potentially hazardous lakes can be identified relatively easily with remote sensing and field work, modeling and predicting of GLOFs (and also
Financial Distress Prediction Using Discrete-time Hazard Model and Rating Transition Matrix Approach
NASA Astrophysics Data System (ADS)
Tsai, Bi-Huei; Chang, Chih-Huei
2009-08-01
Previous studies used a constant cut-off indicator to distinguish distressed firms from non-distressed ones in one-stage prediction models. However, the distressed cut-off indicator must shift according to economic prosperity rather than remaining fixed over time. This study focuses on Taiwanese listed firms and develops financial distress prediction models based upon a two-stage method. First, this study employs firm-specific financial ratios and market factors to measure the probability of financial distress based on discrete-time hazard models. Second, this paper further focuses on macroeconomic factors and applies a rating transition matrix approach to determine the distressed cut-off indicator. The prediction models are developed using the training sample from 1987 to 2004, and their levels of accuracy are compared with the test sample from 2005 to 2007. As for the one-stage prediction model, the model incorporating macroeconomic factors does not perform better than the one without them, suggesting that accuracy is not improved for one-stage models that pool firm-specific and macroeconomic factors together. With regard to the two-stage models, the negative credit cycle index implies worse economic conditions during the test period, so the distressed cut-off point is adjusted upward based on this negative credit cycle index. When the two-stage models employ the adjusted cut-off point to discriminate distressed firms from non-distressed ones, their misclassification error is lower than that of the one-stage models. The two-stage models presented in this paper thus have incremental usefulness in predicting financial distress.
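A first-stage discrete-time hazard model of this kind can be sketched as a logistic hazard on firm-period covariates, with survival over several periods given by the product of (1 - hazard). The covariate names and coefficients below are hypothetical, not the paper's estimates:

```python
import math

def discrete_hazard(x, beta):
    """Discrete-time hazard: probability of entering distress in period t,
    given survival to t, via a logistic link on covariates x."""
    z = sum(b * v for b, v in zip(beta, x))
    return 1.0 / (1.0 + math.exp(-z))

def survival_path(covariate_path, beta):
    """Probability the firm survives all periods: prod_t (1 - h_t)."""
    p = 1.0
    for x in covariate_path:
        p *= 1.0 - discrete_hazard(x, beta)
    return p

# Hypothetical coefficients on (intercept, leverage, excess market return)
beta = (-4.0, 3.0, -1.5)
path = [(1.0, 0.4, 0.05), (1.0, 0.6, -0.10)]  # two invented firm-quarters
p_survive = survival_path(path, beta)
```

Estimation reduces to a logistic regression on the pooled firm-period observations, which is what makes the discrete-time hazard approach convenient for panels of listed firms.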
Probabilistic forecasts of debris-flow hazard at the regional scale with a combination of models.
NASA Astrophysics Data System (ADS)
Malet, Jean-Philippe; Remaître, Alexandre
2015-04-01
Debris flows are one of the many active slope-forming processes in the French Alps, where rugged and steep slopes mantled by various slope deposits offer great potential for triggering hazardous events. A quantitative assessment of debris-flow hazard requires estimating, in a probabilistic framework, the spatial probability of occurrence of source areas, the spatial probability of runout areas, the temporal frequency of events, and their intensity. The main objective of this research is to propose a pipeline for the estimation of these quantities at the regional scale using a chain of debris-flow models. The methodology is developed and validated at the experimental site of the Barcelonnette Basin (South French Alps), where 26 active torrents have produced more than 150 debris-flow events since 1850. First, a susceptibility assessment is performed to identify the debris-flow-prone source areas. The most frequently used approach is the combination of environmental factors with GIS procedures and statistical techniques, with or without detailed event inventories. Based on a 5 m DEM and its derivatives, and on information about slope lithology, engineering soils and land cover, the possible source areas are identified with a statistical logistic regression model. The performance of the statistical model is evaluated against the observed distribution of debris-flow events recorded after 1850 in the study area. The source areas in the three most active torrents (Riou-Bourdoux, Faucon, Sanières) are well identified by the model. Results are less convincing for three other active torrents (Bourget, La Valette and Riou-Chanal); this could be related to the type of debris-flow triggering mechanism, as the model seems to better spot the open-slope debris-flow source areas (e.g. scree slopes) but appears to be less efficient at identifying landslide-induced debris flows. Second, a susceptibility assessment is performed to estimate the possible runout distance
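Evaluating a susceptibility model against an event inventory, as in the validation step above, is often summarized by the fraction of observed source cells captured by the most susceptible portion of the raster. A toy sketch in which the susceptibility scores and event cells are invented:

```python
def validate_susceptibility(scores, event_cells, top_fraction=0.2):
    """Fraction of observed source cells that fall within the top
    `top_fraction` most susceptible raster cells -- a simple
    success-rate check of a statistical model against an inventory."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    top = set(ranked[: max(1, int(len(scores) * top_fraction))])
    hits = sum(1 for c in event_cells if c in top)
    return hits / len(event_cells)

# Hypothetical 10-cell raster with 3 recorded source cells
scores = [0.9, 0.1, 0.8, 0.2, 0.3, 0.7, 0.05, 0.4, 0.6, 0.15]
events = [0, 2, 7]
rate = validate_susceptibility(scores, events, top_fraction=0.3)
```

Sweeping `top_fraction` from 0 to 1 traces out a success-rate curve, a common way to compare logistic-regression susceptibility maps.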
Numerical modelling for real-time forecasting of marine oil pollution and hazard assessment
NASA Astrophysics Data System (ADS)
De Dominicis, Michela; Pinardi, Nadia; Bruciaferri, Diego; Liubartseva, Svitlana
2015-04-01
(MEDESS4MS) system, an integrated operational multi-model oil spill prediction service that can be used by different users to run simulations of oil spills at sea, even in real time, through a web portal. The MEDESS4MS system gathers different oil spill modelling systems and data from meteorological and ocean forecasting systems, as well as operational information on response equipment, together with environmental and socio-economic sensitivity maps. MEDSLIK-II has also been used to provide an assessment of the hazard stemming from operational oil discharges from ships in the Southern Adriatic and Northern Ionian (SANI) Seas. Operational pollution from ships constitutes a movable hazard whose magnitude changes dynamically as a result of a number of external parameters varying in space and time (temperature, wind, sea currents). Simulations of oil releases have been performed with realistic oceanographic currents, and the results show that the oil pollution hazard distribution has an inherent spatial and temporal variability related to the specific flow field variability.
van der Sloot, H A; Kosson, D S
2012-03-15
In many jurisdictions, the evaluation of the hazardous nature of a waste is frequently based on total composition, although in most cases the chemical form of the constituents and the release pathways that may result in exposure of humans and other organisms under conditions of handling, transport, disposal or beneficial use are the most important factors controlling potential environmental impact. Thus, leaching assessment related to possible management scenarios, rather than total content, can provide a much more robust basis for evaluating health and environmental risks for waterborne pathways. Standardized characterisation leaching tests based on the intrinsic characteristics of a material provide a new foundation for the needed decisions. Chemical speciation modelling using characterisation test results provides a means to identify the mechanisms controlling constituent release, including mineral or sorptive phases, and thus insights into the long-term release behaviour of the material and approaches to reducing potential impacts. PMID:21531504
A hazards-model analysis of the covariates of infant and child mortality in Sri Lanka.
Trussell, J; Hammerslough, C
1983-02-01
The purpose of this paper is twofold: (a) to provide a complete self-contained exposition of estimating life tables with covariates through the use of hazards models, and (b) to illustrate this technique with a substantive analysis of child mortality in Sri Lanka, thereby demonstrating that World Fertility Survey data are a valuable source for the study of child mortality. We show that life tables with covariates can be easily estimated with standard computer packages designed for analysis of contingency tables. The substantive analysis confirms and supplements an earlier study of infant and child mortality in Sri Lanka by Meegama. Those factors found to be strongly associated with mortality are mother's and father's education, time period of birth, urban/rural/estate residence, ethnicity, sex, birth order, age of the mother at the birth, and type of toilet facility. PMID:6832431
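Life tables of the kind estimated here build interval hazards q_x = d_x / n_x from grouped deaths and exposure within each covariate stratum, then chain them into survivorship. A minimal sketch with invented counts (not the Sri Lanka WFS data):

```python
def life_table(deaths, at_risk):
    """Actuarial life table from grouped data: interval hazard
    q_x = d_x / n_x and cumulative survivorship l_x = prod(1 - q_x)."""
    survivorship = [1.0]
    for d, n in zip(deaths, at_risk):
        q = d / n
        survivorship.append(survivorship[-1] * (1.0 - q))
    return survivorship

# Hypothetical counts for one covariate stratum (e.g. rural residence):
# deaths and children at risk in three successive age intervals
deaths = [50, 20, 10]
at_risk = [1000, 900, 850]
lx = life_table(deaths, at_risk)
```

Fitting covariates then amounts to modelling the q_x (e.g. with a logit link) across strata, which is why the authors can estimate such tables with standard contingency-table software.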
NASA Astrophysics Data System (ADS)
Patra, A. K.; Connor, C.; Webley, P.; Jones, M.; Charbonnier, S. J.; Connor, L.; Gallo, S.; Bursik, M. I.; Valentine, G.; Hughes, C. G.; Aghakhani, H.; Renschler, C. S.; Kosar, T.
2014-12-01
We report here on an effort to improve the sustainability, robustness and usability of the core modeling and simulation tools housed in the collaboratory VHub.org and used in the study of complex volcanic behavior. In particular, we focus on tools that support large-scale mass flows (TITAN2D), ash deposition/transport and dispersal (Tephra2 and PUFF), and lava flows (Lava2). These tools have become very popular in the community, especially owing to the availability of an online usage modality. The redevelopment of the tools to take advantage of new hardware and software advances was a primary thrust of the effort. However, as we started work we reoriented the effort to also take advantage of significant new opportunities for supporting the complex workflows and use of distributed data resources that will enable effective and efficient hazard analysis.
Risk assessment framework of fate and transport models applied to hazardous waste sites
Hwang, S.T.
1993-06-01
Risk assessment is an increasingly important part of the decision-making process in the cleanup of hazardous waste sites. Despite guidelines from regulatory agencies and considerable research efforts to reduce uncertainties in risk assessments, many issues remain unanswered. This paper presents new research results pertaining to fate and transport models, which will be useful in estimating exposure concentrations and will help reduce uncertainties in risk assessment. These developments include approaches for estimating (1) the degree of emissions and concentration levels of volatile pollutants during the use of contaminated water, (2) absorption of organic chemicals from the soil matrix through the skin, and (3) steady-state, near-field contaminant concentrations in the aquifer within a waste boundary.
Coupling Radar Rainfall Estimation and Hydrological Modelling For Flash-flood Hazard Mitigation
NASA Astrophysics Data System (ADS)
Borga, M.; Creutin, J. D.
Flood risk mitigation is accomplished through managing either or both the hazard and the vulnerability. Flood hazard may be reduced through structural measures which alter the frequency of flood levels in the area. The vulnerability of a community to flood loss can be mitigated through changing or regulating land use and through flood warning and effective emergency response. When dealing with flash-flood hazard, it is generally accepted that the most effective way (and in many instances the only one affordable in a sustainable perspective) to mitigate the risk is by reducing the vulnerability of the involved communities, in particular by implementing flood warning systems and community self-help programs. However, both the inherent characteristics of the atmospheric and hydrologic processes involved in flash-flooding and the changing societal needs provide a tremendous challenge to traditional flood forecasting and warning concepts. In fact, the targets of these systems are traditionally localised, like urbanised sectors or hydraulic structures. Given the small spatial scale that characterises flash floods and the development of dispersed urbanisation, transportation, green tourism and water sports, human lives and property are exposed to flash flood risk in a scattered manner. This must be taken into consideration in flash flood warning strategies, and the investigated region should be considered as a whole, with every section of the drainage network a potential target for hydrological warnings. Radar technology offers the potential to provide information describing rain intensities almost continuously in time and space. Recent research results indicate that coupling radar information to distributed hydrologic modelling can provide hydrologic forecasts at all potentially flooded points of a region. Nevertheless, very few flood warning services use radar data more than on a qualitative basis. After a short review of current understanding in this area, two
NASA Astrophysics Data System (ADS)
Allen, S. K.; Schneider, D.; Owens, I. F.
2009-03-01
Flood and mass movements originating from glacial environments are particularly devastating in populated mountain regions of the world, but in the remote Mount Cook region of New Zealand's Southern Alps minimal attention has been given to these processes. Glacial environments are characterized by high mass turnover, and, combined with changing climatic conditions, potential problems and process interactions can evolve rapidly. Remote-sensing-based terrain mapping, geographic information systems and flow path modelling are integrated here to explore the extent of ice avalanche, debris flow and lake flood hazard potential in the Mount Cook region. Numerous proglacial lakes have formed during recent decades, but well-vegetated, low-gradient outlet areas suggest that catastrophic dam failure and flooding are unlikely. However, potential impacts from incoming mass movements of ice, debris or rock could lead to dam overtopping, particularly where lakes are forming directly beneath steep slopes. Physically based numerical modeling with RAMMS was introduced for local-scale analyses of rock avalanche events and was shown to be a useful tool for establishing accurate flow path dynamics and estimating potential event magnitudes. Potential debris flows originating from steep moraine and talus slopes can reach roads and built infrastructure when worst-case runout distances are considered, while potential effects from ice avalanches are limited to walking tracks and alpine huts located close to the initiation zones of steep ice. Further local-scale studies of these processes are required, leading towards a full hazard assessment, and changing glacial conditions over the coming decades will necessitate ongoing monitoring and reassessment of initiation zones and potential impacts.
Boissonnade, A; Hossain, Q; Kimball, J
2000-07-20
Since the mid-1980s, assessment of the wind and tornado risks at the Department of Energy (DOE) high and moderate hazard facilities has been based on the straight wind/tornado hazard curves given in UCRL-53526 (Coats, 1985). These curves were developed using a methodology that utilized a model, developed by McDonald, for severe winds at sub-tornado wind speeds and a separate model, developed by Fujita, for tornado wind speeds. For DOE sites not covered in UCRL-53526, wind and tornado hazard assessments are based on the criteria outlined in DOE-STD-1023-95 (DOE, 1996), utilizing the methodology in UCRL-53526. Subsequent to the publication of UCRL-53526, in a study sponsored by the Nuclear Regulatory Commission (NRC), the Pacific Northwest Laboratory developed tornado wind hazard curves for the contiguous United States, NUREG/CR-4461 (Ramsdell, 1986). Because of the different modeling assumptions and underlying data used to develop the tornado wind information, the wind speeds at specified exceedance levels, at a given location, based on the methodology in UCRL-53526 differ from those based on the methodology in NUREG/CR-4461. In 1997, Lawrence Livermore National Laboratory (LLNL) was funded by the DOE to review the current methodologies for characterizing tornado wind hazards and to develop a state-of-the-art wind/tornado characterization methodology based on probabilistic hazard assessment techniques and current historical wind data. This report describes the process of developing the methodology and the database of relevant tornado information needed to implement the methodology. It also presents the tornado wind hazard curves obtained from the application of the method to DOE sites throughout the contiguous United States.
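A hazard curve of the kind described above pairs a wind speed with its annual probability of exceedance. A minimal sketch of how one point on such a curve can be obtained from a historical record, assuming Poisson event arrivals (the record and threshold below are hypothetical, not DOE site data):

```python
import math

def annual_exceedance_prob(event_speeds_per_year, threshold):
    """Empirical annual rate of winds at or above `threshold`, converted
    to an annual exceedance probability under a Poisson arrival model."""
    years = len(event_speeds_per_year)
    exceedances = sum(
        1 for yearly in event_speeds_per_year for v in yearly if v >= threshold
    )
    rate = exceedances / years          # events per year above threshold
    return 1.0 - math.exp(-rate)        # P(at least one exceedance in a year)

# Hypothetical 4-year record of peak-gust observations (m/s)
record = [[31, 45], [28], [52, 33, 41], [38]]
p50 = annual_exceedance_prob(record, 50.0)
```

Repeating the calculation over a range of thresholds traces out the full hazard curve; real assessments additionally fit extreme-value distributions rather than relying on raw counts.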
Development of algal interspecies correlation estimation models for chemical hazard assessment.
Brill, Jessica L; Belanger, Scott E; Chaney, Joel G; Dyer, Scott D; Raimondo, Sandy; Barron, Mace G; Pittinger, Charles A
2016-09-01
Web-based Interspecies Correlation Estimation (ICE) is an application developed to predict the acute toxicity of a chemical from one species to another taxon. Web-ICE models use the acute toxicity value for a surrogate species to predict effect values for other species, thus potentially filling in data gaps for a variety of environmental assessment purposes. Web-ICE has historically been dominated by aquatic and terrestrial animal prediction models. Web-ICE models for algal species were essentially absent and are addressed in the present study. Public and private sector-held algal toxicity data were compiled and reviewed for quality based on relevant aspects of individual studies. Interspecies correlations were constructed from the most commonly tested algal genera for a broad spectrum of chemicals. The ICE regressions were developed based on acute 72-h and 96-h endpoint values involving 1647 unique studies on 476 unique chemicals encompassing 40 genera and 70 species of green, blue-green, and diatom algae. Acceptance criteria for algal ICE models were established prior to evaluation of individual models and included a minimum sample size of 3, a statistically significant regression slope, and a slope estimation parameter ≥0.65. A total of 186 ICE models were possible at the genus level, with 21 meeting quality criteria; and 264 ICE models were developed at the species level, with 32 meeting quality criteria. Algal ICE models will have broad utility in screening environmental hazard assessments, data gap filling in certain regulatory scenarios, and as supplemental information to derive species sensitivity distributions. Environ Toxicol Chem 2016;35:2368-2378. Published 2016 Wiley Periodicals Inc. on behalf of SETAC. This article is a US government work and, as such, is in the public domain in the United States of America. PMID:26792236
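ICE regressions generally take a log-log linear form: the predicted taxon's toxicity endpoint is regressed on the surrogate species' endpoint. A minimal sketch with a least-squares fit and the minimum-sample-size criterion from the study (the toxicity values below are hypothetical):

```python
import math

def fit_ice_model(surrogate_ec50, predicted_ec50):
    """Least-squares fit of log10(predicted) = a + b * log10(surrogate),
    the general form of an ICE regression."""
    xs = [math.log10(v) for v in surrogate_ec50]
    ys = [math.log10(v) for v in predicted_ec50]
    n = len(xs)
    if n < 3:                      # minimum sample size used in the study
        raise ValueError("ICE models require at least 3 chemical pairs")
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx                  # regression slope
    a = my - b * mx                # intercept
    return a, b

def predict_ec50(a, b, surrogate_value):
    """Back-transform the log10 prediction to a toxicity value."""
    return 10 ** (a + b * math.log10(surrogate_value))
```

A full implementation would also test the slope for statistical significance before accepting the model, per the acceptance criteria described above.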
A comparative analysis of hazard models for predicting debris flows in Madison County, VA
Morrissey, Meghan M.; Wieczorek, Gerald F.; Morgan, Benjamin A.
2001-01-01
During the rainstorm of June 27, 1995, roughly 330-750 mm of rain fell within a sixteen-hour period, initiating floods and over 600 debris flows in a small area (130 km2) of Madison County, Virginia. Field studies showed that the majority (70%) of these debris flows initiated with a thickness of 0.5 to 3.0 m in colluvium on slopes from 17° to 41° (Wieczorek et al., 2000). This paper evaluated and compared the approaches of SINMAP, LISA, and Iverson's (2000) transient response model for slope stability analysis by applying each model to the landslide data from Madison County. Of these three stability models, only Iverson's transient response model evaluated stability conditions as a function of time and depth. Iverson's model would be the preferred method of the three models to evaluate landslide hazards on a regional scale in areas prone to rain-induced landslides as it considers both the transient and spatial response of pore pressure in its calculation of slope stability. The stability calculation used in SINMAP and LISA is similar and utilizes probability distribution functions for certain parameters. Unlike SINMAP, which only considers soil cohesion, internal friction angle and rainfall-rate distributions, LISA allows the use of distributed data for all parameters, so it is the preferred model to evaluate slope stability over SINMAP. Results from all three models suggested similar soil and hydrologic properties for triggering the landslides that occurred during the 1995 storm in Madison County, Virginia. The colluvium probably had cohesion of less than 2 kPa. The root-soil system is above the failure plane and consequently root strength and tree surcharge had negligible effect on slope stability. The result that the final location of the water table was near the ground surface is supported by the water budget analysis of the rainstorm conducted by Smith et al. (1996).
NASA Astrophysics Data System (ADS)
Mishra, Kirti Bhushan
2015-09-01
A volumetric source based CFD (Computational Fluid Dynamics) model for estimating the wind and gravity driven spread of an elevated released dense hazardous cloud on a flat terrain without and with obstacles is demonstrated. The model considers the development of a worst-case scenario similar to that which occurred at Bhopal. Fully developed clouds of a dense gas having different densities, under ABL (Atmospheric Boundary Layer) with calm ground wind conditions are first obtained. These clouds are then allowed to spread under ABL with different ground wind speeds and gravity conditions. The developed model is validated through a grid-independence study, fluid dynamical evidence, post-disaster facts, the downwind MIC (Methyl Isocyanate) concentrations estimated by earlier models, and experiments on dense plume trajectories. It is shown that in case of an active dispersion under calm wind conditions the lateral spread would prevail over the downwind spread. The presence of a dense medium behaves like a weak porous medium and initiates turbulence at much smaller downwind distances than would normally occur without the dense medium. The safety distances from toxic exposures of MIC are predicted by specifying an isosurface of a minimum concentration above the ground surface. Discrepancies in near-field predictions still exist. However, the far-field predictions agree well with data published before.
Lava flow hazard modeling during the 2014-2015 Fogo eruption, Cape Verde
NASA Astrophysics Data System (ADS)
Cappello, Annalisa; Ganci, Gaetana; Calvari, Sonia; Pérez, Nemesio M.; Hernández, Pedro A.; Silva, Sónia V.; Cabral, Jeremias; Del Negro, Ciro
2016-04-01
Satellite remote sensing techniques and lava flow forecasting models have been combined to enable a rapid response during effusive crises at poorly monitored volcanoes. Here we used the HOTSAT satellite thermal monitoring system and the MAGFLOW lava flow emplacement model to forecast lava flow hazards during the 2014-2015 Fogo eruption. In many ways this was one of the major effusive eruption crises of recent years, since the lava flows actually invaded populated areas. Combining satellite data and modeling allowed mapping of the probable evolution of lava flow fields while the eruption was ongoing and rapidly gaining as much relevant information as possible. HOTSAT was used to promptly analyze MODIS and SEVIRI data to output hot spot location, lava thermal flux, and effusion rate estimation. This output was used to drive the MAGFLOW simulations of lava flow paths and to continuously update flow simulations. We also show how Landsat 8 OLI and EO-1 ALI images complement the field observations for tracking the flow front position through time and adding considerable data on lava flow advancement to validate the results of numerical simulations. The integration of satellite data and modeling offers great promise in providing a unified and efficient system for global assessment and real-time response to effusive eruptions, including (i) the current state of the effusive activity, (ii) the probable evolution of the lava flow field, and (iii) the potential impact of lava flows.
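Systems like HOTSAT derive effusion rates by converting satellite-measured radiant power into a time-averaged lava discharge rate via the thermal energy budget of the flow. A minimal sketch of one common thermal-proxy form; all physical parameter values below are illustrative defaults, not the coefficients calibrated for Fogo:

```python
def time_averaged_discharge_rate(radiant_power_w, rho=2600.0, cp=1150.0,
                                 delta_t=250.0, latent=2.9e5,
                                 crystal_frac=0.45):
    """Convert radiant power (W) to a time-averaged lava discharge
    rate (m^3/s): divide the radiated power by the heat released per
    cubic metre of lava (cooling through delta_t K plus latent heat of
    crystallisation over the crystallising fraction)."""
    energy_per_m3 = rho * (cp * delta_t + latent * crystal_frac)  # J/m^3
    return radiant_power_w / energy_per_m3
```

The resulting rate is what drives flow-emplacement simulations such as MAGFLOW; uncertainty in the coefficients is usually propagated by bracketing the estimate with minimum and maximum parameter sets.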
Ryu, Hyeuk; Luco, Nicolas; Baker, Jack W.; Karaca, Erdem
2008-01-01
A methodology was recently proposed for the development of hazard-compatible building fragility models using parameters of capacity curves and damage state thresholds from HAZUS (Karaca and Luco, 2008). In the methodology, HAZUS curvilinear capacity curves were used to define nonlinear dynamic SDOF models that were subjected to the nonlinear time history analysis instead of the capacity spectrum method. In this study, we construct a multilinear capacity curve with negative stiffness after an ultimate (capping) point for the nonlinear time history analysis, as an alternative to the curvilinear model provided in HAZUS. As an illustration, here we propose parameter values of the multilinear capacity curve for a moderate-code low-rise steel moment resisting frame building (labeled S1L in HAZUS). To determine the final parameter values, we perform nonlinear time history analyses of SDOF systems with various parameter values and investigate their effects on resulting fragility functions through sensitivity analysis. The findings improve capacity curves and thereby fragility and/or vulnerability models for generic types of structures.
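The multilinear capacity curve described above (elastic branch to yield, hardening to an ultimate capping point, negative stiffness beyond it) can be sketched as a simple piecewise-linear backbone function. Parameter names and the residual plateau are illustrative, not the S1L values proposed in the study:

```python
def multilinear_capacity(d, d_yield, a_yield, d_ult, a_ult, d_res, a_res):
    """Spectral acceleration at spectral displacement d for a multilinear
    capacity curve: linear to the yield point, hardening to the ultimate
    (capping) point, negative stiffness down to a residual plateau."""
    if d <= d_yield:
        return a_yield * d / d_yield
    if d <= d_ult:
        return a_yield + (a_ult - a_yield) * (d - d_yield) / (d_ult - d_yield)
    if d <= d_res:
        return a_ult + (a_res - a_ult) * (d - d_ult) / (d_res - d_ult)
    return a_res
```

In a nonlinear time history analysis this backbone defines the restoring force of the SDOF oscillator; sensitivity of the resulting fragility functions to the capping and residual parameters is exactly what the study investigates.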
NASA Astrophysics Data System (ADS)
Weatherill, Graeme; Garcia, Julio; Poggi, Valerio; Chen, Yen-Shin; Pagani, Marco
2016-04-01
The Global Earthquake Model (GEM) has, since its inception in 2009, made many contributions to the practice of seismic hazard modeling in different regions of the globe. The OpenQuake-engine (hereafter referred to simply as OpenQuake), GEM's open-source software for calculation of earthquake hazard and risk, has found application in many countries, spanning a diversity of tectonic environments. GEM itself has produced a database of national and regional seismic hazard models, harmonizing into OpenQuake's own definition the varied seismogenic sources found therein. The characterization of active faults in probabilistic seismic hazard analysis (PSHA) is at the centre of this process, motivating many of the developments in OpenQuake and presenting hazard modellers with the challenge of reconciling seismological, geological and geodetic information for the different regions of the world. Faced with these challenges, and from the experience gained in the process of harmonizing existing models of seismic hazard, four critical issues are addressed. The challenge GEM has faced in the development of software is how to define a representation of an active fault (both in terms of geometry and earthquake behaviour) that is sufficiently flexible to adapt to different tectonic conditions and levels of data completeness. By exploring the different fault typologies supported by OpenQuake we illustrate how seismic hazard calculations can, and do, take into account complexities such as geometrical irregularity of faults in the prediction of ground motion, highlighting some of the potential pitfalls and inconsistencies that can arise. This exploration leads to the second main challenge in active fault modeling, what elements of the fault source model impact most upon the hazard at a site, and when does this matter? Through a series of sensitivity studies we show how different configurations of fault geometry, and the corresponding characterisation of near-fault phenomena (including
A global vegetation corrected SRTM DEM for use in hazard modelling
NASA Astrophysics Data System (ADS)
Bates, P. D.; O'Loughlin, F.; Neal, J. C.; Durand, M. T.; Alsdorf, D. E.; Paiva, R. C. D.
2015-12-01
We present the methodology and results from the development of a near-global 'bare-earth' Digital Elevation Model (DEM) derived from the Shuttle Radar Topography Mission (SRTM) data. Digital Elevation Models are the most important input for hazard modelling, as the DEM quality governs the accuracy of the model outputs. While SRTM is currently the best near-globally [60N to 60S] available DEM, it requires adjustments to reduce the vegetation contamination and make it useful for hazard modelling over heavily vegetated areas (e.g. tropical wetlands). Unlike previous methods of accounting for vegetation contamination, which concentrated on correcting relatively small areas and usually applied a static adjustment, we account for vegetation contamination globally and apply a spatially varying correction, based on information about canopy height and density. Our new 'Bare-Earth' SRTM DEM combines multiple remote sensing datasets, including ICESat GLA14 ground elevations, the vegetation continuous field dataset as a proxy for penetration depth of SRTM and a global vegetation height map, to remove the vegetation artefacts present in the original SRTM DEM. In creating the final 'bare-earth' SRTM DEM dataset, we produced three different 'bare-earth' SRTM products. The first applies global parameters, while the second and third products apply parameters that are regionalised based on either climatic zones or vegetation types, respectively. We also tested two different canopy density proxies of different spatial resolution. Using ground elevations obtained from the ICESat GLA14 satellite altimeter, we calculate the residual errors for the raw SRTM and the three 'bare-earth' SRTM products and compare performances. The three 'bare-earth' products all show large improvements over the raw SRTM in vegetated areas, with the overall mean bias reduced by between 75 and 92% from 4.94 m to 0.40 m. The overall standard deviation is reduced by between 29 and 33% from 7.12 m to 4.80 m. As
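The correction described above amounts to subtracting, cell by cell, a vegetation bias estimated from canopy height and cover density. A minimal per-cell sketch; the penetration parameter `k` is an illustrative single value, not the global or regionalised parameters derived in the study:

```python
def correct_srtm_cell(srtm_elev, canopy_height, veg_cover_frac, k=0.6):
    """Remove vegetation bias from one SRTM cell (all values in metres).

    The C-band radar phase centre sits partway down the canopy, so the
    artefact is modelled as k * canopy_height, scaled by the vegetation
    continuous-field cover fraction of the cell.
    """
    bias = k * canopy_height * veg_cover_frac
    return srtm_elev - bias
```

In the actual product, `k` is effectively regionalised (by climate zone or vegetation type) and validated against ICESat GLA14 ground elevations, which is how the residual-error statistics quoted above were obtained.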
Mathematical models and methods of risk assessment in ecologically hazardous industries
Mikhalevich, V.S.; Knopov, P.S.; Golodnikov, A.N.
1994-11-01
Analysis of critical industrial situations leading to accidents or catastrophes has shown that the main factors responsible for accidents include technological inadequacy of ecologically hazardous facilities, equipment design errors, and insufficient preventive maintenance of facilities with an enhanced level of environmental hazard. The scale of the accident after-effects essentially depends on the location of the ecologically hazardous facility, timely development of preventive measures, and prompt implementations of these measures in emergency in compliance with strict deadlines for decision making.
NASA Astrophysics Data System (ADS)
Zhong, Q.; Shi, B.; Meng, L.
2010-12-01
North China is one of the most seismically active regions in mainland China. Moderate to large earthquakes have occurred here throughout history, resulting in huge losses of human life and property. With the probabilistic seismic hazard analysis (PSHA) approach, we investigate the influence of different seismic environments, incorporating both near surface soil properties and distributed historical and modern seismicity. A simplified seismic source model, derived with the consideration of regional active fault distributions, is presented for the North China region. The spatially distributed seismicity model of PSHA is used to calculate the level of ground motion likely to be exceeded in a given time period. Following Frankel's (1995) circular Gaussian smoothing procedure, in the PSHA calculation we propose a fault-rupture-oriented elliptical Gaussian smoothing, under the assumption that earthquakes occur on faults or in fault zones of past earthquakes, to delineate the potential seismic zones (Lapajine et al., 2003). This is combined with regional active fault strike directions and the seismicity distribution patterns. A Next Generation Attenuation (NGA) model (Boore et al., 2007) is used to generate hazard maps for PGA with 2%, 5%, and 10% probability of being exceeded in 50 years, and the resultant hazard map is compared with the result given by the Global Seismic Hazard Assessment Project (GSHAP). There is general agreement for PGA distribution patterns between the results of this study and the GSHAP map that used the same seismic source zones. However, peak ground accelerations predicted in this study are typically 10-20% less than those of the GSHAP, and the seismic source models, such as fault distributions and regional seismicity used in the GSHAP, seem to be oversimplified. We believe this study represents an improvement on prior seismic hazard evaluations for the region. In addition to the updated input data, we believe that, by
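In Frankel's (1995) smoothed-seismicity approach, gridded earthquake counts are redistributed with a circular Gaussian kernel of correlation distance c before being converted to activity rates. A minimal sketch of the circular version (the elliptical, fault-strike-oriented variant proposed above replaces the isotropic distance with one stretched along strike); grid values and distances below are hypothetical:

```python
import math

def gaussian_smooth(counts, cell_km, c_km):
    """Frankel (1995)-style smoothing: each cell's smoothed count is the
    kernel-weighted average of all cells, with Gaussian weights
    exp(-d^2 / c^2) over inter-cell distance d."""
    ny, nx = len(counts), len(counts[0])
    out = [[0.0] * nx for _ in range(ny)]
    for i in range(ny):
        for j in range(nx):
            num = den = 0.0
            for k in range(ny):
                for l in range(nx):
                    d2 = ((i - k) ** 2 + (j - l) ** 2) * cell_km ** 2
                    w = math.exp(-d2 / c_km ** 2)
                    num += counts[k][l] * w
                    den += w
            out[i][j] = num / den
    return out
```

A spatially uniform count field is left unchanged by this operator, which is a quick sanity check on any smoothing implementation.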
AschFlow - A dynamic landslide run-out model for medium scale hazard analysis.
NASA Astrophysics Data System (ADS)
Luna, Byron Quan; Blahut, Jan; van Asch, Theo; van Westen, Cees; Kappes, Melanie
2015-04-01
Landslides and debris flow hazard assessments require a scale-dependent analysis in order to mitigate damage and other negative consequences at the respective scales of occurrence. Medium or large scale landslide run-out modelling for many possible landslide initiation areas has been a cumbersome task in the past. This arises from the difficulty of precisely defining the location and volume of the released mass and from the inability of run-out models to compute the displacement for a large number of individual initiation areas (computationally exhaustive). Most of the existing physically based run-out models have complications in handling such situations, and therefore empirical methods have been used as a practical means to predict landslide mobility at a medium scale (1:10,000 to 1:50,000). In this context, a simple medium scale numerical model for rapid mass movements in urban and mountainous areas was developed. The deterministic nature of the approach makes it possible to calculate the velocity, height and increase in mass by erosion, resulting in the estimation of various forms of impacts exerted by debris flows at the medium scale. The established and implemented model ("AschFlow") is a 2-D one-phase continuum model that simulates the entrainment, spreading and deposition processes of a landslide or debris flow at a medium scale. The flow is thus treated as a single phase material, whose behavior is controlled by rheology (e.g. Voellmy or Bingham). The developed regional model "AschFlow" was applied and evaluated in well documented areas with known past debris flow events.
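The Voellmy rheology mentioned above resists flow with two terms: Coulomb friction proportional to the normal stress, plus a turbulent term growing with the square of velocity. A minimal sketch of the basal resistance stress in that form; the parameter values are illustrative defaults, not calibrated AschFlow inputs:

```python
import math

def voellmy_basal_stress(h, v, mu=0.15, xi=300.0, theta_deg=20.0,
                         rho=1800.0, g=9.81):
    """Voellmy basal flow resistance (Pa) for flow depth h (m) and
    velocity v (m/s): mu * (normal stress) + rho * g * v^2 / xi,
    where mu is the dry-friction coefficient and xi (m/s^2) the
    turbulent friction coefficient."""
    theta = math.radians(theta_deg)
    normal_stress = rho * g * h * math.cos(theta)
    return mu * normal_stress + rho * g * v ** 2 / xi
```

At low velocity the Coulomb term dominates (controlling runout distance), while at high velocity the turbulent term dominates (limiting peak flow speed), which is why mu and xi are typically back-calculated together from well-documented past events.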
Simulation of the 1992 Tessina landslide by a cellular automata model and future hazard scenarios
NASA Astrophysics Data System (ADS)
Avolio, MV; Di Gregorio, Salvatore; Mantovani, Franco; Pasuto, Alessandro; Rongo, Rocco; Silvano, Sandro; Spataro, William
Cellular Automata are a powerful tool for modelling natural and artificial systems, which can be described in terms of local interactions of their constituent parts. Some types of landslides, such as debris/mud flows, match these requirements. The 1992 Tessina landslide has characteristics (slow mud flows) which make it appropriate for modelling by means of Cellular Automata, except for the initial phase of detachment, which is caused by a rotational movement that has no effect on the mud flow path. This paper presents the Cellular Automata approach for modelling slow mud/debris flows, the results of simulation of the 1992 Tessina landslide and future hazard scenarios based on the volumes of masses that could be mobilised in the future. They were obtained by adapting the Cellular Automata Model called SCIDDICA, which has been validated for very fast landslides. SCIDDICA was applied by modifying the general model to the peculiarities of the Tessina landslide. The simulations obtained by this initial model were satisfactory for forecasting the surface covered by mud. Calibration of the model, which was obtained from simulation of the 1992 event, was used for forecasting flow expansion during possible future reactivation. For this purpose two simulations concerning the collapse of about 1 million m3 of material were tested. In one of these, the presence of a containment wall built in 1992 for the protection of the Tarcogna hamlet was inserted. The results obtained identified the conditions of high risk affecting the villages of Funes and Lamosano and show that this Cellular Automata approach can have a wide range of applications for different types of mud/debris flows.
Modeling survival in colon cancer: a methodological review
Ahmed, Farid E; Vos, Paul W; Holbert, Don
2007-01-01
The Cox proportional hazards model is the most widely used model for survival analysis because of its simplicity. The fundamental assumption in this model is the proportionality of the hazard function. When this condition is not met, other modifications or other models must be used for analysis of survival data. We illustrate in this review several methodological approaches to deal with the violation of the proportionality assumption, using survival in colon cancer as an illustrative example. PMID:17295918
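The Cox model is fit by maximizing the partial likelihood, which compares each subject experiencing an event against the risk set of subjects still under observation at that time. A minimal sketch of the log partial likelihood for a single covariate, assuming no tied event times (the toy data are hypothetical):

```python
import math

def cox_partial_loglik(beta, times, events, x):
    """Log partial likelihood of a one-covariate Cox model,
    h(t|x) = h0(t) * exp(beta * x), with no tied event times.
    Censored subjects contribute only through the risk sets."""
    ll = 0.0
    for i, (ti, di) in enumerate(zip(times, events)):
        if not di:
            continue
        risk = [j for j, tj in enumerate(times) if tj >= ti]
        ll += beta * x[i] - math.log(
            sum(math.exp(beta * x[j]) for j in risk))
    return ll

# Hypothetical data: follow-up time, event indicator (1=event, 0=censored),
# and a binary covariate (e.g. treatment group)
times  = [2.0, 3.0, 5.0, 8.0]
events = [1, 1, 0, 1]
xvals  = [1.0, 0.0, 1.0, 0.0]
```

Maximizing this function over beta (e.g. by Newton's method) gives the hazard ratio estimate exp(beta); note the baseline hazard h0(t) cancels out of the partial likelihood entirely, which is the source of the model's simplicity. Checking proportionality, as the review emphasizes, is a separate diagnostic step (e.g. Schoenfeld residuals).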
Moslehi, Nazanin; Ehsani, Behnaz; Mirmiran, Parvin; Hojjat, Parvane; Azizi, Fereidoun
2015-01-01
We aimed to investigate associations between dietary macronutrient proportions and prospective visceral adiposity index changes (ΔVAI). The study included 1254 adults (18–74 years), from the Tehran Lipid and Glucose Study (TLGS), who were followed for three years. Dietary intakes were assessed twice using food frequency questionnaires. Associations of dietary macronutrients with ΔVAI and risk of visceral adiposity dysfunction (VAD) after three years were investigated. The percentage of energy intake from protein in the total population, and from fat in women, were associated with higher increases in VAI. A 5% higher energy intake from protein substituted for carbohydrate, monounsaturated fatty acids (MUFAs), and polyunsaturated fatty acids (PUFAs) was associated with higher ΔVAI. Higher energy intake from animal protein substituted for PUFAs was positively associated with ΔVAI. Substituting protein and PUFAs with MUFAs was related to higher ΔVAI. The associations were similar in men and women, but reached significance mostly among women. Risk of VAD was increased when 1% of energy from protein was replaced with MUFAs. Substituting protein for carbohydrate and fat, and fat for carbohydrate, resulted in increased risk of VAD in women. Higher dietary proportions of protein and animal-derived MUFA may be positively associated with ΔVAI and risk of VAD. PMID:26516906
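The outcome tracked above, the visceral adiposity index, is a sex-specific function of waist circumference, BMI, triglycerides and HDL cholesterol. A minimal sketch of the commonly used formulation (Amato et al. 2010); the coefficients are quoted from memory and should be checked against the original before any real use:

```python
def visceral_adiposity_index(sex, wc_cm, bmi, tg_mmol, hdl_mmol):
    """Visceral adiposity index. Waist circumference in cm, BMI in
    kg/m^2, triglycerides and HDL cholesterol in mmol/L."""
    if sex == "male":
        return (wc_cm / (39.68 + 1.88 * bmi)) * (tg_mmol / 1.03) * (1.31 / hdl_mmol)
    if sex == "female":
        return (wc_cm / (36.58 + 1.89 * bmi)) * (tg_mmol / 0.81) * (1.52 / hdl_mmol)
    raise ValueError("sex must be 'male' or 'female'")
```

ΔVAI in the study is simply the difference between this index computed at follow-up and at baseline for each participant.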
... you throw these substances away, they become hazardous waste. Some hazardous wastes come from products in our homes. Our garbage can include such hazardous wastes as old batteries, bug spray cans and paint ...
NASA Astrophysics Data System (ADS)
Luzon, Paul Kenneth; Rochelle Montalbo, Kristina; Mahar Francisco Lagmay, Alfredo
2014-05-01
The 2006 Guinsaugon landslide in St. Bernard, Southern Leyte is the largest known mass movement of soil in the Philippines. It consisted of a 15 million m3 rockslide-debris avalanche from an approximately 700 m high escarpment produced by continuous movement of the Philippine fault at approximately 2.5 cm/year. The landslide was preceded by continuous heavy rainfall totaling 571.2 mm from February 8 to 12, 2006. The catastrophic landslide killed more than 1,000 people and displaced 19,000 residents over its 6,400 km path. To investigate the present-day morphology of the scar and potential failure that may occur, an analysis of a high-resolution digital elevation model (10 m resolution Synthetic Aperture Radar images in 2013) was conducted, leading to the generation of a structurally controlled landslide hazard map of the area. Discontinuity sets that could contribute to any failure mechanism were identified using Coltop 3D software, which uses a unique lower Schmidt-Lambert color scheme for any given dip and dip direction, making it easier to find the main morpho-structural orientations. Matterocking, a software designed for structural analysis, was used to generate possible planes that could slide due to the identified discontinuity sets. Conefall was then utilized to compute the extent to which the rock mass will run out. The results showed potential instabilities in the scarp area of the 2006 Guinsaugon landslide and in adjacent slopes because of the presence of steep discontinuities that range from 45-60°. Apart from the potential landslides at the 2006 Guinsaugon scar, Conefall simulation generated farther rock mass extents on adjacent slopes. In conclusion, there is a high probability of landslides in the municipality of St. Bernard, Southern Leyte, where the 2006 Guinsaugon landslide occurred. Concerned agencies may use maps produced from this study for disaster preparedness and to facilitate long-term recovery planning for hazardous areas.
NASA Astrophysics Data System (ADS)
Dellino, Pierfrancesco; de Astis, Gianfilippo; La Volpe, Luigi; Mele, Daniela; Sulpizio, Roberto
2010-05-01
The analysis of stratigraphy and of pyroclastic deposit particle features allowed the reconstruction of the volcanic history of La Fossa di Vulcano. An eruptive scenario driven by superficial phreatomagmatic explosions emerged. A statistical analysis of the pyroclastic successions led to the definition of a repetitive sequence of dilute pyroclastic density currents as the most probable events at short term, followed by fallout of dense ballistic blocks. The scale of such events is related to the amount of magma involved in each explosion. Events involving a million cubic meters of magma are probable in view of what happened in the most recent eruptions. They led to the formation of hundreds-of-meters-thick dilute pyroclastic density currents, moving down the volcano slope at velocities exceeding 50 m/sec. The dispersion of density currents affected the whole Vulcano Porto area, the Vulcanello area and also overrode the Fossa Caldera's rim, spreading over the Piano area. Similarly, older pyroclastic deposits erupted at different times (Piano Grotte dei Rossi formation, ~20-7.7 ka) from vents within La Fossa Caldera and before La Fossa Cone formation. They also were phreatomagmatic in origin and fed dilute pyroclastic density currents (PDC). They represent the eruptions with the highest magnitude on the Island. Therefore, for the aim of hazard assessment, these deposits from La Fossa Cone and La Fossa Caldera were used to depict eruptive scenarios at short term and at long term. On the basis of physical models that make use of pyroclastic deposit particle features, the impact parameters for each scenario have been calculated. They are dynamic pressure and particle volumetric concentration of density currents, and impact energy of ballistic blocks. On this basis, a quantitative hazard map is presented, which could be of direct use for territory planning and for the calculation of the expected damage.
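One of the impact parameters named above, the dynamic pressure of a dilute pyroclastic density current, follows from its velocity and the mixture density set by the particle volumetric concentration. A minimal sketch; the default particle and gas densities are illustrative, not the values calibrated for Vulcano:

```python
def pdc_dynamic_pressure(velocity, particle_conc, rho_particle=2500.0,
                         rho_gas=0.6):
    """Dynamic pressure (Pa) of a dilute pyroclastic density current:
    0.5 * rho_mix * v^2, where the mixture density weights particle
    density by the particle volumetric concentration and hot-gas
    density by the remainder."""
    rho_mix = particle_conc * rho_particle + (1.0 - particle_conc) * rho_gas
    return 0.5 * rho_mix * velocity ** 2
```

Even at the dilute concentrations typical of such currents (of order 0.1% by volume), the 50 m/s velocities cited above yield pressures of several kPa, enough to damage ordinary buildings, which is why dynamic pressure is mapped directly in the hazard assessment.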
Conceptual model of volcanism and volcanic hazards of the region of Ararat valley, Armenia
NASA Astrophysics Data System (ADS)
Meliksetian, Khachatur; Connor, Charles; Savov, Ivan; Connor, Laura; Navasardyan, Gevorg; Manucharyan, Davit; Ghukasyan, Yura; Gevorgyan, Hripsime
2015-04-01
Armenia and the adjacent volcanically active regions in Iran, Turkey and Georgia are located in the collision zone between the Arabian and Eurasian lithospheric plates. The majority of studies of regional collision-related volcanism use the model proposed by Keskin (2003), where volcanism is driven by Neo-Tethyan slab break-off. In Armenia, >500 Quaternary-Holocene volcanoes from the Gegham, Vardenis and Syunik volcanic fields are hosted within pull-apart structures formed by active faults and their segments (Karakhanyan et al., 2002), while the tectonic position of the large-volume basalt-dacite Aragats volcano and peripheral volcanic plateaus is different, and its position away from major fault lines necessitates a more complex volcano-tectonic setup. Our detailed volcanological, petrological and geochemical studies provide insight into the nature of such volcanic activity in the region of Ararat Valley. Most magmas, such as those erupted in Armenia, are volatile-poor and erupt fairly hot. Here we report newly discovered tephra sequences in Ararat valley that were erupted from the historically active Ararat stratovolcano and provide evidence for explosive eruption of young, mid-K2O calc-alkaline and volatile-rich (>4.6 wt% H2O; amph-bearing) magmas. Such young eruptions, in addition to the ignimbrite and lava flow hazards from Gegham and Aragats, present a threat to the >1.4 million people (~ ½ of the population of Armenia). We will report numerical simulations of potential volcanic hazards for the region of Ararat valley near Yerevan, including tephra fallout, lava flows and the opening of new vents. Connor et al. (2012) J. Applied Volcanology 1:3, 1-19; Karakhanian et al. (2002), JVGR, 113, 319-344; Keskin, M. (2003) Geophys. Res. Lett. 30, 24, 8046.
Godt, J.W.; Baum, R.L.; Savage, W.Z.; Salciarini, D.; Schulz, W.H.; Harp, E.L.
2008-01-01
Application of transient deterministic shallow landslide models over broad regions for hazard and susceptibility assessments requires information on rainfall, topography and the distribution and properties of hillside materials. We survey techniques for generating the spatial and temporal input data for such models and present an example using a transient deterministic model that combines an analytic solution to assess the pore-pressure response to rainfall infiltration with an infinite-slope stability calculation. Pore-pressures and factors of safety are computed on a cell-by-cell basis and can be displayed or manipulated in a grid-based GIS. Input data are high-resolution (1.8 m) topographic information derived from LiDAR data and simple descriptions of initial pore-pressure distribution and boundary conditions for a study area north of Seattle, Washington. Rainfall information is taken from a previously defined empirical rainfall intensity-duration threshold and material strength and hydraulic properties were measured both in the field and laboratory. Results are tested by comparison with a shallow landslide inventory. Comparison of results with those from static infinite-slope stability analyses assuming fixed water-table heights shows that the spatial prediction of shallow landslide susceptibility is improved using the transient analyses; moreover, results can be depicted in terms of the rainfall intensity and duration known to trigger shallow landslides in the study area.
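The per-cell calculation described above couples a transient pressure head with the classical infinite-slope factor of safety. A minimal sketch of the stability part, in the form used by Iverson (2000); the soil parameter defaults are illustrative, and the transient term psi would come from the analytic infiltration solution:

```python
import math

def factor_of_safety(slope_deg, depth, cohesion, phi_deg, psi,
                     gamma_s=20000.0, gamma_w=9810.0):
    """Infinite-slope factor of safety at failure depth `depth` (m) with
    pressure head `psi` (m). Cohesion in Pa; soil and water unit weights
    in N/m^3. FS < 1 indicates predicted instability."""
    theta = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    frictional = math.tan(phi) / math.tan(theta)
    cohesive = (cohesion - psi * gamma_w * math.tan(phi)) / (
        gamma_s * depth * math.sin(theta) * math.cos(theta))
    return frictional + cohesive
```

Evaluating this on every grid cell, with psi updated through time from the rainfall input, reproduces the cell-by-cell transient susceptibility maps described above; rising pressure head drives FS below 1 on the steeper, weaker cells first.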
Non-Volcanic release of CO2 in Italy: quantification, conceptual models and gas hazard
NASA Astrophysics Data System (ADS)
Chiodini, G.; Cardellini, C.; Caliro, S.; Avino, R.
2011-12-01
Central and South Italy are characterized by the presence of many reservoirs naturally recharged by CO2 of deep provenance. In the western sector, the reservoirs feed hundreds of gas emissions at the surface. Many studies in recent years were devoted to (i) elaborating a map of CO2 Earth degassing of the region; (ii) assessing the gas hazard; (iii) developing methods suitable for the measurement of the gas fluxes from different types of emissions; (iv) elaborating the conceptual model of Earth degassing and its relation with the seismic activity of the region; and (v) developing physical numerical models of CO2 air dispersion. The main results obtained are: 1) A general, regional map of CO2 Earth degassing in Central Italy has been elaborated. The total flux of CO2 in the area has been estimated at ~10 Mt/a, which is released to the atmosphere through numerous dangerous gas emissions or by degassing spring waters (~10% of the CO2 globally estimated to be released by the Earth through volcanic activity). 2) An online, open-access, georeferenced database of the main CO2 emissions (~250) was set up (http://googas.ov.ingv.it). CO2 fluxes > 100 t/d characterise 14% of the degassing sites, while CO2 fluxes from 100 t/d to 10 t/d have been estimated for about 35% of the gas emissions. 3) The sites of the gas emissions are not suitable for life: the gas has caused many accidents to animals and people. In order to mitigate the gas hazard, a specific model of CO2 air dispersion has been developed and applied to the main degassing sites. A relevant application regarded Mefite d'Ansanto, southern Apennines, which is the largest natural emission of low temperature CO2-rich gases from a non-volcanic environment ever measured on Earth (~2000 t/d). Under low wind conditions, the gas flows along a narrow natural channel producing a persistent gas river which over time has killed many people and animals. The application of the physical numerical model allowed us to
Harper, Bryan; Thomas, Dennis G.; Chikkagoudar, Satish; Baker, Nathan A.; Tang, Kaizhi; Heredia-Langner, Alejandro; Lins, Roberto D.; Harper, Stacey
2015-06-04
The integration of rapid assays, large data sets, informatics and modeling can overcome current barriers in understanding nanomaterial structure-toxicity relationships by providing a weight-of-the-evidence mechanism to generate hazard rankings for nanomaterials. Here we present the use of a rapid, low-cost assay to perform screening-level toxicity evaluations of nanomaterials in vivo. Calculated EZ Metric scores, a combined measure of morbidity and mortality, were established at realistic exposure levels and used to develop a predictive model of nanomaterial toxicity. Hazard ranking and clustering analysis of 68 diverse nanomaterials revealed distinct patterns of toxicity related to both core composition and outermost surface chemistry of nanomaterials. The resulting clusters guided the development of a predictive model of gold nanoparticle toxicity to embryonic zebrafish. In addition, our findings suggest that risk assessments based on the size and core composition of nanomaterials alone may be wholly inappropriate, especially when considering complex engineered nanomaterials. These findings reveal the need to expeditiously increase the availability of quantitative measures of nanomaterial hazard and broaden the sharing of that data and knowledge to support predictive modeling. In addition, research should continue to focus on methodologies for developing predictive models of nanomaterial hazard based on sub-lethal responses to low dose exposures.
NASA Astrophysics Data System (ADS)
Koga-Vicente, A.; Friedel, M. J.
2010-12-01
Every year thousands of people are affected by floods and landslide hazards caused by rainstorms. The problem is more serious in tropical developing countries because of high susceptibility, a result of the large amount of energy available to form storms, and high vulnerability due to poor economic and social conditions. Predictive models of hazards are important tools to manage this kind of risk. In this study, two different modeling approaches were compared for predicting hydrometeorological hazards in 12 cities on the coast of São Paulo, Brazil, from 1994 to 2003. In the first approach, an empirical multiple linear regression (MLR) model was developed and used; the second approach used a type of unsupervised nonlinear artificial neural network called a self-organizing map (SOM). By using twenty-three independent variables of susceptibility (precipitation, soil type, slope, elevation, and regional atmospheric system scale) and vulnerability (distribution and total population, income and educational characteristics, poverty intensity, human development index), binary hazard responses were obtained. Model performance by cross-validation indicated that the respective MLR and SOM model accuracies were about 67% and 80%. Prediction accuracy can be improved by the addition of information, but the SOM approach is preferred because of sparse data and highly nonlinear relations among the independent variables.
Hanna, S.R.; Messier, T.; Schulman, L.L.
1988-10-01
There are currently many microcomputer-based hazard-response models available for calculating concentrations of hazardous chemicals in the atmosphere. The uncertainties associated with these models are not well known, and the models have not been adequately evaluated and compared using statistical procedures in which confidence limits are determined. The U.S. Air Force needs an objective method for evaluating these models, and this project provides a framework for performing these analyses and estimating the model uncertainties. As part of this research, available models and data sets were collected, methods for estimating uncertainties due to data-input errors and stochastic effects were developed, a framework for model evaluation was put together, and preliminary applications using test data sets took place.
Bayesian Inference on Proportional Elections
Brunello, Gabriel Hideki Vatanabe; Nakano, Eduardo Yoshio
2015-01-01
Polls for majoritarian voting systems usually show estimates of the percentage of votes for each candidate. However, proportional vote systems do not necessarily guarantee the candidate with the most percentage of votes will be elected. Thus, traditional methods used in majoritarian elections cannot be applied on proportional elections. In this context, the purpose of this paper was to perform a Bayesian inference on proportional elections considering the Brazilian system of seats distribution. More specifically, a methodology to answer the probability that a given party will have representation on the chamber of deputies was developed. Inferences were made on a Bayesian scenario using the Monte Carlo simulation technique, and the developed methodology was applied on data from the Brazilian elections for Members of the Legislative Assembly and Federal Chamber of Deputies in 2010. A performance rate was also presented to evaluate the efficiency of the methodology. Calculations and simulations were carried out using the free R statistical software. PMID:25786259
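The seat-allocation step behind such an analysis can be sketched in Python. The D'Hondt highest-averages method is the standard rule for Brazilian proportional elections; the resampling scheme below (multinomial draws around polled vote shares) is a simplified stand-in for the paper's Bayesian posterior, and all function names are illustrative:

```python
import numpy as np

def dhondt(votes, seats):
    """Allocate `seats` among parties by the D'Hondt highest-averages method."""
    votes = np.asarray(votes, dtype=float)
    alloc = np.zeros(len(votes), dtype=int)
    for _ in range(seats):
        # Each party's current quotient is votes / (seats already won + 1)
        quotients = votes / (alloc + 1)
        alloc[np.argmax(quotients)] += 1
    return alloc

def prob_representation(vote_shares, n_voters, seats, party, n_sim=10000, rng=None):
    """Monte Carlo estimate of the probability that `party` wins at least
    one seat; uncertainty is modeled by multinomial resampling of vote
    counts (an assumption -- the paper samples from a Bayesian posterior)."""
    rng = np.random.default_rng(rng)
    hits = 0
    for _ in range(n_sim):
        sampled = rng.multinomial(n_voters, vote_shares)
        if dhondt(sampled, seats)[party] > 0:
            hits += 1
    return hits / n_sim
```

With four parties polling 100000, 80000, 30000 and 20000 votes and 8 seats, D'Hondt awards 4, 3, 1 and 0 seats respectively; the Monte Carlo wrapper then turns poll uncertainty into a probability of representation.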
Combining SLBL routine with landslide-generated tsunami model for a quick hazard assessment tool
NASA Astrophysics Data System (ADS)
Franz, Martin; Rudaz, Benjamin; Jaboyedoff, Michel; Podladchikov, Yury
2016-04-01
Regions with steep topography are potentially subject to landslide-induced tsunamis because of the proximity between lakes, rivers, or sea shores and potential instabilities. The concentration of population and infrastructure on water-body shores and in downstream valleys could lead to catastrophic consequences. In order to assess this phenomenon comprehensively, together with the induced risks, we have developed a tool which allows the construction of the landslide geometry and which is able to simulate its propagation, the generation and propagation of the wave, and eventually the spread on the shores or the associated downstream flow. The tool is developed in the Matlab© environment, with a graphical user interface (GUI) to select the parameters in a user-friendly manner. The whole process is done in three steps involving different methods. Firstly, the geometry of the sliding mass is constructed using the Sloping Local Base Level (SLBL) concept. Secondly, the propagation of this volume is performed using a model based on viscous flow equations. Finally, the wave generation and its propagation are simulated using the shallow water equations stabilized by the Lax-Friedrichs scheme. The transition between wet and dry bed is handled by the combination of the two latter sets of equations. The intensity map is based on the flooding criterion used in Switzerland, provided by the OFEG, and results from the multiplication of the velocity and the depth obtained during the simulation. The tool can be used for hazard assessment in the case of well-known landslides, where the SLBL routine can be constrained and checked for realistic construction of the geometrical model. In less well-known cases, various failure-plane geometries can be built automatically within a given range, and thus a multi-scenario approach is used. In any case, less well-known parameters such as the landslide velocity, its run-out distance, etc. can also be set to vary within given ranges, leading to multi
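The wave-propagation step can be illustrated with a minimal one-dimensional shallow-water update stabilized by the Lax-Friedrichs scheme. This is a sketch only: periodic boundaries, no wet/dry transition handling, and all names are assumptions rather than the tool's actual API:

```python
import numpy as np

g = 9.81  # gravitational acceleration (m/s^2)

def lax_friedrichs_step(h, hu, dx, dt):
    """One Lax-Friedrichs update of the 1D shallow water equations.

    State: water depth h and momentum hu on a periodic grid. The scheme
    averages the neighbors and applies a centered flux difference, which
    adds the numerical diffusion that stabilizes the update."""
    u = np.where(h > 0, hu / np.maximum(h, 1e-12), 0.0)
    # Physical fluxes of the conservative variables (mass, momentum)
    f_h = hu
    f_hu = hu * u + 0.5 * g * h**2
    hp, hm = np.roll(h, -1), np.roll(h, 1)
    hup, hum = np.roll(hu, -1), np.roll(hu, 1)
    fhp, fhm = np.roll(f_h, -1), np.roll(f_h, 1)
    fhup, fhum = np.roll(f_hu, -1), np.roll(f_hu, 1)
    h_new = 0.5 * (hp + hm) - dt / (2 * dx) * (fhp - fhm)
    hu_new = 0.5 * (hup + hum) - dt / (2 * dx) * (fhup - fhum)
    return h_new, hu_new
```

Because the update is in conservative form on a periodic grid, total water volume is preserved to machine precision, which is a useful sanity check when embedding such a solver in a hazard tool.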
Enhancing Students' Understanding of Risk and Geologic Hazards Using a Dartboard Model.
ERIC Educational Resources Information Center
Lutz, Timothy M.
2001-01-01
Uses dartboards to represent magnitude-frequency relationships of natural hazards which engage students at different levels of preparation in different contexts, and for different lengths of time. Helps students to mitigate the misconceptions that processes occur periodically by emphasizing the random nature of hazards. Includes 12 references.…
Moschetti, Morgan P.; Mueller, Charles S.; Boyd, Oliver S.; Petersen, Mark D.
2014-01-01
In anticipation of the update of the Alaska seismic hazard maps (ASHMs) by the U.S. Geological Survey, we report progress on the comparison of smoothed seismicity models developed using fixed and adaptive smoothing algorithms, and investigate the sensitivity of seismic hazard to the models. While fault-based sources, such as those for great earthquakes in the Alaska-Aleutian subduction zone and for the ~10 shallow crustal faults within Alaska, dominate the seismic hazard estimates for locations near the sources, smoothed seismicity rates make important contributions to seismic hazard away from fault-based sources and where knowledge of recurrence and magnitude is not sufficient for use in hazard studies. Recent developments in adaptive smoothing methods and statistical tests for evaluating and comparing rate models prompt us to investigate the appropriateness of adaptive smoothing for the ASHMs. We develop smoothed seismicity models for Alaska using fixed and adaptive smoothing methods and compare the resulting models by calculating and evaluating the joint likelihood test. We use the earthquake catalog, and associated completeness levels, developed for the 2007 ASHM to produce fixed-bandwidth-smoothed models with smoothing distances varying from 10 to 100 km, as well as adaptively smoothed models. Adaptive smoothing follows the method of Helmstetter et al. and defines a unique smoothing distance for each earthquake epicenter from the distance to the nth nearest neighbor. The consequence of the adaptive smoothing method is to reduce smoothing distances, causing locally increased seismicity rates, where seismicity rates are high, and to increase smoothing distances where seismicity is sparse. We follow guidance from previous studies to optimize the neighbor number (n-value) by comparing model likelihood values, which estimate the likelihood that the observed earthquake epicenters from the recent catalog are derived from the smoothed rate models. We compare likelihood
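The adaptive bandwidth rule described above (each epicenter's smoothing distance equals the distance to its n-th nearest neighbor) can be sketched directly; function and variable names are illustrative:

```python
import numpy as np

def adaptive_bandwidths(epicenters, n_neighbor):
    """Helmstetter-style adaptive smoothing distances: for each epicenter,
    return the distance to its n-th nearest neighboring epicenter
    (a sketch in planar map coordinates; real catalogs need geographic
    distances and a brute-force distance matrix only suits small catalogs)."""
    pts = np.asarray(epicenters, dtype=float)
    # Pairwise Euclidean distance matrix
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    d_sorted = np.sort(d, axis=1)   # column 0 is the zero self-distance
    return d_sorted[:, n_neighbor]  # column n is the n-th nearest neighbor
```

Dense clusters thus get small bandwidths (sharper local rates) while isolated events get large ones, exactly the trade-off the abstract describes.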
Long-Term Slip History Discriminates Among Occurrence Models for Seismic Hazard Assessment
NASA Astrophysics Data System (ADS)
Fitzenz, D. D.; Ferry, M. A.; Jalobeanu, A.
2010-12-01
Today, the probabilistic seismic hazard assessment (PSHA) community relies on one or a combination of stochastic models to compute occurrence probabilities for large earthquakes. Considerable efforts have been devoted to extracting the maximum information from long catalogues of large earthquakes (CLE) based on instrumental, historical, archeological and paleoseismological data (Biasi et al., 2009; Parsons, 2008; Rhoades and Van Dissen, 2003). However, the models remain insufficiently constrained by these rare single-event slip data. Therefore, the selection of the models and their respective weights is necessarily left to the judgment of a panel of experts (WGCEP, 2003). Since cumulative slip data with high temporal and spatial resolution are now available, we propose here a new approach to incorporate these pieces of evidence of mid- to long-term fault behavior into the next generation of PSHA: the Cumulative Offset-Based Bayesian Recurrence Analysis (COBBRA). Applied to the Jordan Valley segment of the Dead Sea Fault, the method yields the best combination of occurrence models for full-segment ruptures given the available single-event and cumulative data. Not only does our method provide data-driven, objective weights for the competing models, but it also allows us to rule out time-independence and to compute the cumulative probability of occurrence for the next full-segment event, reflecting all available data. References: Biasi, G. P. & Weldon, R. J., II. Bull. Seism. Soc. Am. 99, 471-498, doi:10.1785/0120080287 (2009). Parsons, T. J. Geophys. Res. 113, doi:10.1029/2007JB004998 (2008). Rhoades, D. A., and R. J. Van Dissen, New Zealand Journal of Geology & Geophysics 46, 479-488 (2003). Working Group on California Earthquake Probabilities. Earthquake Probabilities in the San Francisco Bay Region: 2002-2031 (2003).
NASA Astrophysics Data System (ADS)
Climent, A.; Benito, M. B.; Piedra, R.; Lindholm, C.; Gaspar-Escribano, J.
2013-05-01
We present the results of a study aimed at choosing the most suitable strong-motion models for seismic hazard analysis in the Central America (CA) region. After a careful revision of the state of the art, different models developed for subduction and volcanic crustal zones, in tectonic environments similar to those of CA, were selected. These models were calibrated with accelerograms recorded in Costa Rica, Nicaragua and El Salvador. The peak ground acceleration (PGA) and spectral acceleration SA(T) derived from the records were compared with the values predicted by the models under similar conditions of magnitude, distance and soil. The type of magnitude (Ms, mb, Mw), distance (Rhyp, Rrup, etc.) and ground-motion parameter (maximum horizontal component, geometric mean, etc.) was taken into account in the comparison with the real data. As a result of the analysis, the models which best fit the local data were identified. These models have been applied in seismic hazard analyses in the region, in the frame of the RESIS II project, financed by the Norwegian Foreign Department, and also in the Spanish project SISMOCAES. The methodology followed is based on the direct comparison between PGA and 5%-damped SA response values extracted from actual records and the corresponding acceleration values predicted by the selected ground-motion models for similar magnitude, distance and soil conditions. Residuals between observed and predicted values for PGA and SA (1 s) are calculated and plotted as a function of distance and magnitude, analyzing their deviation from the mean value. In addition, and most importantly, a statistical analysis of the normalized residuals was carried out using the criteria proposed by Scherbaum et al. (2004), which consist of categorizing ground-motion models by a likelihood parameter that reflects the goodness-of-fit of the median values as well as the shape of the underlying distribution of ground-motion residuals. Considering
NASA Astrophysics Data System (ADS)
Schneider, Demian; Huggel, Christian; García, Javier; Ludeña, Sebastian; Cochachin, Alejo
2013-04-01
The Cordilleras in Peru are especially vulnerable to, and affected by, impacts from climate change. Local communities and cities often lie directly within the reach of major hazard potentials such as lake outburst floods (aluviones), mud/debris flows (huaycos) or large rock/ice avalanches. Such events have repeatedly and strongly affected these regions over the last century, and thousands of people have been killed. One of the most recent events in the Cordillera Blanca occurred on 11 April 2010, when a rock/ice avalanche from the top of Hualcán mountain, NE of the town of Carhuaz, impacted glacier lake 513 (Laguna 513), caused displacement waves and triggered an outburst flood. The flow repeatedly transformed between debris flow and hyperconcentrated flow and eventually caused significant damage in Carhuaz. This event motivated early warning and prevention efforts to reduce risks related to ice/rock avalanches and glacier lake outburst floods (GLOFs). One of the basic components of an early warning system is the assessment, understanding and communication of relevant hazards and risks. Here we report on the methodology and results of generating GLOF-related hazard maps for Carhuaz based on numerical modeling and field work. This exercise required an advanced concept and the implementation of different mass-movement models. Specifically, numerical models were applied for simulating avalanche flow, avalanche impact on the lake, displacement wave generation and lake overtopping, and eventually flow propagation of the outburst flood, with changing rheology between debris flow and hyperconcentrated flow. We adopted a hazard mapping procedure slightly adjusted from guidelines developed in Switzerland and in the Andes region. A methodology has thereby been developed to translate results from numerical mass-movement modeling into hazard maps. The resulting hazard map was verified and adjusted during field work. This study shows
Hazard function theory for nonstationary natural hazards
NASA Astrophysics Data System (ADS)
Read, L.; Vogel, R. M.
2015-12-01
Studies from the natural hazards literature indicate that many natural processes, including wind speeds, landslides, wildfires, precipitation, streamflow and earthquakes, show evidence of nonstationary behavior such as trends in magnitudes through time. Traditional probabilistic analysis of natural hazards based on partial duration series (PDS) generally assumes stationarity in the magnitudes and arrivals of events, i.e. that the probability of exceedance is constant through time. Given evidence of trends and the consequent expected growth in devastating impacts from natural hazards across the world, new methods are needed to characterize their probabilistic behavior. The field of hazard function analysis (HFA) is ideally suited to this problem because its primary goal is to describe changes in the exceedance probability of an event over time. HFA is widely used in medicine, manufacturing, actuarial statistics, reliability engineering, economics, and elsewhere. HFA provides a rich theory to relate the natural hazard event series (x) with its failure time series (t), enabling computation of corresponding average return periods and reliabilities associated with nonstationary event series. This work investigates the suitability of HFA to characterize nonstationary natural hazards whose PDS magnitudes are assumed to follow the widely applied Poisson-GP model. We derive a 2-parameter Generalized Pareto hazard model and demonstrate how metrics such as reliability and average return period are impacted by nonstationarity and discuss the implications for planning and design. Our theoretical analysis linking hazard event series x, with corresponding failure time series t, should have application to a wide class of natural hazards.
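Under the Poisson-GP model named above, the exceedance probability, the reliability over a planning horizon, and the average return period can be sketched as follows. A zero-threshold, 2-parameter Generalized Pareto magnitude distribution is assumed, and nonstationarity enters only through time-varying Poisson arrival rates; all names are illustrative:

```python
import numpy as np

def gp_exceedance(x, scale, shape):
    """P(X > x) for a 2-parameter (zero-threshold) Generalized Pareto
    magnitude; the shape -> 0 limit is the exponential distribution."""
    if abs(shape) < 1e-12:
        return np.exp(-x / scale)
    return np.maximum(1 + shape * x / scale, 0.0) ** (-1.0 / shape)

def reliability(x, scale, shape, rates):
    """Probability that level x is never exceeded over the years covered by
    `rates` (Poisson arrival rates per year, which may trend through time
    in the nonstationary case); exceedances thin the Poisson process."""
    p = gp_exceedance(x, scale, shape)
    return float(np.exp(-np.sum(np.asarray(rates) * p)))

def average_return_period(x, scale, shape, rate):
    """Stationary average return period (years) of exceeding level x."""
    return 1.0 / (rate * gp_exceedance(x, scale, shape))
```

With a trending `rates` sequence, the reliability of a fixed design level decays faster (or slower) than the stationary formula predicts, which is the design implication the abstract highlights.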
Ranking of several ground-motion models for seismic hazard analysis in Iran
NASA Astrophysics Data System (ADS)
Ghasemi, H.; Zare, M.; Fukushima, Y.
2008-09-01
In this study, six attenuation relationships are classified with respect to the ranking scheme proposed by Scherbaum et al. (2004 Bull. Seismol. Soc. Am. 94 1-22). First, the strong motions recorded during the 2002 Avaj, 2003 Bam, 2004 Kojour and 2006 Silakhor earthquakes are consistently processed. Then the normalized residual sets are determined for each selected ground-motion model, considering the strong-motion records chosen. The main advantage of these records is that the causative fault planes have been well studied for the selected events. Such information is used to estimate several control parameters which are essential inputs for attenuation relations. The selected relations (Zare et al. (1999 Soil Dyn. Earthq. Eng. 18 101-23); Fukushima et al. (2003 J. Earthq. Eng. 7 573-98); Sinaeian (2006 PhD Thesis, International Institute of Earthquake Engineering and Seismology, Tehran, Iran); Boore and Atkinson (2007 PEER Report 2007/01); Campbell and Bozorgnia (2007 PEER Report 2007/02); and Chiou and Youngs (2006 PEER Interim Report for USGS Review)) have been deemed suitable for predicting peak ground-motion amplitudes in the Iranian plateau. Several graphical techniques and goodness-of-fit measures are also applied for statistical analysis of the distributions of the normalized residual sets. This analysis reveals the ground-motion models developed using Iranian strong-motion records to be the most appropriate ones in the Iranian context. The results of the present study are applicable to seismic hazard assessment projects in Iran.
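For a single record, the Scherbaum et al. (2004) likelihood parameter used in such rankings reduces to the standard-normal probability mass beyond the normalized residual; a minimal sketch (function names are assumptions, and a full ranking aggregates LH over the whole residual set):

```python
import math

def normalized_residual(obs_ln, pred_ln, sigma_ln):
    """Normalized (z) residual of an observed amplitude against a model
    prediction, all in natural-log units with the model's log-sigma."""
    return (obs_ln - pred_ln) / sigma_ln

def lh(z):
    """Likelihood parameter LH = 2 * (1 - Phi(|z|)) = erfc(|z| / sqrt(2)).
    LH near 1: observation near the model median; LH near 0: observation
    deep in the tail of the model's predicted distribution."""
    return math.erfc(abs(z) / math.sqrt(2.0))
```

A model whose LH values cluster near 1 and whose z residuals have near-zero mean and near-unit variance ranks highly; models with biased or over/under-dispersed residuals are demoted.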
Geochemical transformations and modeling of two deep-well injected hazardous wastes
Roy, W.R.; Seyler, B.; Steele, J.D.; Mravik, S.C.; Moore, D.M.; Krapac, I.G.; Peden, J.M.; Griffin, R.A.
1991-01-01
Two liquid hazardous wastes (an alkaline brine-like solution and a dilute acidic waste) were mixed with finely ground rock samples of three injection-related lithologies (sandstone, dolomite, and siltstone) for 155 to 230 days at 325 K and 10.8 MPa. The pH and inorganic chemical composition of the alkaline waste were not significantly altered by any of the rock samples after 230 days of mixing. The acidic waste was neutralized as a consequence of carbonate dissolution, ion exchange, or clay-mineral dissolution, and hence was transformed into a nonhazardous waste. Mixing the alkaline waste with the solid phases yielded several reaction products: brucite, Mg(OH)2; calcite, CaCO3; and possibly a type of sodium metasilicate. Clay-like minerals formed in the sandstone, and hydrotalcite, Mg6Al2CO3(OH)16·4H2O, may have formed in the siltstone at trace levels. Mixing the alkaline waste with a synthetic brine yielded brucite, calcite, and whewellite (CaC2O4·H2O). The thermodynamic model PHRQPITZ predicted that brucite and calcite would precipitate from solution in the dolomite and siltstone mixtures and in the alkaline waste-brine system. The dilute acidic waste did not significantly alter the mineralogical composition of the three rock types after 155 days of contact. The model PHREEQE indicated that the calcite was thermodynamically stable in the dolomite and siltstone mixtures.
Probabilistic tsunami hazard analysis (PTHA) of Taiwan region by stochastic model
NASA Astrophysics Data System (ADS)
Sun, Y. S.; Chen, P. F.; Chen, C. C.
2014-12-01
We conduct probabilistic tsunami hazard analysis (PTHA) of the Taiwan region for earthquake sources in the Ryukyu trench. The PTHA estimates the probability that a site is hit by tsunamis exceeding a given amplitude threshold. The probabilities were integrated over earthquakes of various magnitudes from potential fault zones in the Ryukyu trench. The annual frequencies of earthquakes in a fault zone are determined or extrapolated from the magnitude-frequency distribution of earthquakes (Gutenberg-Richter law) of the zone. Given the moment (or magnitude) of an earthquake, we first synthesize complex, heterogeneous slip distributions on the fault using a stochastic model, in which the slip and stress-drop distributions are assumed to be fractional Brownian motion processes characterized by a Hurst exponent. Following the ω-2 model of earthquakes, slip distributions are generated in the Fourier domain by randomizing the phase spectrum at wavenumbers greater than the corner wavenumber kc. Finally, the vertical seafloor displacements induced by each slip distribution are used as input to COMCOT tsunami simulations to assess the impacts on various coasts of Taiwan.
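The spectral synthesis of a random slip distribution can be sketched in one dimension: spectral amplitudes fall off roughly as k^-2 beyond the corner wavenumber and phases are randomized there. All parameter choices below are illustrative, and the paper works in two dimensions on the fault plane:

```python
import numpy as np

def stochastic_slip(n, kc, mean_slip, rng=None):
    """1D random slip profile with an omega-squared-like spectrum:
    amplitudes are flat below the corner wavenumber kc and decay as k^-2
    above it; phases are random only beyond kc, so the long-wavelength
    shape is deterministic while short-wavelength roughness is stochastic."""
    rng = np.random.default_rng(rng)
    k = np.fft.rfftfreq(n)                 # non-dimensional wavenumbers
    amp = 1.0 / (1.0 + (k / kc) ** 2)      # ~flat below kc, ~k^-2 above
    phase = np.where(k > kc, rng.uniform(0, 2 * np.pi, k.size), 0.0)
    spec = amp * np.exp(1j * phase)
    slip = np.fft.irfft(spec, n)
    slip -= slip.min()                     # enforce non-negative slip
    if slip.mean() > 0:
        slip *= mean_slip / slip.mean()    # rescale to the target mean slip
    return slip
```

Repeated draws with different random seeds give the ensemble of heterogeneous slip patterns over which tsunami amplitudes are then integrated.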
Hierarchical Bayesian modelling of mobility metrics for hazard model input calibration
NASA Astrophysics Data System (ADS)
Calder, Eliza; Ogburn, Sarah; Spiller, Elaine; Rutarindwa, Regis; Berger, Jim
2015-04-01
In this work we present a method to constrain flow-mobility input parameters for pyroclastic flow models using hierarchical Bayesian modeling of standard mobility metrics such as H/L and flow volume. The advantage of hierarchical modeling is that it can leverage the information in a global dataset for a particular mobility metric in order to reduce the uncertainty in modeling an individual volcano, which is especially important where individual volcanoes have only sparse datasets. We use compiled pyroclastic flow runout data from Colima, Merapi, Soufriere Hills, Unzen and Semeru volcanoes, presented in an open-source database, FlowDat (https://vhub.org/groups/massflowdatabase). While the exact relationship between flow volume and friction varies somewhat between volcanoes, dome-collapse flows originating from the same volcano exhibit similar mobility relationships. Instead of fitting separate regression models to each volcano's dataset, we use a variation of the hierarchical linear model (Kass and Steffey, 1989). The model has a hierarchical structure with two levels: all dome-collapse flows, and dome-collapse flows at specific volcanoes. The hierarchical model allows us to assume that the flows at specific volcanoes share a common distribution of regression slopes, and then solves for that distribution. We present comparisons of the 95% confidence intervals on the individual regression lines for the dataset from each volcano, as well as those obtained from the hierarchical model. The results clearly demonstrate the advantage of considering global datasets using this technique. The technique is demonstrated here for mobility metrics but can be applied to many other global datasets of volcanic parameters. In particular, such methods can provide a means to better constrain parameters for volcanoes for which we have only sparse data, a ubiquitous problem in volcanology.
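The partial-pooling idea can be illustrated with an empirical-Bayes shrinkage of per-volcano regression slopes toward a global mean. This is a simplification of the Kass-Steffey hierarchical linear model: the between-volcano variance is taken as given rather than estimated, and all names are illustrative:

```python
import numpy as np

def shrink_slopes(slopes, se2, tau2):
    """Partial pooling of per-volcano regression slopes: each slope is
    pulled toward the global mean, with a weight set by its sampling
    variance se2 relative to the between-volcano variance tau2.
    Sparse-data volcanoes (large se2) are shrunk the most."""
    slopes = np.asarray(slopes, float)
    se2 = np.asarray(se2, float)
    # Precision-weighted pooled estimate of the global mean slope
    mu = np.average(slopes, weights=1.0 / (se2 + tau2))
    w = tau2 / (tau2 + se2)   # reliability weight of each local estimate
    return mu, w * slopes + (1 - w) * mu
```

Each shrunk slope lies between its volcano's own estimate and the global mean, which is exactly how the hierarchical model tightens confidence intervals for sparsely sampled volcanoes.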
A method for estimating proportions
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr.; Marion, B. P.
1975-01-01
A proportion estimation procedure is presented which requires only one set of ground truth data for determining the error matrix. The error matrix is then used to determine an unbiased estimate. The error matrix is shown to be directly related to the probabilities of misclassification, and it becomes more diagonally dominant as the number of passes increases.
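The correction can be sketched as a linear inversion: if the error matrix M holds the probability of each classified label given each true class, the expected classified proportions satisfy p_obs = M p_true, so the unbiased estimate solves that system. Names below are assumptions, not the paper's notation:

```python
import numpy as np

def unbiased_proportions(classified_props, error_matrix):
    """Correct classified proportions using the misclassification matrix
    M, where M[i, j] = P(classified as i | true class j), estimated from
    ground-truth data. Solving M @ p_true = p_classified removes the
    classification bias (assuming M is well-conditioned)."""
    M = np.asarray(error_matrix, float)
    p = np.asarray(classified_props, float)
    return np.linalg.solve(M, p)
```

A more diagonally dominant M (fewer misclassifications) makes the inversion better conditioned, consistent with the abstract's observation about additional passes.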
Proportional Reasoning with a Pyramid
ERIC Educational Resources Information Center
Mamolo, Ami; Sinclair, Margaret; Whiteley, Walter J.
2011-01-01
Proportional reasoning pops up in math class in a variety of places, such as while making scaled drawings; finding equivalent fractions; converting units of measurement; comparing speeds, prices, and rates; and comparing lengths, areas, and volume. Students need to be exposed to a variety of representations to develop a sound understanding of this…
Social Justice and Proportional Reasoning
ERIC Educational Resources Information Center
Simic-Muller, Ksenija
2015-01-01
Ratio and proportional reasoning tasks abound that have connections to real-world situations. Examples in this article demonstrate how textbook tasks can easily be transformed into authentic real-world problems that shed light on issues of equity and fairness, such as population growth and crime rates. A few ideas are presented on how teachers can…
NASA Astrophysics Data System (ADS)
Anderson, E. R.; Griffin, R.; Irwin, D.
2013-12-01
Heavy rains and steep, volcanic slopes in El Salvador cause numerous landslides every year, posing a persistent threat to the population, economy and environment. Although potential debris inundation hazard zones have been delineated using digital elevation models (DEMs), some disparities exist between the simulated zones and actual affected areas. Moreover, these hazard zones have only been identified for volcanic lahars and not for the shallow landslides that occur nearly every year, despite the availability of tools to delineate a variety of landslide types (e.g., the USGS-developed LAHARZ software). Limitations in DEM spatial resolution, age of the data, and hydrological preprocessing techniques can contribute to inaccurate hazard zone definitions. This study investigates the impacts of using different elevation models and pit-filling techniques on the final debris hazard zone delineations, in an effort to determine which combination of methods most closely agrees with observed landslide events. In particular, a national DEM digitized from topographic sheets from the 1970s and 1980s provides an elevation product at a 10 meter resolution. Both natural and anthropogenic modifications of the terrain limit the accuracy of current landslide hazard assessments derived from this source. Global products from the Shuttle Radar Topography Mission (SRTM) and the Advanced Spaceborne Thermal Emission and Reflection Radiometer Global DEM (ASTER GDEM) offer more recent data, but at the cost of spatial resolution. New data derived from the NASA Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) in 2013 provide the opportunity to update hazard zones at a higher spatial resolution (approximately 6 meters). Hydrological filling of sinks or pits for current hazard zone simulation has previously been achieved through ArcInfo Spatial Analyst. Such hydrological processing typically only fills pits and can lead to drastic modifications of original elevation values
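One widely used alternative to simple sink filling is the priority-flood algorithm, which raises each cell only to the level of its lowest exit path toward the DEM boundary. A minimal sketch on a 4-connected grid follows (illustrative only, not the ArcInfo implementation):

```python
import heapq
import numpy as np

def priority_flood_fill(dem):
    """Fill depressions in a DEM by priority-flood: starting from the
    boundary, cells are processed in order of increasing elevation, and
    each interior cell is raised to the spill level of the lowest path
    connecting it to the boundary. Boundary cells are never modified."""
    dem = np.asarray(dem, float)
    filled = dem.copy()
    rows, cols = dem.shape
    seen = np.zeros(dem.shape, bool)
    heap = []
    # Seed the priority queue with all boundary cells
    for r in range(rows):
        for c in range(cols):
            if r in (0, rows - 1) or c in (0, cols - 1):
                heapq.heappush(heap, (dem[r, c], r, c))
                seen[r, c] = True
    while heap:
        z, r, c = heapq.heappop(heap)  # z: filled elevation of this cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not seen[nr, nc]:
                seen[nr, nc] = True
                # Raise the neighbor to at least the current spill level
                filled[nr, nc] = max(dem[nr, nc], z)
                heapq.heappush(heap, (filled[nr, nc], nr, nc))
    return filled
```

Unlike naive filling, no cell is ever raised above its lowest escape route, which limits the "drastic modifications of original elevation values" noted above.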
ERIC Educational Resources Information Center
De Bock, Dirk; Van Dooren, Wim; Verschaffel, Lieven
2015-01-01
We investigated students' understanding of proportional, inverse proportional, and affine functions and the way this understanding is affected by various external representations. In a first study, we focus on students' ability to model textual descriptions of situations with different kinds of representations of proportional, inverse…
Application of physical erosion modelling to derive off-site muddy flood hazard
NASA Astrophysics Data System (ADS)
Annika Arevalo, Sarah; Schmidt, Jürgen
2015-04-01
Muddy floods are local inundation events after heavy rain storms. They occur inside watersheds before the runoff reaches a river. The sediment is eroded from agricultural fields and transported with the surface runoff into adjacent residential areas. The environment where muddy floods occur is very small-scale. The damages related to muddy floods are caused by the runoff water (flooded houses and cellars) and by the transported sediment that is deposited on infrastructure and private properties. A variety of factors drive the occurrence of muddy floods; their spatial extent is rather small and their distribution is very heterogeneous, which makes predicting the precise locations endangered by muddy flooding a challenge. The aim of this investigation is to identify, from modelled soil erosion data, potential hazard areas that might suffer muddy flooding. For the German state of Saxony, a modelled map of soil erosion and particle transport is available. The model applied is EROSION 3D, with a spatial resolution of a 20 m raster, and the conditions assumed are a 10-year rainfall event on uncovered agricultural soils. A digital landuse map is compiled, containing the outer borders of potential risk elements (residential and industrial areas, streets, railroads, etc.) that can be damaged by muddy flooding. The landuse map is merged with the transported-sediment map calculated with EROSION 3D. The result precisely depicts the locations where high amounts of sediment might be transported into urban areas under worst-case conditions. This map was validated against observed muddy flood events, which proved to coincide very well with the areas predicted to have a potentially high sediment input.
Statistical inference for the additive hazards model under outcome-dependent sampling
Yu, Jichang; Liu, Yanyan; Sandler, Dale P.; Zhou, Haibo
2015-01-01
Cost-effective study designs and proper inference procedures for the resulting data are always of particular interest to study investigators. In this article, we propose a biased sampling scheme, an outcome-dependent sampling (ODS) design, for survival data with right censoring under the additive hazards model. We develop a weighted pseudo-score estimator for the regression parameters under the proposed design and derive the asymptotic properties of the proposed estimator. We also provide some suggestions for using the proposed method by evaluating its relative efficiency against the simple random sampling design, and derive the optimal allocation of the subsamples for the proposed design. Simulation studies show that the proposed ODS design is more powerful than other existing designs and the proposed estimator is more efficient than other estimators. We apply our method to analyze a cancer study conducted at NIEHS, the Cancer Incidence and Mortality of Uranium Miners Study, to study the cancer risk associated with radon exposure. PMID:26379363
Modelling short term individual exposure from airborne hazardous releases in urban environments.
Bartzis, J G; Efthimiou, G C; Andronopoulos, S
2015-12-30
A key issue in coping with deliberate or accidental atmospheric releases of hazardous substances is the ability to reliably predict the individual exposure downstream of the source. In many situations, the release time and/or the health-relevant exposure time is short compared to mean concentration time scales. In such cases, a significant scatter of exposure levels is expected due to the stochastic nature of turbulence. The problem becomes even more complex when dispersion occurs over urban environments. The present work is the first attempt to approximate, in generic terms, the statistical behavior of this variability with a beta-distribution probability density function (beta-pdf), which has proved quite successful. The important issue of the extreme concentration value in the beta-pdf seems to be properly addressed by the [5] correlation, in which global values of its associated constants are proposed. Two substantially different datasets, the wind tunnel Michelstadt experiment and the field Mock Urban Setting Trial (MUST) experiment, give clear support to the proposed theory and its hypotheses. In addition, the present work can be considered a basis for further investigation and model refinement. PMID:26184800
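The beta-pdf approach can be sketched numerically. The sample and shape parameters below are hypothetical, not taken from the Michelstadt or MUST datasets; the sketch only shows a method-of-moments fit of a beta distribution to short-term concentrations normalized by an assumed extreme value, and the resulting exceedance probability for a threshold.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical exposure sample: short-term concentrations normalized by an
# assumed extreme (maximum) concentration, so values lie in (0, 1).
c_over_cmax = rng.beta(a=1.5, b=6.0, size=10_000)

# Method-of-moments estimates of the beta-pdf shape parameters.
m, v = c_over_cmax.mean(), c_over_cmax.var()
common = m * (1.0 - m) / v - 1.0
alpha_hat = m * common
beta_hat = (1.0 - m) * common

# Empirical exceedance probability of an (assumed) health-relevant threshold.
threshold = 0.5
p_exceed = (c_over_cmax > threshold).mean()
```

The same moment relations run in reverse of the textbook beta mean/variance formulas, so the fit is closed-form; a real application would estimate the extreme concentration from a correlation such as the one the abstract cites.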
NASA Astrophysics Data System (ADS)
Harbi, Assia; Meghraoui, Mustapha; Belabbes, Samir; Maouche, Said
2010-05-01
The western Mediterranean region was the site of numerous large earthquakes in the past. Most of these earthquakes are located at the East-West trending Africa-Eurasia plate boundary and along the coastline of North Africa. The most recent recorded tsunamigenic earthquake occurred in 2003 at Zemmouri-Boumerdes (Mw 6.8) and generated a ~2-m-high tsunami wave. The destructive wave affected the Balearic Islands and Almeria in southern Spain and Carloforte in southern Sardinia (Italy). The earthquake provided a unique opportunity to gather instrumental records of seismic waves and tide gauges in the western Mediterranean. A database that includes a historical catalogue of main events, seismic sources and related fault parameters was prepared in order to assess the tsunami hazard of this region. In addition to the analysis of the 2003 records, we study the 1790 Oran and 1856 Jijel historical tsunamigenic earthquakes (Io = IX and X, respectively), which provide detailed observations on the heights and extension of past tsunamis and damage in coastal zones. We performed the modelling of wave propagation using the NAMI-DANCE code and tested different fault sources from synthetic tide gauges. We observe that the characteristics of seismic sources control the size and directivity of tsunami wave propagation on both northern and southern coasts of the western Mediterranean.
NASA Astrophysics Data System (ADS)
Mercado, A., Jr.
2015-12-01
The island of Puerto Rico is not only located in the so-called Caribbean hurricane alley, but also in a tsunami-prone region, and both phenomena have affected the island. For the past few years we have undertaken the task of upgrading the available coastal flood maps for storm surges and tsunamis. This has been done taking advantage of new Lidar-derived, high-resolution topography and bathymetry and state-of-the-art models (MOST for tsunamis and ADCIRC/SWAN for storm surges). The tsunami inundation maps have been converted to evacuation maps. For tsunamis, we are also preparing hazard maps for tsunami currents inside ports, bays, and marinas. The storm surge maps include two scenarios of sea level rise: 0.5 and 1.0 m above Mean High Water. All maps have been adopted by the Puerto Rico State Emergency Management Agency and are publicly available through the Internet. The purpose of this presentation is to summarize how this has been done, the spin-off applications it has generated, and how we plan to improve coastal flooding predictions.
Numerical modeling of marine Gravity data for tsunami hazard zone mapping
NASA Astrophysics Data System (ADS)
Porwal, Nipun
2012-07-01
A tsunami is a series of ocean waves with very long wavelengths, ranging from 10 to 500 km; tsunamis therefore behave as shallow-water waves and are hard to predict by most methods. Bottom Pressure Recorders of the Poseidon class are considered a preeminent means of detecting tsunami waves, but the acoustic modems of ocean bottom pressure (OBP) sensors placed in the vicinity of trenches deeper than 6000 m fail to relay OBP data to surface buoys. This paper therefore develops a numerical model of the gravity field coefficients from the Bureau Gravimetric International (BGI), which do not by themselves play a central role in the study of geodesy, satellite orbit computation, and geophysics, but from which, by mathematical transformation using normalized Legendre polynomials, high-resolution ocean bottom pressure (OBP) data are generated. Real-time sea-level-monitored OBP data at 0.3° by 1° spatial resolution, produced with a Kalman filter (kf080) over the past 10 years by Estimating the Circulation and Climate of the Ocean (ECCO), have been correlated with the OBP data derived from the gravity field coefficients; this supports a feasibility study of future tsunami detection from space and of identifying the most suitable sites for placing OBP sensors near deep trenches. The Levitus climatological temperature and salinity are assimilated into the version of the MITGCM using the adjoint method to obtain the sea-height segment. TOPEX/Poseidon satellite altimetry, surface momentum, heat, and freshwater fluxes from the NCEP reanalysis product, and the dynamic ocean topography DOT_DNSCMSS08_EGM08 are then used to interpret sea-bottom elevation. All datasets are combined in the raster calculator of ArcGIS 9.3 using a Boolean intersection algebra method and proximity analysis tools together with a high-resolution sea-floor topographic map. The tsunami-prone areas and suitable sites for deploying BPRs identified in this research are then validated using a passive microwave radiometry system for the tsunami hazard zone
On the predictive information criteria for model determination in seismic hazard analysis
NASA Astrophysics Data System (ADS)
Varini, Elisa; Rotondi, Renata
2016-04-01
estimate, but it is hardly applicable to data that are not independent given the parameters (Watanabe, J. Mach. Learn. Res., 2010). A solution is given by the Ando and Tsay criterion, where the joint density may be decomposed into the product of the conditional densities (Ando and Tsay, Int. J. Forecast., 2010). The above-mentioned criteria are global summary measures of model performance, but more detailed analysis may be required to discover the reasons for poor global performance. In this latter case, a retrospective predictive analysis is performed on each individual observation. In this study we perform a Bayesian analysis of Italian data sets with four versions of a long-term hazard model known as the stress release model (Vere-Jones, J. Physics Earth, 1978; Bebbington and Harte, Geophys. J. Int., 2003; Varini and Rotondi, Environ. Ecol. Stat., 2015). We then illustrate their performance as evaluated by Bayes factors, predictive information criteria, and retrospective predictive analysis.
Numerical Stress Field Modelling: from geophysical observations toward volcano hazard assessment
NASA Astrophysics Data System (ADS)
Currenti, Gilda; Coco, Armando; Privitera, Emanuela
2015-04-01
Numerical results show the contribution of groundwater head gradients associated with topographically induced flow and pore-pressure changes, providing a quantitative estimate for deformation and failure of the volcano edifice. The comparison between the predictions of the model and the observations can provide valuable insights about the stress state of the volcano and, hence, about the likelihood of an impending eruption. This innovative approach opens up new perspectives in geodetic inverse modelling and lays the basis for future development of volcano hazard assessment based on a critical combination of geophysical observations and numerical modelling.
Modeling Information Accumulation in Psychological Tests Using Item Response Times
ERIC Educational Resources Information Center
Ranger, Jochen; Kuhn, Jörg-Tobias
2015-01-01
In this article, a latent trait model is proposed for the response times in psychological tests. The latent trait model is based on the linear transformation model and subsumes popular models from survival analysis, like the proportional hazards model and the proportional odds model. Core of the model is the assumption that an unspecified monotone…
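The proportional hazards special case that the transformation model subsumes can be illustrated with a small simulation. The Weibull baseline shape and the trait effect below are assumed values for illustration, not parameters from the article; the point is that a higher latent trait scales the hazard multiplicatively and therefore shortens response times.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed illustrative parameters: Weibull baseline shape k, trait effect beta.
k, beta = 1.5, 0.8
n = 50_000
theta = rng.standard_normal(n)  # latent trait, one value per test taker

# Under a proportional hazards model the cumulative hazard is
# H(t | theta) = t**k * exp(beta * theta), so inverse-transform sampling
# gives a response time directly from a uniform draw.
u = rng.uniform(size=n)
t_resp = (-np.log(u) / np.exp(beta * theta)) ** (1.0 / k)

# Higher trait -> proportionally higher hazard -> faster responses on average.
fast = t_resp[theta > 0].mean()
slow = t_resp[theta < 0].mean()
```

The same inversion works for any monotone baseline transformation, which is exactly the generality the linear transformation model exploits.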
Proportional counter as neutron detector
NASA Technical Reports Server (NTRS)
Braby, L. A.; Badhwar, G. D.
2001-01-01
A technique to separate out the dose and the lineal energy spectra of neutrons and charged particles is described. It is based on using two proportional counters with otherwise similar characteristics, one with a conventional wall and the other with a wall made from a non-hydrogen-containing material. Results of a calibration in a neutron field are also shown. © 2001 Elsevier Science Ltd. All rights reserved.
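The subtraction logic behind the paired-counter technique might be sketched as follows, under the simplifying assumption (not stated in the abstract) that the non-hydrogenous-wall counter is entirely insensitive to neutrons, so that the difference between the two readings can be attributed to the neutron component.

```python
def separate_dose(d_hydrogenous_wall, d_non_hydrogenous_wall):
    """Simplified dose separation for a paired-counter measurement.

    Assumes the counter with the non-hydrogen-containing wall responds only
    to charged particles, while the hydrogenous-wall counter responds to both
    neutrons and charged particles. Returns (charged dose, neutron dose).
    """
    charged = d_non_hydrogenous_wall
    neutron = d_hydrogenous_wall - d_non_hydrogenous_wall
    return charged, neutron

# Hypothetical readings in mGy from the two counters in a mixed field.
charged, neutron = separate_dose(2.4, 0.9)
```

A real analysis would work channel by channel on the lineal energy spectra and correct for the residual neutron sensitivity of the second counter; the scalar subtraction above is only the core idea.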
Metacarpal proportions in Australopithecus africanus.
Green, David J; Gordon, Adam D
2008-05-01
Recent work has shown that, despite being craniodentally more derived, Australopithecus africanus had more apelike limb-size proportions than A. afarensis. Here, we test whether the A. africanus hand, as judged by metacarpal shaft and articular proportions, was similarly apelike. More specifically, did A. africanus have a short and narrow first metacarpal (MC1) relative to the other metacarpals? Proportions of both MC breadth and length were considered: the geometric mean (GM) of articular and midshaft measurements of MC1 breadth was compared to those of MC2-4, and MC1 length was compared to MC3 length individually and also to the GM of MC2 and 3 lengths. To compare the extant hominoid sample with an incomplete A. africanus fossil record (11 attributed metacarpals), a resampling procedure imposed sampling constraints on the comparative groups that produced composite intrahand ratios. Resampled ratios in the extant sample are not significantly different from actual ratios based on associated elements, demonstrating the methodological appropriateness of this technique. Australopithecus africanus metacarpals do not differ significantly from the great apes in the comparison of breadth ratios but are significantly greater than chimpanzees and orangutans in both measures of relative length. Conversely, A. africanus has a significantly smaller breadth ratio than modern humans, but does not significantly differ from this group in either measure of relative length. We conclude that the first metacarpals of A. africanus are more apelike in relative breadth while also being more humanlike in relative length, a finding consistent with previous work on A. afarensis hand proportions. This configuration would have likely promoted a high degree of manipulative dexterity, but the relatively slender, apelike first metacarpal suggests that A. africanus did not place the same mechanical demands on the thumb as more recent, stone-tool-producing hominins. PMID:18191176
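The resampling of composite intrahand ratios can be illustrated with a toy example. The measurements below are hypothetical, not the actual A. africanus data; the sketch only shows how unassociated elements (an MC1 pool and an MC2-4 pool) are recombined into a distribution of geometric-mean breadth ratios.

```python
import numpy as np

rng = np.random.default_rng(1)

def gm(x):
    """Geometric mean of an array of measurements."""
    return np.exp(np.log(x).mean())

# Hypothetical breadth measurements (mm) for unassociated metacarpals,
# standing in for a fossil sample where elements cannot be matched to hands.
mc1_breadth = np.array([10.1, 9.4, 10.8, 9.9])
mc24_breadth = np.array([8.2, 8.9, 8.5, 9.1, 8.7, 8.4])

# Composite ratio: draw one MC1 value and three MC2-4 values per iteration,
# mimicking a constraint-based resampling of composite intrahand ratios.
ratios = np.empty(10_000)
for i in range(ratios.size):
    r1 = rng.choice(mc1_breadth)
    r24 = rng.choice(mc24_breadth, size=3, replace=True)
    ratios[i] = r1 / gm(r24)

lo, hi = np.percentile(ratios, [2.5, 97.5])
```

Comparing such resampled intervals across taxa is what allows an incomplete fossil sample to be tested against extant groups whose ratios come from associated hands.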
A spatiotemporal optimization model for the evacuation of the population exposed to flood hazard
NASA Astrophysics Data System (ADS)
Alaeddine, H.; Serrhini, K.; Maizia, M.
2015-03-01
Managing the crisis caused by natural disasters, and especially by floods, requires the development of effective evacuation systems. An effective evacuation system must take into account certain constraints, including those related to the traffic network, accessibility, human resources, and material equipment (vehicles, collecting points, etc.). The main objective of this work is to provide assistance to technical services and rescue forces in terms of accessibility by offering itineraries for the rescue and evacuation of people and property. We consider in this paper the evacuation of a medium-sized urban area exposed to flood hazard. In case of inundation, most people will be evacuated in their own vehicles. Two evacuation types are addressed in this paper: (1) a preventive evacuation based on a flood forecasting system and (2) an evacuation during the disaster based on flooding scenarios. The two study sites to which the developed evacuation model is applied are the Tours valley (Fr, 37), which is protected by a set of dikes (preventive evacuation), and the Gien valley (Fr, 45), which benefits from a low rate of flooding (evacuation before and during the disaster). Our goal is to construct, for each of these two sites, a chronological evacuation plan, i.e., to compute for each individual the departure date and the path to reach the assembly point (also called shelter) according to a priority list established for this purpose. The evacuation plan must avoid congestion on the road network. Here we present a spatiotemporal optimization model (STOM) dedicated to the evacuation of populations exposed to natural disasters, and more specifically to flood risk.
NASA Astrophysics Data System (ADS)
Konstantinou, Konstantinos
2015-04-01
Volcanic Ballistic Projectiles (VBPs) are rock/magma fragments of variable size that are ejected from active vents during explosive eruptions. VBPs follow almost parabolic trajectories that are influenced by gravity and drag forces before they reach their impact point on the Earth's surface. Owing to their high temperature and kinetic energies, VBPs can potentially cause human casualties, severe damage to buildings as well as trigger fires. Since the Minoan eruption the Santorini caldera has produced several smaller (VEI = 2-3) vulcanian eruptions, the last of which occurred in 1950, while in 2011 it also experienced significant deformation/seismicity even though no eruption eventually occurred. In this work, an eruptive model appropriate for vulcanian eruptions is used to estimate initial conditions (ejection height, velocity) for VBPs assuming a broad range of gas concentration/overpressure in the vent. These initial conditions are then inserted into a ballistic model for the purpose of calculating the maximum range of VBPs for different VBP sizes (0.35-3 m), varying drag coefficient as a function of VBP speed and varying air density as a function of altitude. In agreement with previous studies a zone of reduced drag is also included in the ballistic calculations that is determined based on the size of vents that were active in the Kameni islands during previous eruptions (< 1 km). Results show that the horizontal range of VBPs varies between 0.9-3 km and greatly depends on gas concentration, the extent of the reduced drag zone and the size of VBP. Hazard maps are then constructed by taking into account the maximum horizontal range values as well as potential locations of eruptive vents along a NE-SW direction around the Kameni islands (the so-called "Kameni line").
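A minimal version of such a ballistic calculation, including a reduced-drag zone near the vent, might look like the following. The drag law is deliberately simplified relative to the study (constant air density, constant drag coefficient, spherical blocks), and all numerical values are illustrative rather than the study's parameters.

```python
import math

def vbp_range(v0, angle_deg, diameter, cd=1.0, rho_rock=2500.0,
              reduced_drag_radius=1000.0, dt=0.01):
    """Horizontal range (m) of a spherical ballistic block launched from the
    vent. Drag is switched off inside an assumed reduced-drag zone, then a
    quadratic drag law applies. Semi-implicit Euler integration."""
    rho_air = 1.2                              # simplification: constant density
    r = diameter / 2.0
    mass = rho_rock * (4.0 / 3.0) * math.pi * r**3
    area = math.pi * r**2
    x, z = 0.0, 0.0
    vx = v0 * math.cos(math.radians(angle_deg))
    vz = v0 * math.sin(math.radians(angle_deg))
    while z >= 0.0:
        speed = math.hypot(vx, vz)
        # no drag while still inside the reduced-drag zone around the vent
        if math.hypot(x, z) < reduced_drag_radius:
            k = 0.0
        else:
            k = 0.5 * rho_air * cd * area / mass
        vx += -k * speed * vx * dt
        vz += (-9.81 - k * speed * vz) * dt
        x += vx * dt
        z += vz * dt
    return x

# Larger blocks carry more momentum per unit drag area, so they travel farther.
big = vbp_range(200.0, 45.0, 3.0)
small = vbp_range(200.0, 45.0, 0.35)
```

With the reduced-drag zone made effectively infinite the integrator should reproduce the vacuum range v0² sin(2θ)/g, which is a convenient sanity check on the time stepping.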
Landslide tsunami hazard in New South Wales, Australia: novel observations from 3D modelling
NASA Astrophysics Data System (ADS)
Power, Hannah; Clarke, Samantha; Hubble, Tom
2015-04-01
This paper examines the potential of tsunami inundation generated from two case study sites of submarine mass failures on the New South Wales coast of Australia. Two submarine mass failure events are investigated: the Bulli Slide and the Shovel Slide. Both slides are located approximately 65 km southeast of Sydney and 60 km east of the township of Wollongong. The Bulli Slide (~20 km3) and the Shovel Slide (7.97 km3) correspond to the two largest identified erosional-surface submarine landslide scars of the NSW continental margin (Glenn et al. 2008; Clarke 2014) and represent examples of large to very large submarine landslide scars. The Shovel Slide is a moderately thick (80-165 m), moderately wide to wide (4.4 km) slide located in 880 m water depth, and the Bulli Slide is an extremely thick (200-425 m), very wide (8.9 km) slide located in 1500 m water depth. Previous work on the east Australian margin (Clarke et al., 2014) and elsewhere (Harbitz et al., 2013) suggests that submarine landslides similar to the Bulli Slide or the Shovel Slide are volumetrically large enough and occur at shallow enough water depths (400-2500 m) to generate substantial tsunamis that could cause widespread damage on the east Australian coast and threaten coastal communities (Burbidge et al. 2008; Clarke 2014; Talukder and Volker 2014). Currently, the tsunamigenic potential of these two slides has only been investigated using 2D modelling (Clarke 2014), and to date it has been difficult to establish with certainty the onshore tsunami surge characteristics for the submarine landslides. To address this knowledge gap, the forecast inundation resulting from these two mass failure events was investigated using a three-dimensional model (ANUGA) that predicts water flow resulting from natural hazard events such as tsunami (Nielsen et al., 2005). The ANUGA model solves the two-dimensional shallow water wave equations and accurately models the process of wetting and drying thus
NASA Astrophysics Data System (ADS)
Taheri Andani, Masood; Elahinia, Mohammad
2014-01-01
In this work, a modified 3D model is presented to capture the multi-axial behavior of superelastic shape memory alloys (SMAs) under quasi-static isothermal or dynamic loading conditions. General experimental based equivalent stress and strain terms are introduced and improved flow rule and transformation surfaces are presented. The 3D constitutive equations are found for both isothermal and dynamic loading states. An extended experimental study is conducted on NiTi thin walled tubes to investigate the performance of the model. The proposed approach is shown to be able to capture the SMA response better than the original model in tension-torsion loading conditions.
The SCEC Community Modeling Environment(SCEC/CME): A Collaboratory for Seismic Hazard Analysis
NASA Astrophysics Data System (ADS)
Maechling, P. J.; Jordan, T. H.; Minster, J. B.; Moore, R.; Kesselman, C.
2005-12-01
The SCEC Community Modeling Environment (SCEC/CME) Project is an NSF-supported Geosciences/IT partnership that is actively developing an advanced information infrastructure for system-level earthquake science in Southern California. This partnership includes SCEC, USC's Information Sciences Institute (ISI), the San Diego Supercomputer Center (SDSC), the Incorporated Research Institutions for Seismology (IRIS), and the U.S. Geological Survey. The goal of the SCEC/CME is to develop seismological applications and information technology (IT) infrastructure to support the development of Seismic Hazard Analysis (SHA) programs and other geophysical simulations. The SHA application programs developed on the Project include a Probabilistic Seismic Hazard Analysis system called OpenSHA. OpenSHA computational elements that are currently available include a collection of attenuation relationships and several Earthquake Rupture Forecasts (ERFs). Geophysicists in the collaboration have also developed Anelastic Wave Models (AWMs) using both finite-difference and finite-element approaches. Earthquake simulations using these codes have been run for a variety of earthquake sources. Rupture Dynamic Model (RDM) codes have also been developed that simulate friction-based fault slip. The SCEC/CME collaboration has also developed IT software and hardware infrastructure to support the development, execution, and analysis of these SHA programs. To support computationally expensive simulations, we have constructed a grid-based scientific workflow system. Using the SCEC grid, project collaborators can submit computations from the SCEC/CME servers to High Performance Computers at USC and TeraGrid High Performance Computing Centers. Data generated and archived by the SCEC/CME is stored in a digital library system, the Storage Resource Broker (SRB). This system provides a robust and secure way of maintaining the association between the data sets and their metadata. To provide an easy
NASA Astrophysics Data System (ADS)
Grieco, F.; Capra, L.; Groppelli, G.; Norini, G.
2007-05-01
The present study concerns the numerical modeling of debris avalanches on Nevado de Toluca Volcano (Mexico) using the TITAN2D simulation software, and its application to creating hazard maps. Nevado de Toluca is an andesitic to dacitic stratovolcano of Late Pliocene-Holocene age, located in central México near the cities of Toluca and México City; its past activity has endangered an area with more than 25 million inhabitants today. The present work is based upon data collected during extensive field work aimed at producing the geological map of Nevado de Toluca at 1:25,000 scale. The activity of the volcano developed from 2.6 Ma until 10.5 ka with both effusive and explosive events; Nevado de Toluca has experienced long phases of inactivity characterized by erosion and the emplacement of debris flow and debris avalanche deposits on its flanks. The largest epiclastic events in the history of the volcano are wide debris flows and debris avalanches that occurred between 1 Ma and 50 ka, during a prolonged hiatus in eruptive activity. Other minor events happened mainly during the most recent volcanic activity (less than 50 ka), characterized by magmatic and tectonically induced instability of the summit dome complex. According to the most recent tectonic analysis, the active transtensive kinematics of the E-W Tenango Fault System had a strong influence on the preferential directions of the last three documented lateral collapses, which generated the Arroyo Grande and Zaguàn debris avalanche deposits towards the east and the Nopal debris avalanche deposit towards the west. Analysis of the data collected during the field work made it possible to create a detailed GIS database of the spatial and temporal distribution of debris avalanche deposits on the volcano. Flow models, performed with the TITAN2D software developed by GMFG at Buffalo, were entirely based upon the information stored in the geological database. The modeling software is built upon equations
Lima, M Lourdes; Romanelli, Asunción; Massone, Héctor E
2013-06-01
This paper gives an account of the implementation of a decision support system for assessing aquifer pollution hazard and prioritizing subwatersheds for groundwater resources management in the southeastern Pampa plain of Argentina. The use of this system is demonstrated with an example from the Dulce Stream Basin (1,000 km(2) encompassing 27 subwatersheds), which has a high level of agricultural activity and extensive available data regarding aquifer geology. In the logic model, aquifer pollution hazard is assessed as a function of two primary topics: groundwater and soil conditions. This logic model shows the state of each evaluated landscape with respect to aquifer pollution hazard, based mainly on the parameters of the DRASTIC and GOD models. The decision model allows prioritizing subwatersheds for groundwater resources management according to three main criteria: farming activities, agrochemical application, and irrigation use. Stakeholder participation, through interviews, in combination with expert judgment was used to select and weight each criterion. The resulting subwatershed priority map, obtained by combining the logic and decision models, allowed the identification of five subwatersheds in the upper and middle basin as the main aquifer protection areas. The results fit the natural conditions of the basin reasonably well, identifying subwatersheds with shallow water depth, loam-loam silt texture soil media, and pasture land cover in the middle basin, and others with intensive agricultural activity coinciding with the natural recharge area of the aquifer system. Major difficulties in, and some recommendations for, applying this methodology in real-world situations are discussed. PMID:23054292
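The DRASTIC component of the logic model combines seven rated parameters with fixed weights. A minimal sketch, using the standard DRASTIC weights but purely hypothetical ratings for two subwatersheds, illustrates why shallow water depth pushes the hazard score up:

```python
# Standard DRASTIC weights for the seven parameters: Depth to water, net
# Recharge, Aquifer media, Soil media, Topography, Impact of the vadose
# zone, and hydraulic Conductivity.
DRASTIC_WEIGHTS = {"D": 5, "R": 4, "A": 3, "S": 2, "T": 1, "I": 5, "C": 3}

def drastic_index(ratings):
    """Weighted sum of 1-10 parameter ratings; higher = more vulnerable."""
    return sum(DRASTIC_WEIGHTS[p] * r for p, r in ratings.items())

# Hypothetical ratings: two subwatersheds identical except for water depth
# (D rated 9 for shallow water table vs. 3 for a deep one).
shallow_wt = drastic_index({"D": 9, "R": 6, "A": 6, "S": 7, "T": 10, "I": 6, "C": 4})
deep_wt = drastic_index({"D": 3, "R": 6, "A": 6, "S": 7, "T": 10, "I": 6, "C": 4})
```

Because D carries the maximum weight of 5, the shallow-water-table subwatershed scores 30 points higher, which is consistent with the paper's finding that shallow water depth flags priority protection areas.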
Dankers, Rutger; Arnell, Nigel W.; Clark, Douglas B.; Falloon, Pete D.; Fekete, Balázs M.; Gosling, Simon N.; Heinke, Jens; Kim, Hyungjun; Masaki, Yoshimitsu; Satoh, Yusuke; Stacke, Tobias; Wada, Yoshihide; Wisser, Dominik
2014-01-01
Climate change due to anthropogenic greenhouse gas emissions is expected to increase the frequency and intensity of precipitation events, which is likely to affect the probability of flooding into the future. In this paper we use river flow simulations from nine global hydrology and land surface models to explore uncertainties in the potential impacts of climate change on flood hazard at global scale. As an indicator of flood hazard we looked at changes in the 30-y return level of 5-d average peak flows under representative concentration pathway RCP8.5 at the end of this century. Not everywhere does climate change result in an increase in flood hazard: decreases in the magnitude and frequency of the 30-y return level of river flow occur at roughly one-third (20–45%) of the global land grid points, particularly in areas where the hydrograph is dominated by the snowmelt flood peak in spring. In most model experiments, however, an increase in flooding frequency was found in more than half of the grid points. The current 30-y flood peak is projected to occur in more than 1 in 5 y across 5–30% of land grid points. The large-scale patterns of change are remarkably consistent among impact models and even the driving climate models, but at local scale and in individual river basins there can be disagreement even on the sign of change, indicating large modeling uncertainty which needs to be taken into account in local adaptation studies. PMID:24344290
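A return-level calculation of the kind summarized above can be sketched with a Gumbel fit to annual maxima. The flow series below is synthetic and the method-of-moments fit is a simplification of what flood-frequency studies actually use (typically GEV with L-moments or maximum likelihood):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic record of annual maxima of 5-day average flows (m^3/s),
# drawn from a Gumbel distribution with assumed location 800 and scale 120.
mu_true, beta_true = 800.0, 120.0
annual_peaks = mu_true - beta_true * np.log(-np.log(rng.uniform(size=200)))

# Method-of-moments Gumbel fit: scale = s*sqrt(6)/pi, location = mean - gamma*scale.
euler_gamma = 0.5772156649
beta_hat = annual_peaks.std(ddof=1) * np.sqrt(6.0) / np.pi
mu_hat = annual_peaks.mean() - euler_gamma * beta_hat

def return_level(T):
    """Flow exceeded on average once every T years under the fitted Gumbel."""
    return mu_hat - beta_hat * np.log(-np.log(1.0 - 1.0 / T))

rl30 = return_level(30.0)
```

Comparing rl30 computed from a historical-period simulation with the frequency of its exceedance in a future-period simulation is, in essence, how a change in flood hazard of the kind described (the 30-y flood occurring more often than 1 in 5 y) is quantified.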
NASA Astrophysics Data System (ADS)
Zhang, Baoqing; Wu, Pute; Zhao, Xining; Wang, Yubao; Gao, Xiaodong; Cao, Xinchun
2013-10-01
Drought is a complex natural hazard that is poorly understood and difficult to assess. This paper describes a VIC-PDSI model approach to understanding drought, in which the Variable Infiltration Capacity (VIC) model was combined with the Palmer Drought Severity Index (PDSI). Simulated results obtained using the VIC model were used to replace the output of the more conventional two-layer bucket-type model for hydrological accounting, and a two-class-based procedure for calibrating the characteristic climate coefficient (K_j) was introduced to allow a more reliable computation of the PDSI. The VIC-PDSI model was used in conjunction with GIS technology to create a new drought assessment index (DAI) that provides a comprehensive overview of drought duration, intensity, frequency, and spatial extent. This new index was applied to drought hazard assessment across six subregions of the Loess Plateau. The results show that the DAI over the whole Loess Plateau ranged between 11 and 26 (higher DAI values indicate a more severe drought hazard level). Drought hazards in the upper reaches of the Yellow River were more severe than those in the middle reaches. The drought-prone regions of the study area were mainly concentrated in the Inner Mongolian small rivers and the Zuli and Qingshui River basins, while drought hazards in the drainage area between Hekouzhen-Longmen and the Weihe River basin were relatively mild during 1971-2010. The most serious drought vulnerabilities were associated with the areas around Lanzhou, Zhongning, and Yinchuan, where the development of water-saving irrigation is the most direct and effective way to defend against and reduce losses from drought. For the relatively humid regions, it will be necessary to establish rainwater harvesting systems, which could help to relieve the risk of water shortage and guarantee regional food security. Because the DAI considers the multiple characteristics of drought duration, intensity, frequency
NASA Astrophysics Data System (ADS)
Anton, Jose M.; Grau, Juan B.; Tarquis, Ana M.; Sanchez, Elena; Andina, Diego
2014-05-01
The authors have used Mathematical Decision Models (MDM) to improve knowledge and planning for large natural or administrative areas in which natural soils, climate, and agricultural and forest uses are the main factors, human resources and outcomes are important, and natural hazards are relevant. In one line of work they contributed to the qualification of lands of the Community of Madrid (CM), an administrative area in the centre of Spain containing a band of mountains to the north, part of the Iberian plateau and river terraces in its centre, and also the Madrid metropolis. Starting from an official UPM study for the CM that qualified lands using an FAO model requiring minimums across a whole set of Soil Science criteria, the authors first derived a complementary additive qualification from these criteria, and later tried an intermediate qualification combining both using fuzzy logic. Together with colleagues from Argentina and elsewhere who are in contact with local planners, the authors were also involved in the delimitation of regions and the selection of management entities for them. At these general levels they adopted multi-criteria MDM, using a weighted PROMETHEE and also an ELECTRE-I with the same elicited weights for the criteria and data, alongside AHP using Expert Choice, with pairwise comparisons among similar criteria structured in two levels. The alternatives depend on the case study, and these areas with monsoon climates face natural hazards that are decisive for their selection and qualification, with an initial matrix used for ELECTRE and PROMETHEE. In the natural area of Arroyos Menores, south of the town of Río Cuarto, with the subarea of La Colacha to the north, the loess lands are rich but now suffer from water erosion forming regressive ditches that are spoiling them, so land-use alternatives must consider soil conservation and hydraulic management actions. The soils may be used in diverse, incompatible ways, such as autochthonous forest, high-value forest, traditional
Assessment of erosion hazard after recurrence fires with the RUSLE 3D MODEL
NASA Astrophysics Data System (ADS)
Vecín-Arias, Daniel; Palencia, Covadonga; Fernández Raga, María
2016-04-01
The objective of this work is to determine whether there is more soil erosion after the recurrence of several forest fires in an area. To that end, an area of 22,130 ha in the northwest of the Iberian Peninsula was studied because it has a high frequency of fires. The assessment of erosion hazard was calculated at several points in time using Geographic Information Systems (GIS). The area has been divided into several plots according to the number of times they have been burnt in the past 15 years. Given the complexity of making a detailed study of such a large area, and since information is not available annually, it was necessary to select the most interesting moments. In August 2012 the most aggressive and extensive fire in the area occurred, so the study focused on the erosion hazard for 2011 and 2014, the dates before and after the 2012 fire for which orthophotos are available. The RUSLE3D model (Revised Universal Soil Loss Equation) was used to calculate maps of erosion losses. This model improves on the traditional USLE (Wischmeier and D., 1965) because it studies the influence of concavity/convexity (Renard et al., 1997) and improves the estimation of the slope factor LS (Renard et al., 1991). It is also one of the most commonly used models in the literature (Mitasova et al., 1996; Terranova et al., 2009). The tools used are free and accessible: the GIS "gvSIG" (http://www.gvsig.com/es), with metadata taken from the Spatial Data Infrastructure of Spain webpage (IDEE, 2016). The RUSLE model nevertheless has many critics, some of whom suggest that it serves only for comparisons between areas, not for calculating absolute soil-loss values; these authors argue that in field measurements the actual recovered eroded soil can amount to about one-third of the values obtained with the model (Šúri et al., 2002). The study of the area shows that the error detected by the critics could come from
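The RUSLE family of models multiplies empirical factors, and a fire mainly raises the cover-management factor C once vegetation is removed. A minimal sketch with illustrative, non-site-specific factor values (not the study's calibrated inputs) shows the comparison-between-conditions use of the model that even its critics accept:

```python
# Minimal RUSLE sketch. All factor values below are illustrative only:
# R = rainfall erosivity, K = soil erodibility, LS = slope length/steepness,
# C = cover-management, P = support practice.
def rusle_soil_loss(R, K, LS, C, P=1.0):
    """Mean annual soil loss A (t ha^-1 yr^-1) = R * K * LS * C * P."""
    return R * K * LS * C * P

# Hypothetical pre-/post-fire comparison on the same slope: burning removes
# the protective cover, so only the C factor changes.
pre_fire = rusle_soil_loss(R=900.0, K=0.035, LS=4.2, C=0.05)
post_fire = rusle_soil_loss(R=900.0, K=0.035, LS=4.2, C=0.35)
```

RUSLE3D replaces the slope-length term in LS with an upslope-contributing-area term so that convergent (concave) terrain concentrates flow; the multiplicative structure of the equation is unchanged.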
Bouman, Peter; Meng, Xiao-Li; Dignam, James; Dukić, Vanja
2014-01-01
In multicenter studies, one often needs to make inference about a population survival curve based on multiple, possibly heterogeneous survival data from individual centers. We investigate a flexible Bayesian method for estimating a population survival curve based on a semiparametric multiresolution hazard model that can incorporate covariates and account for center heterogeneity. The method yields a smooth estimate of the survival curve for “multiple resolutions” or time scales of interest. The Bayesian model used has the capability to accommodate general forms of censoring and a priori smoothness assumptions. We develop a model checking and diagnostic technique based on the posterior predictive distribution and use it to identify departures from the model assumptions. The hazard estimator is used to analyze data from 110 centers that participated in a multicenter randomized clinical trial to evaluate tamoxifen in the treatment of early stage breast cancer. Of particular interest are the estimates of center heterogeneity in the baseline hazard curves and in the treatment effects, after adjustment for a few key clinical covariates. Our analysis suggests that the treatment effect estimates are rather robust, even for a collection of small trial centers, despite variations in center characteristics. PMID:25620824
Photodetectors for Scintillator Proportionality Measurement
Moses, William W.; Choong, Woon-Seng; Hull, Giulia; Payne, Steve; Cherepy, Nerine; Valentine, J.D.
2010-10-18
We evaluate photodetectors for use in a Compton Coincidence apparatus designed for measuring scintillator proportionality. There are many requirements placed on the photodetector in these systems, including active area, linearity, and the ability to accurately measure low light levels (which implies high quantum efficiency and high signal-to-noise ratio). Through a combination of measurement and Monte Carlo simulation, we evaluate a number of potential photodetectors, especially photomultiplier tubes and hybrid photodetectors. Of these, we find that the most promising devices available are photomultiplier tubes with high (~50%) quantum efficiency, although hybrid photodetectors with high quantum efficiency would be preferable.
Modeling and forecasting tephra hazards at Redoubt Volcano, Alaska, during 2009 unrest and eruption
NASA Astrophysics Data System (ADS)
Mastin, L. G.; Denlinger, R. P.; Wallace, K. L.; Schaefer, J. R.
2009-12-01
In late 2008, Redoubt Volcano, on the west coast of Alaska’s Cook Inlet, began a period of unrest that culminated in more than 19 small tephra-producing events between March 19 and April 4, 2009, followed by growth of a lava dome whose volume now exceeds 70 million cubic meters. The explosive events lasted from <1 to 31 minutes, sent tephra columns to heights of 19 km asl, and emitted dense-rock (DRE) tephra volumes up to several million cubic meters. Tephra fall affected transportation and infrastructure throughout Cook Inlet, including the Anchorage metropolitan area. The months of unrest that preceded the first explosive event allowed us to develop tools to forecast tephra hazards. As described in an accompanying abstract, colleagues at the University of Pisa produced automated, daily tephra-fall forecast maps using the 3-D VOL-CALPUFF model with input scenarios that represented likely event sizes and durations. Tephra-fall forecast maps were also generated every six hours for hypothetical events of 10M m³ volume DRE using the 2-D model ASHFALL, and relationships between hypothetical plume height and eruption rate were evaluated four times daily under then-current atmospheric conditions using the program PLUMERIA. Eruptive deposits were mapped and isomass contours constructed for the two largest events, March 24 (0340-0355Z) and April 4 (1358-1429Z), which produced radar-determined plume heights of 18.3 and 15.2 km asl (~15.6 and 12.5 km above the vent), and tephra volumes (DRE) of 6.3M and 3.1M m³, respectively. For the volumetric eruption rates calculated from mapped erupted volume and seismic duration (V = 6.2×10³ and 1.7×10³ m³/s DRE), measured plume heights H above the vent fall within 10% of the empirical best-fit curve H = 1.67V^0.259 published in the book Volcanic Plumes by Sparks et al. (1997, eq. 5.1). The plume heights are slightly higher than (but still within 13% of) the 14.6 and 11.1 km predicted by PLUMERIA under the existing atmospheric conditions
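The empirical curve cited above is easy to check numerically. A minimal sketch (function and variable names are ours) evaluates H = 1.67V^0.259 at the two reported eruption rates:

```python
def plume_height_km(eruption_rate_m3_s):
    """Plume height above the vent (km) from the dense-rock volumetric
    eruption rate (m^3/s), using the empirical best-fit curve
    H = 1.67 * V^0.259 of Sparks et al. (1997, eq. 5.1)."""
    return 1.67 * eruption_rate_m3_s ** 0.259

# Eruption rates reported for the two mapped Redoubt events:
h_mar24 = plume_height_km(6.2e3)  # radar height ~15.6 km above the vent
h_apr04 = plume_height_km(1.7e3)  # radar height ~12.5 km above the vent
```

Both predictions land within 10% of the radar-derived heights, consistent with the abstract's claim.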
NASA Astrophysics Data System (ADS)
Pagano, Alessandro; Pluchinotta, Irene; Giordano, Raffaele; Vurro, Michele
2016-04-01
Resilience has recently become a key concept and a crucial paradigm in the analysis of the impacts of natural disasters, particularly for Lifeline Systems (LS). Traditional risk management approaches require precise knowledge of all potential hazards and a full understanding of the interconnections among different infrastructures, based on analysis of past events and trends. Nevertheless, due to the inner complexity of LS, their interconnectedness and the dynamic context in which they operate (i.e. technology, economy and society), it is difficult to gain a complete understanding of the processes influencing vulnerabilities and threats. Resilience thinking therefore addresses the complexity of large integrated systems and the uncertainty of future threats, emphasizing the absorbing, adapting and responsive behavior of the system; resilience thinking approaches focus on the capability of the system to deal with the unforeseeable. The increasing awareness of the role played by LS has led governmental agencies and institutions to develop resilience management strategies. Risk-prone areas, such as cities, are highly dependent on infrastructures providing essential services that support societal functions, safety, economic prosperity and quality of life. Among the LS, drinking water supply is critical for supporting citizens during emergency and recovery, since a disruption could have a range of serious societal impacts. A well-known method to assess LS resilience is the TOSE approach. Its most interesting feature is the integration of four dimensions: Technical, Organizational, Social and Economic. All four contribute to the resilience level of an infrastructural system and should therefore be quantitatively assessed. Several studies have underlined that the lack of integration among the different dimensions composing the resilience concept may contribute to mismanagement of LS in case of natural disasters
Assessment of erosion hazard after recurrence fires with the RUSLE 3D MODEL
NASA Astrophysics Data System (ADS)
Vecín-Arias, Daniel; Palencia, Covadonga; Fernández Raga, María
2016-04-01
The objective of this work is to determine whether soil erosion increases after the recurrence of several forest fires in an area. To that end, an area of 22 130 ha in the northwest of the Iberian Peninsula was studied because of its high frequency of fires. The erosion hazard was assessed at several points in time using Geographic Information Systems (GIS). The area was divided into several plots according to the number of times they had burnt in the past 15 years. Because a detailed study of such a large area is complex and information is not available annually, it was necessary to select the most relevant moments. In August 2012 the most aggressive and extensive fire in the area occurred, so the study focused on the erosion hazard for 2011 and 2014, the dates before and after the 2012 fire for which orthophotos are available. The RUSLE3D (Revised Universal Soil Loss Equation 3D) model was used to calculate erosion-loss maps. This model improves on the traditional USLE (Wischmeier and Smith, 1965) because it accounts for the influence of concavity/convexity (Renard et al., 1997) and improves the estimation of the slope factor LS (Renard et al., 1991). It is also one of the most commonly used models in the literature (Mitasova et al., 1996; Terranova et al., 2009). The tools used are free and accessible: the GIS "gvSIG" (http://www.gvsig.com/es), with metadata taken from the Spatial Data Infrastructure of Spain webpage (IDEE, 2016). However, the RUSLE model has many critics; some authors suggest that it serves only for comparisons between areas, not for calculating absolute soil-loss values, arguing that the eroded soil actually recovered in field measurements can amount to about one-third of the values obtained with the model (Šúri et al., 2002). The study of the area shows that the error detected by the critics could come from
... such as lead and mercury, chemicals such as pesticides, cigarettes, some viruses, and alcohol. For men, a reproductive hazard can affect the sperm. For a woman, a reproductive hazard can cause different effects during pregnancy, depending on when she is exposed. ...
NASA Astrophysics Data System (ADS)
Schneider, D.; Huggel, C.; Cochachin, A.; Guillén, S.; García, J.
2014-01-01
Recent warming has had enormous impacts on glaciers and high-mountain environments. Hazards have changed or new ones have emerged, including those from glacier lakes that form as glaciers retreat. The Andes of Peru have repeatedly been severely impacted by glacier lake outburst floods in the past. An important recent event occurred in the Cordillera Blanca in 2010 when an ice avalanche impacted a glacier lake and triggered an outburst flood that affected the downstream communities and city of Carhuaz. In this study we evaluate how such complex cascades of mass movement processes can be simulated by coupling different physically-based numerical models. We furthermore develop an approach that allows us to elaborate corresponding hazard maps according to existing guidelines for debris flows and based on modelling results and field work.
NASA Astrophysics Data System (ADS)
Khare, S.; Bonazzi, A.; Mitas, C.; Jewson, S.
2014-08-01
In this paper, we present a novel framework for modelling clustering in natural hazard risk models. The framework is founded on physical principles, whereby large-scale oscillations in the physical system are the source of non-Poissonian (clustered) frequency behaviour. We focus on a particular mathematical implementation of the "Super-Cluster" methodology that we introduce. This mathematical framework has a number of advantages, including tunability to the problem at hand and the ability to model cross-event correlation. Using European windstorm data as an example, we show that the historical record exhibits strong evidence of clustering. We then develop Poisson and clustered simulation models for the data, demonstrating clearly the superiority of the clustered model, which we implement using the Poisson-mixtures approach. We then discuss the implications of including clustering in models of prices on catXL contracts, one of the most commonly used mechanisms for transferring risk between primary insurers and reinsurers. This paper provides a number of new insights into the impact clustering has on modelled catXL contract prices. The simple model presented here provides an insightful starting point for practitioners of natural hazard risk modelling.
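As an illustration of the Poisson-mixtures idea rather than the authors' implementation, the sketch below draws annual event counts whose rate switches between two hypothetical climate states. The result shows the overdispersion (variance exceeding the mean) that distinguishes clustered from pure-Poisson frequency behaviour; all rates and weights are invented:

```python
import math
import random
import statistics

def sample_annual_counts(n_years, rates, weights, seed=0):
    """Draw yearly event counts from a two-state Poisson mixture.

    Each year a hypothetical large-scale climate state is chosen with
    the given weights, and the count is Poisson with that state's rate.
    """
    rng = random.Random(seed)
    counts = []
    for _ in range(n_years):
        rate = rng.choices(rates, weights=weights)[0]
        # Poisson draw via Knuth's multiplication method
        threshold = math.exp(-rate)
        k, p = 0, 1.0
        while True:
            p *= rng.random()
            if p <= threshold:
                break
            k += 1
        counts.append(k)
    return counts

counts = sample_annual_counts(10000, rates=[2.0, 8.0], weights=[0.7, 0.3])
mean = statistics.mean(counts)
var = statistics.pvariance(counts)
# Mixture mean is 0.7*2 + 0.3*8 = 3.8, but the variance is inflated by
# the spread between the two rates -- the clustering signature.
```

A pure Poisson model fit to the same mean would have variance equal to the mean and so would underestimate the frequency of extreme years.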
Khosravi, Bahareh; Pourahmad, Saeedeh; Bahreini, Amin; Nikeghbalian, Saman; Mehrdad, Goli
2015-01-01
Background: Transplantation is the only treatment for patients with liver failure. Since the therapy imposes high expenses on patients and the community, identifying factors that affect the survival of such patients after transplantation is valuable. Objectives: The current study attempted to model the survival of patients (two years old and above) after liver transplantation using neural network and Cox proportional hazards (Cox PH) regression models. The event is defined as death due to complications of liver transplantation. Patients and Methods: In a historical cohort study, the clinical findings of 1168 patients who underwent liver transplant surgery (from March 2008 to March 2013) at Shiraz Namazee Hospital Organ Transplantation Center, Shiraz, Southern Iran, were used. To model the one- to five-year survival of these patients, a Cox PH regression model and a three-layer feed-forward artificial neural network (ANN) were applied to the data separately, and their prediction accuracy was compared using the area under the receiver operating characteristic (ROC) curve. Furthermore, the Kaplan-Meier method was used to estimate the survival probabilities in different years. Results: The estimated one- to five-year survival probabilities for the patients were 91%, 89%, 85%, 84%, and 83%, respectively. The areas under the ROC curve were 86.4% and 80.7% for the ANN and Cox PH models, respectively. In addition, the prediction accuracy for the ANN and Cox PH methods was equal, at 92.73%. Conclusions: The present study found more accurate results for the ANN method than for the Cox PH model in analyzing the survival of patients with liver transplantation. Furthermore, the ordering of factors affecting patients' survival after transplantation was clinically more acceptable. The large dataset with few missing values is an advantage of this study and makes the results more reliable. PMID:26500682
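The Kaplan-Meier estimates mentioned above can be computed in a few lines. This is a generic sketch of the estimator on toy data, not the study's dataset:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimates S(t) at each distinct event time.

    times  -- follow-up time per subject
    events -- 1 if the event (death) occurred, 0 if censored
    Returns a list of (time, survival_probability) pairs, where each
    event time multiplies S by (1 - deaths / number at risk).
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = at_t = 0
        # Group all subjects tied at time t (deaths and censorings).
        while i < len(data) and data[i][0] == t:
            at_t += 1
            deaths += data[i][1]
            i += 1
        if deaths:
            s *= 1.0 - deaths / n_at_risk
            curve.append((t, s))
        n_at_risk -= at_t
    return curve

# Toy example: deaths at t=1, 2, 3; censorings at t=2 and t=4.
curve = kaplan_meier([1, 2, 2, 3, 4], [1, 1, 0, 1, 0])
# Survival drops to roughly 0.8, 0.6, then 0.3 at the three event times.
```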
Hydrology Analysis and Modelling for Klang River Basin Flood Hazard Map
NASA Astrophysics Data System (ADS)
Sidek, L. M.; Rostam, N. E.; Hidayah, B.; Roseli, ZA; Majid, W. H. A. W. A.; Zahari, N. Z.; Salleh, S. H. M.; Ahmad, R. D. R.; Ahmad, M. N.
2016-03-01
Flooding, a common environmental hazard worldwide, has increased in recent times as a result of climate change and urbanization, with the effects felt more in developing countries. The exposure of Tenaga Nasional Berhad (TNB) substations to flooding has likewise increased rapidly, because existing substations are located in flood-prone areas. Understanding the impact of floods on its substations, TNB has pursued non-structural mitigation by integrating the Flood Hazard Map with its substation planning. Hydrological analysis is an important part of this work, providing runoff as the input for the hydraulic modelling.
Wang, Junjie; He, Jiangtao; Chen, Honghan
2012-08-15
Groundwater contamination risk assessment is an effective tool for groundwater management. Most existing risk assessment methods only consider the basic contamination process based upon evaluations of hazards and aquifer vulnerability. In view of groundwater exploitation potentiality, including the value of contamination-threatened groundwater could provide relatively objective and targeted results to aid in decision making. This study describes a groundwater contamination risk assessment method that integrates hazards, intrinsic vulnerability and groundwater value. The hazard harmfulness was evaluated by quantifying contaminant properties and infiltrating contaminant load, the intrinsic aquifer vulnerability was evaluated using a modified DRASTIC model, and the groundwater value was evaluated based on groundwater quality and aquifer storage. Two groundwater contamination risk maps were produced by combining the above factors: a basic risk map and a value-weighted risk map. The basic risk map was produced by overlaying the hazard map and the intrinsic vulnerability map. The value-weighted risk map was produced by overlaying the basic risk map and the groundwater value map. Validation was carried out using contaminant distributions and site investigation. Using Beijing Plain, China, as an example, thematic maps of the three factors and the two risks were generated. The thematic maps suggested that landfills, gas stations and oil depots, and industrial areas were the most harmful potential contamination sources. The western and northern parts of the plain were the most vulnerable areas and had the highest groundwater value. Additionally, both the basic and value-weighted risk classes in the western and northern parts of the plain were the highest, indicating that these regions deserve priority attention. Thematic maps should be updated regularly because of the dynamic characteristics of hazards. Subjectivity and validation means in assessing the
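The two-step overlay described above can be sketched as a cell-by-cell raster operation. The multiplicative combination rule and the class values below are our assumptions for illustration; the abstract does not state the exact operator used:

```python
def overlay(a, b):
    """Cell-by-cell product of two equally sized raster grids.

    A minimal sketch of map overlay: the risk class of each cell is
    taken as the product of the two input classes (assumed rule).
    """
    return [[x * y for x, y in zip(row_a, row_b)]
            for row_a, row_b in zip(a, b)]

# Hypothetical 2x2 class grids for the three factors:
hazard        = [[1, 2], [3, 1]]
vulnerability = [[2, 2], [1, 3]]
value         = [[1, 3], [2, 1]]

basic_risk = overlay(hazard, vulnerability)          # hazard x vulnerability
value_weighted_risk = overlay(basic_risk, value)     # basic risk x value
```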
Characterizing the danger of in-channel river hazards using LIDAR and a 2D hydrodynamic model
NASA Astrophysics Data System (ADS)
Strom, M. A.; Pasternack, G. B.
2014-12-01
Despite many injuries and deaths each year worldwide, no analytically rigorous attempt exists to characterize and quantify the dangers to boaters, swimmers, fishermen, and other river enthusiasts. While designed by expert boaters, the International Scale of River Difficulty provides a whitewater classification that uses qualitative descriptions and subjective scoring. The purpose of this study was to develop an objective characterization of in-channel hazard dangers across spatial scales from a single boulder to an entire river segment for application over a wide range of discharges and use in natural hazard assessment and mitigation, recreational boating safety, and river science. A process-based conceptualization of river hazards was developed, and algorithms were programmed in R to quantify the associated dangers. Danger indicators included the passage proximity and reaction time posed to boats and swimmers in a river by three hazards: emergent rocks, submerged rocks, and hydraulic jumps or holes. The testbed river was a 12.2 km mixed bedrock-alluvial section of the upper South Yuba River between Lake Spaulding and Washington, CA in the Sierra Mountains. The segment has a mean slope of 1.63%, with 8 reaches varying from 1.07% to 3.30% slope and several waterfalls. Data inputs to the hazard analysis included sub-decimeter aerial color imagery, airborne LIDAR of the river corridor, bathymetric data, flow inputs, and a stage-discharge relation for the end of the river segment. A key derived data product was the location and configuration of boulders and boulder clusters as these were potential hazards. Two-dimensional hydrodynamic modeling was used to obtain the meter-scale spatial pattern of depth and velocity at discharges ranging from baseflow to modest flood stages. Results were produced for four discharges and included the meter-scale spatial pattern of the passage proximity and reaction time dangers for each of the three hazards investigated. These results
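The danger indicators above can be reduced to a simple sketch; the formulas and names here are illustrative assumptions, not the study's exact algorithms:

```python
def danger_indicators(distance_to_hazard_m, flow_velocity_m_s,
                      passage_width_m, boat_width_m):
    """Two sketch indicators for an in-channel hazard.

    reaction_time_s  -- seconds a drifting boater has before reaching
                        the hazard at the local flow velocity
    passage_margin_m -- lateral clearance available to pass the hazard
    """
    reaction_time_s = distance_to_hazard_m / flow_velocity_m_s
    passage_margin_m = passage_width_m - boat_width_m
    return reaction_time_s, passage_margin_m
```

Evaluated on the meter-scale depth and velocity grids from a 2D hydrodynamic model, indicators like these vary with discharge, which is why the study reports results at four discharges.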
NASA Astrophysics Data System (ADS)
Kourgialas, N. N.; Karatzas, G. P.
2014-03-01
A modeling system for the estimation of flash flood flow velocity and sediment transport is developed in this study. The system comprises three components: (a) a modeling framework based on the hydrological model HSPF, (b) the hydrodynamic module of the hydraulic model MIKE 11 (quasi-2-D), and (c) the advection-dispersion module of MIKE 11 as a sediment transport model. An important parameter in hydraulic modeling is the Manning's coefficient, an indicator of the channel resistance which is directly dependent on riparian vegetation changes. Riparian vegetation's effect on flood propagation parameters such as water depth (inundation), discharge, flow velocity, and sediment transport load is investigated in this study. Based on the obtained results, when the weed-cutting percentage is increased, the flood wave depth decreases while flow discharge, velocity and sediment transport load increase. The proposed modeling system is used to evaluate and illustrate the flood hazard for different riparian vegetation cutting scenarios. For the estimation of flood hazard, a combination of the flood propagation characteristics of water depth, flow velocity and sediment load was used. Next, a well-balanced selection of the most appropriate agricultural cutting practices of riparian vegetation was performed. Ultimately, the model results obtained for different agricultural cutting practice scenarios can be employed to create flood protection measures for flood-prone areas. The proposed methodology was applied to the downstream part of a small Mediterranean river basin in Crete, Greece.
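The sensitivity to riparian vegetation follows directly from Manning's equation. A one-line sketch (SI units, illustrative values) shows how doubling the roughness coefficient n, as denser vegetation would, halves the mean velocity:

```python
def manning_velocity(n, hydraulic_radius_m, slope):
    """Mean flow velocity from Manning's equation (SI units):
    v = (1/n) * R^(2/3) * S^(1/2).
    Higher roughness n (denser riparian vegetation) lowers velocity,
    so the same discharge must flow deeper.
    """
    return (1.0 / n) * hydraulic_radius_m ** (2.0 / 3.0) * slope ** 0.5

v_cut = manning_velocity(0.035, 1.0, 0.01)   # weeds cut: lower n
v_veg = manning_velocity(0.070, 1.0, 0.01)   # dense vegetation: higher n
```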
NASA Astrophysics Data System (ADS)
Kourgialas, N. N.; Karatzas, G. P.
2013-10-01
A modelling system for the estimation of flash flood flow characteristics and sediment transport is developed in this study. The system comprises three components: (a) a modelling framework based on the hydrological model HSPF, (b) the hydrodynamic module of the hydraulic model MIKE 11 (quasi-2-D), and (c) the advection-dispersion module of MIKE 11 as a sediment transport model. An important parameter in hydraulic modelling is the Manning's coefficient, an indicator of the channel resistance which is directly dependent on riparian vegetation changes. Riparian vegetation's effect on flood propagation parameters such as water depth (inundation), discharge, flow velocity, and sediment transport load is investigated in this study. Based on the obtained results, when the weed-cutting percentage is increased, the flood wave depth decreases while flow discharge, velocity and sediment transport load increase. The proposed modelling system is used to evaluate and illustrate the flood hazard for different riparian vegetation cutting scenarios. For the estimation of flood hazard, a combination of the flood propagation characteristics of water depth, flow velocity and sediment load was used. Next, an optimal selection of the most appropriate agricultural cutting practices of riparian vegetation was performed. Ultimately, the model results obtained for different agricultural cutting practice scenarios can be employed to create flood protection measures for flood-prone areas. The proposed methodology was applied to the downstream part of a small Mediterranean river basin in Crete, Greece.
MODELS TO ESTIMATE VOLATILE ORGANIC HAZARDOUS AIR POLLUTANT EMISSIONS FROM MUNICIPAL SEWER SYSTEMS
Emissions from municipal sewers are usually omitted from hazardous air pollutant (HAP) emission inventories. This omission may result from a lack of appreciation for the potential emission impact and/or from inadequate emission estimation procedures. This paper presents an analys...
A Model (Formula) for Deriving A Hazard Index of Rail-Highway Grade Crossings.
ERIC Educational Resources Information Center
Coburn, James Minton
The purpose of this research was to compile data for use as related information in the education of drivers, and to derive a formula for computing a hazard index for rail-highway intersections. Data for the study were compiled from: (1) all crossings on which field data were collected, (2) reports of 642 accidents, and (3) data collected from…
Bejarano, Adriana C; Barron, Mace G
2016-01-01
Interspecies correlation estimation (ICE) models were developed for 30 nonpolar aromatic compounds to allow comparison of prediction accuracy between 2 data compilation approaches. Type 1 models used data combined across studies, and type 2 models used data combined only within studies. Target lipid (TLM) ICE models were also developed using target lipid concentrations of the type 2 model dataset (type 2-TLM). Analyses were performed to assess model prediction uncertainty introduced by each approach. Most statistically significant models (90%; 266 models total) had mean square errors < 0.27 and adjusted coefficients of determination (adj. R²) > 0.59, with the lowest amount of variation in mean square errors noted for type 2-TLM followed by type 2 models. Cross-validation success (>0.62) across most models (86% of all models) confirmed the agreement between ICE-predicted and observed values. Despite differences in model predictive ability, most predicted values across all 3 ICE model types were within a 2-fold difference of the observed values. As a result, no statistically significant differences (p > 0.05) were found between most ICE-based and empirical species sensitivity distributions (SSDs). In most cases, hazard concentrations were within or below the 95% confidence intervals of the direct-empirical SSD-based values, regardless of model choice. Interspecies correlation estimation-based 5th percentile (HC5) values showed a 200- to 900-fold increase as log Kow increased from 2 to 5.3. Results indicate that ICE models for aromatic compounds provide a statistically based approach for deriving conservative hazard estimates for protecting aquatic life. PMID:26184086
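The HC5 values discussed above are percentiles of a species sensitivity distribution. As a generic textbook-style sketch (not the authors' exact procedure), one can fit a lognormal SSD to a set of species toxicity values and read off the 5th percentile; the toxicity values below are invented:

```python
import math
import statistics

def hc5_lognormal(toxicity_values):
    """5th-percentile hazard concentration (HC5) under a lognormal SSD:
    fit the mean and standard deviation of log10 toxicity across
    species, then evaluate the 5th percentile of that normal fit.
    """
    logs = [math.log10(v) for v in toxicity_values]
    mu = statistics.mean(logs)
    sigma = statistics.stdev(logs)
    z05 = -1.6448536269514722  # standard normal 5th percentile
    return 10 ** (mu + z05 * sigma)

# Hypothetical per-species toxicity data (e.g. LC50s in mg/L):
hc5 = hc5_lognormal([1.0, 10.0, 100.0])
```

The HC5 is deliberately conservative: by construction it protects roughly 95% of the species represented by the fitted distribution.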
NASA Astrophysics Data System (ADS)
Keith, A. M.; Weigel, A. M.; Rivas, J.
2014-12-01
Copahue is a stratovolcano located along the rim of the Caviahue Caldera near the Chile-Argentina border in the Andes Mountain Range. There are several small towns located in proximity to the volcano, the two largest being Baños Copahue and Caviahue. During its eruptive history, it has produced numerous lava flows, pyroclastic flows, ash deposits, and lahars. This isolated region has steep topography and little vegetation, rendering it poorly monitored. The need to model volcanic hazard risk has been reinforced by recent volcanic activity that intermittently released several ash plumes from December 2012 through May 2013. Exposure to volcanic ash is currently the main threat for the surrounding populations as the volcano becomes more active. The goal of this project was to study Copahue and determine areas that have the highest potential of being affected in the event of an eruption. Remote sensing techniques were used to examine and identify volcanic activity and areas vulnerable to experiencing volcanic hazards including volcanic ash, SO2 gas, lava flow, pyroclastic density currents and lahars. Landsat 7 Enhanced Thematic Mapper Plus (ETM+), Landsat 8 Operational Land Imager (OLI), EO-1 Advanced Land Imager (ALI), Terra Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Shuttle Radar Topography Mission (SRTM), ISS ISERV Pathfinder, and Aura Ozone Monitoring Instrument (OMI) products were used to analyze volcanic hazards. These datasets were used to create a historic lava flow map of the Copahue volcano by identifying historic lava flows, tephra, and lahars both visually and spectrally. Additionally, a volcanic risk and hazard map for the surrounding area was created by modeling the possible extent of ash fallout, lahars, lava flow, and pyroclastic density currents (PDC) for future eruptions. These model results were then used to identify areas that should be prioritized for disaster relief and evacuation orders.
Drift and proportional tracking chambers
NASA Astrophysics Data System (ADS)
Jaros, J. A.
1980-11-01
The many techniques exploited in constructing tracking chambers, particle detectors which measure the trajectories and momenta of charged particles, are discussed. In high energy interactions, the final states are dominated by closely collimated jets of high multiplicity, requiring good track-pair resolution in the tracking chamber. High energy particles deflect very little in limited magnetic field volumes, necessitating good spatial resolution for accurate momentum measurements. The colliding beam technique requires a device easily adapted to full solid angle coverage, and the high event rates expected in some of these machines put a premium on good time resolution. Finally, the production and subsequent decays of the tau, charmed and beautiful mesons provide multiple vertex topologies. To reconstruct these vertices reliably requires improvements in spatial resolution and track pair resolution. The proportional counter and its descendant, the drift chamber, are considered as tracking chambers. The physics of this device are discussed in order to understand its performance limitations and promises.
Nath, D C; Singh, K K; Land, K C; Talukdar, P K
1993-10-01
The length of the first birth interval is one of the strongest and most persistent factors affecting fertility in noncontracepting populations, with longer intervals usually associated with lower fertility. Compared to Western society, the average length of the first birth interval is much longer in traditional Indian society. Yet Indian fertility rates are higher because of either ineffective family planning procedures or deliberate nonuse of birth control and because of the high proportion of the population that is married. Here, we examine the effects of various sociodemographic covariates (with an emphasis on the role of age at marriage) on the length of the first birth interval for two states of India: Assam and Uttar Pradesh. Life table and multivariate hazards modeling techniques are applied to the data. Covariates such as age at marriage, present age of mother, female's occupation, family income, and place of residence have strong effects on the variation of the length of the first birth interval. For each subgroup of females (classified according to different levels of the covariates), the median length of the first birth interval for the Assam (Bengali-speaking) sample is shorter than that of the Uttar Pradesh (Hindi-speaking) sample. PMID:8262506
Models of magma-aquifer interactions and their implications for hazard assessment
NASA Astrophysics Data System (ADS)
Strehlow, Karen; Gottsmann, Jo; Tumi Gudmundsson, Magnús
2014-05-01
Interactions of magmatic and hydrological systems are manifold, complex and poorly understood. On the one side they bear a significant hazard potential in the form of phreatic explosions or by causing "dry" effusive eruptions to turn into explosive phreatomagmatic events. On the other side, they can equally serve to reduce volcanic risk, as resulting geophysical signals can help to forecast eruptions. It is therefore necessary to put efforts towards answering some outstanding questions regarding magma - aquifer interactions. Our research addresses these problems from two sides. Firstly, aquifers respond to magmatic activity and they can also become agents of unrest themselves. Therefore, monitoring the hydrology can provide a valuable window into subsurface processes in volcanic areas. Changes in temperature and strain conditions, seismic excitation or the injection of magmatic fluids into hydrothermal systems are just a few of the proposed processes induced by magmatic activity that affect the local hydrology. Interpretations of unrest signals as groundwater responses are described for many volcanoes and include changes in water table levels, changes in temperature or composition of hydrothermal waters and pore pressure-induced ground deformation. Volcano observatories can track these hydrological effects for example with potential field investigations or the monitoring of wells. To fully utilise these indicators as monitoring and forecasting tools, however, it is necessary to improve our understanding of the ongoing mechanisms. Our hydrogeophysical study uses finite element analysis to quantitatively test proposed mechanisms of aquifer excitation and the resultant geophysical signals. Secondly, volcanic activity is influenced by the presence of groundwater, including phreatomagmatic and phreatic eruptions. We focus here on phreatic explosions at hydrothermal systems. At least two of these impulsive events occurred in 2013: In August at the Icelandic volcano
Luria, Paolo; Aspinall, Peter A
2003-08-01
The aim of this paper is to describe a new approach to major industrial hazard assessment, which has been recently studied by the authors in conjunction with the Italian Environmental Protection Agency ('ARPAV'). The real opportunity for developing a different approach arose from the need of the Italian EPA to provide the Venice Port Authority with an appropriate estimation of major industrial hazards in Porto Marghera, an industrial estate near Venice (Italy). However, the standard model, the quantitative risk analysis (QRA), only provided a list of individual quantitative risk values, related to single locations. The experimental model is based on a multi-criteria approach--the Analytic Hierarchy Process--which introduces the use of expert opinions, complementary skills and expertise from different disciplines in conjunction with quantitative traditional analysis. This permitted the generation of quantitative data on risk assessment from a series of qualitative assessments, on the present situation and on three other future scenarios, and use of this information as indirect quantitative measures, which could be aggregated for obtaining the global risk rate. This approach is in line with the main concepts proposed by the last European directive on Major Hazard Accidents, which recommends increasing the participation of operators, taking the other players into account and, moreover, paying more attention to the concepts of 'urban control', 'subjective risk' (risk perception) and intangible factors (factors not directly quantifiable)
Johnson, Branden B; Hallman, William K; Cuite, Cara L
2015-03-01
Perceptions of institutions that manage hazards are important because they can affect how the public responds to hazard events. Antecedents of trust judgments have received far more attention than antecedents of attributions of responsibility for hazard events. We build upon a model of retrospective attribution of responsibility to individuals to examine these relationships for five classes of institutions that bear responsibility for food safety: producers (e.g., farmers), processors (e.g., packaging firms), watchdogs (e.g., government agencies), sellers (e.g., supermarkets), and preparers (e.g., restaurants). A nationally representative sample of 1,200 American adults completed an Internet-based survey in which a hypothetical scenario involving contamination of diverse foods with Salmonella served as the stimulus event. Perceived competence and good intentions of the institution moderately decreased attributions of responsibility. A stronger factor was whether an institution was deemed (potentially) aware of the contamination and free to act to prevent or mitigate it; responsibility was rated higher the more aware and free the institution was judged to be. This initial model of attributions of responsibility to impersonal institutions (as opposed to individual responsibility) merits further development. PMID:25516461
NASA Astrophysics Data System (ADS)
Fitzgerald, R. H.; Tsunematsu, K.; Kennedy, B. M.; Breard, E. C. P.; Lube, G.; Wilson, T. M.; Jolly, A. D.; Pawson, J.; Rosenberg, M. D.; Cronin, S. J.
2014-10-01
On 6 August 2012, Upper Te Maari Crater, Tongariro volcano, New Zealand, erupted for the first time in over one hundred years. Multiple vents were activated during the hydrothermal eruption, ejecting blocks up to 2.3 km and impacting ~2.6 km of the Tongariro Alpine Crossing (TAC) hiking track. Ballistic impact craters were mapped to calibrate a 3D ballistic trajectory model for the eruption, which was then used to inform assessment of future ballistic hazard. Orthophoto mapping revealed 3587 impact craters with a mean diameter of 2.4 m. However, field mapping of accessible regions indicated an average of at least four times more observable impact craters and a smaller mean crater diameter of 1.2 m. By combining the orthophoto and ground-truthed impact frequency and size distribution data, we estimate that approximately 13,200 ballistic projectiles were generated during the eruption. The 3D ballistic trajectory model and a series of inverse models were used to constrain the eruption directions, angles and velocities. When combined with eruption observations and geophysical observations, the model indicates that the blocks were ejected in five variously directed eruption pulses lasting 19 s in total. The model successfully reproduced the mapped impact distribution using a mean initial particle velocity of 200 m/s with an accompanying average gas flow velocity of 150 m/s over a 400 m radius. We apply the calibrated model to assess ballistic hazard from the August eruption along the TAC. By taking the field-mapped spatial density of impacts and assuming that an average ballistic impact will cause serious injury or death (casualty) over an 8 m2 area, we estimate that the probability of casualty ranges from 1% to 16% along the affected track (assuming an eruption during the time of exposure). Future ballistic hazard and probabilities of casualty along the TAC are also assessed through application of the calibrated model. We model an eruption one magnitude larger and illustrate
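For intuition on the ejection velocities involved, a two-dimensional point-mass trajectory with optional quadratic drag can be integrated in a few lines. This is only a sketch, not the authors' calibrated 3D model: the drag parameter lumps together drag coefficient, air density, block cross-section and mass, and its value here is illustrative.

```python
import math

def ballistic_range(v0, angle_deg, drag=0.0, dt=0.01):
    """Horizontal range of a point projectile launched from flat
    ground, integrated with forward Euler steps; `drag` is a lumped
    quadratic-drag parameter (per metre), 0.0 for vacuum."""
    g = 9.81
    theta = math.radians(angle_deg)
    x, z = 0.0, 0.0
    vx, vz = v0 * math.cos(theta), v0 * math.sin(theta)
    while True:
        v = math.hypot(vx, vz)
        ax = -drag * v * vx
        az = -g - drag * v * vz
        x += vx * dt
        z += vz * dt
        vx += ax * dt
        vz += az * dt
        if z <= 0.0:          # projectile has returned to the ground
            return x

# Vacuum range at 200 m/s, 45 degrees is ~v0**2/g ≈ 4.1 km;
# quadratic drag shortens it considerably.
vac = ballistic_range(200.0, 45.0)
dragged = ballistic_range(200.0, 45.0, drag=1e-3)
print(round(vac), round(dragged))
```

The vacuum figure is of the same order as the 2.3 km maximum block distance quoted above; real blocks launched at lower angles and with drag travel less far.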
Pelletier, J.D.; Mayer, L.; Pearthree, P.A.; House, P.K.; Demsey, K.A.; Klawon, J.K.; Vincent, K.R.
2005-01-01
Millions of people in the western United States live near the dynamic, distributary channel networks of alluvial fans, where flood behavior is complex and poorly constrained. Here we test a new comprehensive approach to alluvial-fan flood hazard assessment that uses four complementary methods: two-dimensional raster-based hydraulic modeling, satellite-image change detection, field-based mapping of recent flood inundation, and surficial geologic mapping. Each of these methods provides spatial detail lacking in the standard method, and each provides critical information for a comprehensive assessment. Our numerical model simultaneously solves the continuity equation and Manning's equation (Chow, 1959) using an implicit numerical method. It provides a robust numerical tool for predicting flood flows using the large, high-resolution Digital Elevation Models (DEMs) necessary to resolve the numerous small channels on a typical alluvial fan. Inundation extents and flow depths of historic floods can be reconstructed with the numerical model and validated against field- and satellite-based flood maps. A probabilistic flood hazard map can also be constructed by modeling multiple flood events with a range of specified discharges. This map can be used in conjunction with a surficial geologic map to further refine floodplain delineation on fans. To test the accuracy of the numerical model, we compared model predictions of flood inundation and flow depths against field- and satellite-based flood maps for two recent extreme events on the southern Tortolita and Harquahala piedmonts in Arizona. Model predictions match the field- and satellite-based maps closely. Probabilistic flood hazard maps based on the 10 yr, 100 yr, and maximum floods were also constructed for the study areas using stream gage records and paleoflood deposits. The resulting maps predict spatially complex flood hazards that strongly reflect small-scale topography and are consistent with surficial geology. In
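Manning's equation, one half of the solver described above, has a simple closed form in SI units. The roughness, hydraulic radius and slope values below are illustrative, not from the study.

```python
def manning_velocity(n, hydraulic_radius, slope):
    """Mean flow velocity (m/s) from Manning's equation in SI units:
    V = (1/n) * R**(2/3) * S**(1/2)."""
    return (1.0 / n) * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

# Illustrative shallow distributary channel: Manning n = 0.035,
# hydraulic radius R = 0.5 m, energy slope S = 0.01.
v = manning_velocity(0.035, 0.5, 0.01)
q_per_width = v * 0.5   # unit discharge (m2/s) for a 0.5 m deep flow
print(round(v, 2), round(q_per_width, 2))
```

Coupling this relation with the continuity equation on a DEM grid is what allows the model to route a specified discharge through the fan's many small channels.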
Diver, Richard B., Jr.; Ghanbari, Cheryl M.; Ho, Clifford Kuofei
2010-04-01
With growing numbers of concentrating solar power systems being designed and developed, glint and glare from concentrating solar collectors and receivers are receiving increased attention as a potential hazard or distraction for motorists, pilots, and pedestrians. This paper provides analytical methods to evaluate the irradiance originating from specularly and diffusely reflecting sources as a function of distance and characteristics of the source. Sample problems are provided for both specular and diffuse sources, and the models are validated via testing. In addition, a summary of safety metrics is compiled from the literature to evaluate the potential hazards of calculated irradiances from glint and glare. Previous safety metrics have focused on preventing permanent eye damage (e.g., retinal burn). New metrics used in this paper account for temporary flash blindness, which can occur at irradiance values several orders of magnitude lower than those required for irreversible eye damage.
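For a diffusely (Lambertian) reflecting source, the irradiance-versus-distance relation analyzed in work of this kind takes a standard textbook form; the sketch below uses that form, and the panel area, reflectivity and distances are hypothetical, not the paper's sample problems.

```python
import math

def diffuse_irradiance(reflectivity, incident, area, distance,
                       view_angle_deg=0.0):
    """Irradiance (W/m2) at an observer from a Lambertian source:
    radiance L = reflectivity * incident / pi, then
    E = L * area * cos(view_angle) / distance**2."""
    radiance = reflectivity * incident / math.pi
    return (radiance * area
            * math.cos(math.radians(view_angle_deg)) / distance ** 2)

# Hypothetical receiver panel: 10 m2, 80% diffuse reflectivity, under
# 1000 W/m2 direct normal irradiance, viewed head-on from 200 m.
e_200m = diffuse_irradiance(0.8, 1000.0, 10.0, 200.0)
print(round(e_200m, 4))
```

The inverse-square falloff is why a diffuse receiver that is dazzling up close can be harmless to a distant motorist; specular sources must be treated separately because the reflected beam does not spread the same way.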
NASA Astrophysics Data System (ADS)
Stancanelli, L. M.; Peres, D. J.; Cavallaro, L.; Cancelliere, A.; Foti, E.
2014-12-01
During recent decades, an increase in catastrophic debris-flow events has been recorded across Italy, mainly due to the growth of settlements and human activities in mountain areas. Considering the large extent of debris-flow-prone areas, non-structural protection strategies should preferably be implemented because of the economic constraints associated with structural mitigation measures. In such a framework, hazard assessment methodologies play a key role as useful tools for the development of emergency management policies. The aim of the present study is to apply an integrated debris-flow hazard assessment methodology in which probabilistic rainfall analysis and physically based landslide triggering and propagation models are combined. In particular, the probabilistic rainfall analysis provides forcing scenarios of different return periods, which are then used as input to a model based on the combination of the USGS TRIGRS and FLO-2D codes. The TRIGRS model (Baum et al., 2008; 2010), developed for analyzing shallow landslide triggering, is based on an analytical solution of linearized forms of the Richards' infiltration equation and an infinite-slope stability calculation to estimate the timing and locations of slope failures, while FLO-2D (O'Brien, 1986) is a two-dimensional finite difference model that simulates debris flow propagation following a mono-phase approach, based on the empirical quadratic rheological relation developed by O'Brien and Julien (1985). Various aspects of the combination of the models are analyzed, with particular focus on the possible variations of triggered amounts compatible with a given return period. The methodology is applied to the case study area of Messina Province in Italy, which has recently been struck by severe events, such as that of 1 October 2009, which hit the village of Giampilieri causing 37 fatalities. Results are analyzed to assess the potential hazard that may affect the densely
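The infinite-slope stability criterion underlying TRIGRS reduces to a factor-of-safety ratio; a common effective-stress form with pore-water pressure u is sketched below. The soil parameters are illustrative, not site data from the Messina study.

```python
import math

def infinite_slope_fs(c, phi_deg, gamma, depth, beta_deg, pore_pressure):
    """Factor of safety for an infinite slope:
    FS = [c + (gamma*z*cos^2(beta) - u) * tan(phi)]
         / [gamma*z*sin(beta)*cos(beta)],
    with cohesion c (kPa), friction angle phi, unit weight gamma
    (kN/m3), slip depth z (m), slope angle beta, pore pressure u (kPa).
    FS < 1 indicates failure."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    normal = gamma * depth * math.cos(beta) ** 2 - pore_pressure
    resisting = c + normal * math.tan(phi)
    driving = gamma * depth * math.sin(beta) * math.cos(beta)
    return resisting / driving

# Illustrative soil: c = 5 kPa, phi = 30 deg, gamma = 19 kN/m3,
# 2 m deep slip surface on a 35 deg slope, dry vs. after infiltration.
dry = infinite_slope_fs(5.0, 30.0, 19.0, 2.0, 35.0, 0.0)
wet = infinite_slope_fs(5.0, 30.0, 19.0, 2.0, 35.0, 15.0)
print(round(dry, 2), round(wet, 2))
```

The drop of FS below 1 as pore pressure builds is exactly the rainfall-driven triggering mechanism TRIGRS tracks in time and space.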
NASA Astrophysics Data System (ADS)
Mejía-Navarro, Mario; Wohl, Ellen E.; Oaks, Sherry D.
1994-08-01
Glenwood Springs, Colorado, lies at the junction of the Roaring Fork and Colorado Rivers, surrounded by the steep peaks of the Colorado Rocky Mountains. Large parts of the region have had intensive sheet erosion, debris flows, and hyperconcentrated floods triggered by landslides and slumps. The latter come from unstable slopes in the many tributary channels on the mountainsides, causing concentration of debris in channels and a large accumulation of sediment in colluvial wedges and debris fans that line the river valleys. Many of the landslide and debris-flow deposits exist in a state resembling suspended animation, ready to be destabilized by intense precipitation and/or seismic activity. During this century, urban development in the Roaring Fork River valley has increased rapidly, and the city of Glenwood Springs continues to expand over unstable debris fans without any construction of hazard mitigation structures. Since 1900, Glenwood Springs has had at least 21 damaging debris flows and floods; on July 24, 1977, a heavy thunderstorm spread a debris flow over more than 80 ha of the city. This paper presents a method that uses Geographic Information Systems (GIS) to assess geological hazards, vulnerability, and risk in the Glenwood Springs area. The hazards evaluated include subsidence, rockfall, debris flows, and floods; in this paper we focus on debris flows and subsidence. Information on topography, hydrology, precipitation, geomorphic processes, bedrock and surficial geology, structural geology, soils, vegetation, and land use was processed for hazard assessment using a series of algorithms. ARC/INFO and GRASS GIS software were used to produce maps and tables in a format accessible to urban planners. After geological hazards were defined for the study area, we estimated the vulnerability (Ve) of various elements for an event of intensity i. Risk is assessed as a function of hazard and vulnerability. We categorized the study area in 14 classes for planning
Antoun, T; Harris, D; Lay, T; Myers, S C; Pasyanos, M E; Richards, P; Rodgers, A J; Walter, W R; Zucca, J J
2008-02-11
The last ten years have brought rapid growth in the development and use of three-dimensional (3D) seismic models of earth structure at crustal, regional and global scales. In order to explore the potential for 3D seismic models to contribute to important societal applications, Lawrence Livermore National Laboratory (LLNL) hosted a 'Workshop on Multi-Resolution 3D Earth Models to Predict Key Observables in Seismic Monitoring and Related Fields' on June 6 and 7, 2007 in Berkeley, California. The workshop brought together academic, government and industry leaders in the research programs developing 3D seismic models and methods for the nuclear explosion monitoring and seismic ground motion hazard communities. The workshop was designed to assess the current state of work in 3D seismology and to discuss a path forward for determining if and how 3D earth models and techniques can be used to achieve measurable increases in our capabilities for monitoring underground nuclear explosions and characterizing seismic ground motion hazards. This paper highlights some of the presentations, issues, and discussions at the workshop and proposes a path by which to begin quantifying the potential contribution of progressively refined 3D seismic models in critical applied arenas.
Parametric Hazard Function Estimation.
Energy Science and Technology Software Center (ESTSC)
1999-09-13
Version 00 Phaze performs statistical inference calculations on a hazard function (also called a failure rate or intensity function) based on reported failure times of components that are repaired and restored to service. Three parametric models are allowed: the exponential, linear, and Weibull hazard models. The inference includes estimation (maximum likelihood estimators and confidence regions) of the parameters and of the hazard function itself, testing of hypotheses such as increasing failure rate, and checking of the model assumptions.
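As an illustration of the kind of parametric hazard estimation Phaze performs, the Weibull (power-law) intensity for a repairable system observed over (0, t_end] has closed-form maximum likelihood estimators. This is a standard textbook sketch, not Phaze itself, and the failure times below are hypothetical.

```python
import math

def power_law_process_mle(failure_times, t_end):
    """Closed-form MLEs for a power-law (Weibull) failure intensity
    u(t) = lam * beta * t**(beta - 1) of a repairable system whose
    failure times were recorded up to time t_end:
        beta_hat = n / sum(ln(t_end / t_i)),  lam_hat = n / t_end**beta_hat.
    beta < 1 means a decreasing failure rate, beta > 1 increasing."""
    n = len(failure_times)
    beta = n / sum(math.log(t_end / t) for t in failure_times)
    lam = n / t_end ** beta
    return lam, beta

# Hypothetical failure times (hours) with lengthening gaps, i.e. the
# component improves after each repair.
times = [50.0, 180.0, 370.0, 600.0, 900.0]
lam, beta = power_law_process_mle(times, t_end=1000.0)
intensity_at_end = lam * beta * 1000.0 ** (beta - 1)
print(round(beta, 3), round(intensity_at_end, 5))
```

Testing "increasing failure rate" then amounts to testing beta > 1, one of the hypothesis tests the abstract mentions.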
The Proportion of Stars with Planets
NASA Astrophysics Data System (ADS)
Woolfson, M. M.
2016-04-01
Estimates of the proportion of Sun-like stars with accompanying planets vary widely; the best present estimate is about 0.34. The capture theory of planet formation involves an interaction between a condensed star and either a diffuse protostar or a high-density region in a dense embedded cluster. The protostar, or dense region, is tidally stretched into a filament that is gravitationally unstable and breaks up into a string of protoplanetary blobs, which subsequently collapse to form planets, some of which are captured by the star. A computational model, in which the passage of collapsing protostars with initial radii of 1000, 1500 and 2000 au through a dense embedded cluster is followed, is used to estimate the proportion of protostars that would be disrupted to give planets, in environments with star number-densities in the range 5000-25,000 pc-3. It is concluded from the results that the capture theory might explain the presently estimated proportion of stars with exoplanet companions, although other possible ways of producing exoplanets are not excluded.
Crowder, Kyle; Downey, Liam
2009-01-01
This study combines data from the Panel Study of Income Dynamics with neighborhood-level industrial hazard data from the Environmental Protection Agency to examine the extent and sources of environmental inequality at the individual level. Results indicate that profound racial and ethnic differences in proximity to industrial pollution persist when differences in individual education, household income, and other micro-level characteristics are controlled. Examination of underlying migration patterns further reveals that black and Latino householders move into neighborhoods with significantly higher hazard levels than do comparable whites, and that racial differences in proximity to neighborhood pollution are maintained more by these disparate mobility destinations than by differential effects of pollution on the decision to move. PMID:20503918
NASA Astrophysics Data System (ADS)
Mohammed, F.; Li, S.; Jalali Farahani, R.; Williams, C. R.; Astill, S.; Wilson, P. S.; B, S.; Lee, R.
2014-12-01
The past decade has witnessed two mega-tsunami events, the 2004 Indian Ocean tsunami and the 2011 Japan tsunami, and multiple major tsunami events: 2006 Java and Kuril Islands, 2007 Solomon Islands, 2009 Samoa and 2010 Chile, to name a few. These events generated both local and far-field tsunami inundations, with runup ranging from a few meters to around 40 m in the coastal impact regions. With a majority of the coastal population at risk, there is a need for a sophisticated outlook on catastrophe risk estimation and quick mitigation response, and for tools and information to aid advanced tsunami hazard prediction. There is an increased need for insurers, reinsurers and Federal hazard management agencies to quantify coastal inundations and the vulnerability of coastal habitat to tsunami inundation. A novel tool is developed to model local and far-field tsunami generation, propagation and inundation to estimate tsunami hazards. The tool combines the NOAA MOST propagation database with an efficient and fast GPU (Graphical Processing Unit)-based non-linear shallow water wave model solver. The tsunamigenic seismic sources are mapped onto the NOAA unit source distribution along subduction zones in the ocean basin. Slip models are defined for tsunamigenic seismic sources through a slip distribution on the unit sources while maintaining limits of fault areas. A GPU-based finite volume solver is used to simulate non-linear shallow water wave propagation, inundation and runup. Deformation on the unit sources provides initial conditions for modeling local impacts, while the wave history from the propagation database provides boundary conditions for far-field impacts. The modeling suite shows good agreement for basin-wide tsunami propagation, validating both local and far-field tsunami inundations.
Challenging the principle of proportionality.
Andersson, Anna-Karin Margareta
2016-04-01
The first objective of this article is to examine one aspect of the principle of proportionality (PP) as advanced by Alan Gewirth in his 1978 book Reason and Morality. Gewirth claims that being capable of exercising agency to some minimal degree is a property that justifies having at least prima facie rights not to be killed. However, according to the PP, before the being possesses the capacity to exercise agency to that minimal degree, the extent of her rights depends on how closely she approaches possession of agential capacities. One interpretation of the PP holds that variations in degree of possession of the physical constitution necessary to exercise agency are morally relevant. The other interpretation holds that only variations in degree of actual mental capacity are morally relevant. The first of these interpretations is vastly more problematic than the other. The second objective is to argue that, according to the most plausible interpretation of the PP, the fetus' level of development before at least the 20th week of pregnancy does not affect the fetus' moral rights status. I then suggest that my argument is not restricted to such fetuses, although extending my argument to more developed fetuses requires caution. PMID:26839114
Hazard function theory for nonstationary natural hazards
NASA Astrophysics Data System (ADS)
Read, Laura K.; Vogel, Richard M.
2016-04-01
Impact from natural hazards is a shared global problem that causes tremendous loss of life and property, economic cost, and damage to the environment. Increasingly, many natural processes show evidence of nonstationary behavior including wind speeds, landslides, wildfires, precipitation, streamflow, sea levels, and earthquakes. Traditional probabilistic analysis of natural hazards based on peaks over threshold (POT) generally assumes stationarity in the magnitudes and arrivals of events, i.e., that the probability of exceedance of some critical event is constant through time. Given increasing evidence of trends in natural hazards, new methods are needed to characterize their probabilistic behavior. The well-developed field of hazard function analysis (HFA) is ideally suited to this problem because its primary goal is to describe changes in the exceedance probability of an event over time. HFA is widely used in medicine, manufacturing, actuarial statistics, reliability engineering, economics, and elsewhere. HFA provides a rich theory to relate the natural hazard event series (X) with its failure time series (T), enabling computation of corresponding average return periods, risk, and reliabilities associated with nonstationary event series. This work investigates the suitability of HFA to characterize nonstationary natural hazards whose POT magnitudes are assumed to follow the widely applied generalized Pareto model. We derive the hazard function for this case and demonstrate how metrics such as reliability and average return period are impacted by nonstationarity and discuss the implications for planning and design. Our theoretical analysis linking hazard random variable X with corresponding failure time series T should have application to a wide class of natural hazards with opportunities for future extensions.
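For POT magnitudes following the generalized Pareto model, the magnitude hazard rate has a simple closed form obtained from h = f/S: with survival S(x) = (1 + shape*x/scale)**(-1/shape), it simplifies to h(x) = 1/(scale + shape*x). The sketch below illustrates how the shape parameter controls whether hazard decays or grows with magnitude; it is a textbook result, not the paper's full failure-time derivation for T.

```python
def gpd_hazard(x, scale, shape):
    """Hazard function of the generalized Pareto distribution:
    h(x) = f(x) / (1 - F(x)) = 1 / (scale + shape * x).
    Requires x in the distribution's support (x < -scale/shape when
    shape < 0)."""
    return 1.0 / (scale + shape * x)

# shape > 0 (heavy tail): hazard decays with magnitude;
# shape < 0 (bounded tail): hazard grows;
# shape -> 0 recovers the exponential's constant hazard 1/scale.
heavy = [round(gpd_hazard(x, 2.0, 0.3), 3) for x in (0.0, 5.0, 10.0)]
bounded = [round(gpd_hazard(x, 2.0, -0.1), 3) for x in (0.0, 5.0, 10.0)]
print(heavy, bounded)
```

The same f/S construction applied to the failure-time variable T is what yields the nonstationary return periods and reliabilities discussed above.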
NASA Astrophysics Data System (ADS)
Gusyev, M. A.; Kwak, Y.; Khairul, M. I.; Arifuzzaman, M. B.; Magome, J.; Sawano, H.; Takeuchi, K.
2015-06-01
This study introduces the flood hazard assessment part of a global flood risk assessment (Part 2) conducted with a distributed hydrological Block-wise TOP (BTOP) model and a GIS-based Flood Inundation Depth (FID) model. The 20 km grid BTOP model was developed with globally available data and applied to the Ganges, Brahmaputra and Meghna (GBM) river basin. The BTOP model was calibrated with observed river discharges in Bangladesh and applied for climate change impact assessment to produce flood discharges at each BTOP cell under present and future climates. For Bangladesh, cumulative flood inundation maps were produced using the FID model with the BTOP-simulated flood discharges, allowing us to consider levee effectiveness in reducing flood inundation. Under climate change, the flood hazard increased in both flood discharge and inundation area for the 50- and 100-year floods. These preliminary results suggest that the proposed methodology can partly overcome the limitation of data unavailability and produces flood maps that can be used for the nationwide flood risk assessment presented in Part 2 of this study.
NASA Astrophysics Data System (ADS)
Spain, Christopher J.; Anderson, Derek T.; Keller, James M.; Popescu, Mihail; Stone, Kevin E.
2011-06-01
Burying objects below the ground can alter their thermal properties, and there is often soil disturbance associated with recently buried objects. An intensity image generated by an infrared camera in the medium and long wavelengths therefore often varies locally in the presence of buried explosive hazards. Our approach to automatically detecting these anomalies is to estimate a background model of the image sequence; pixel values that do not conform to the background model may represent local changes in thermal or soil signature caused by buried objects. Herein, we present a Gaussian mixture model-based technique to estimate the statistical model of background pixel values. The background model is used to detect anomalous pixel values on the road while the vehicle is moving. Foreground pixel confidence values are projected into the UTM coordinate system and a UTM confidence map is built. Different operating levels are explored, and the connected-component algorithm is then used to extract islands that are subjected to size, shape and orientation filters. We currently use this approach as a feature in a larger multi-algorithm fusion system; in this article, however, we also present results for the algorithm as a stand-alone detector to further explore its value in detecting buried explosive hazards.
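A minimal per-pixel background model in this spirit can be sketched with a single Gaussian per pixel, a deliberate simplification of the Gaussian mixture actually used; the frames, learning rate and threshold below are hypothetical.

```python
def fit_background(frames, alpha=0.1):
    """Per-pixel running mean and variance of intensity (a
    single-Gaussian simplification of a mixture background model).
    frames: list of 2D lists of pixel intensities."""
    h, w = len(frames[0]), len(frames[0][0])
    mean = [[float(frames[0][i][j]) for j in range(w)] for i in range(h)]
    var = [[25.0] * w for _ in range(h)]   # arbitrary initial variance
    for frame in frames[1:]:
        for i in range(h):
            for j in range(w):
                d = frame[i][j] - mean[i][j]
                mean[i][j] += alpha * d
                var[i][j] = (1 - alpha) * var[i][j] + alpha * d * d
    return mean, var

def anomaly_mask(frame, mean, var, k=3.0):
    """Flag pixels more than k standard deviations from background."""
    return [[abs(frame[i][j] - mean[i][j]) > k * var[i][j] ** 0.5
             for j in range(len(frame[0]))] for i in range(len(frame))]

# Hypothetical 3x3 thermal frames: a steady background near 100
# counts, then a frame with a hot anomaly at pixel (1, 1).
base = [[100] * 3 for _ in range(3)]
hot = [row[:] for row in base]
hot[1][1] = 160
mean, var = fit_background([base] * 10)
mask = anomaly_mask(hot, mean, var)
print(mask[1][1], mask[0][0])
```

In the full system each pixel keeps several weighted Gaussians, so slow thermal drift is absorbed into the background while persistent local departures are flagged as foreground.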
NASA Astrophysics Data System (ADS)
Roulleau, Louise; Bétard, François; Carlier, Benoît; Lissak, Candide; Fort, Monique
2016-04-01
Landslides are common natural hazards in the Southern French Alps, where they may affect human lives and cause severe damage to infrastructure. As part of the SAMCO research project dedicated to risk evaluation in mountain areas, this study focuses on the Guil river catchment (317 km2), Queyras, to assess landslide hazard, which had been poorly studied until now. In that area, landslides are mainly occasional, low-amplitude phenomena with limited direct impacts compared to other hazards such as floods or snow avalanches. However, when interacting with floods during extreme rainfall events, landslides may have indirect consequences of greater importance because of strong hillslope-channel connectivity along the Guil River and its tributaries (i.e. positive feedbacks). This specific morphodynamic functioning reinforces the need for a better understanding of landslide hazards and their spatial distribution at the catchment scale to protect the local population from disasters of multi-hazard origin. The aim of this study is to produce a landslide susceptibility map at 1:50 000 scale as a first step towards a global estimation of landslide hazard and risk. The three main methodologies for assessing landslide susceptibility are qualitative (i.e. expert opinion), deterministic (i.e. physics-based models) and statistical (i.e. probabilistic models). Owing to the rapid development of geographical information systems (GIS) during the last two decades, statistical methods are now widely used because they offer greater objectivity and reproducibility at large scales. Among them, multivariate analyses are considered the most robust techniques, especially the logistic regression method commonly used in landslide susceptibility mapping. However, this method, like others, depends strongly on the accuracy of the input data to avoid significant errors in the final results. In particular, a complete and accurate landslide inventory is required before the modelling
Yang, Xiaobao; Huan, Mei; Abdel-Aty, Mohamed; Peng, Yichuan; Gao, Ziyou
2015-01-01
This paper presents a hazard-based duration approach to investigate riders' waiting times, violation hazards, associated risk factors, and the differences between cyclists and electric bike riders at signalized intersections. A total of 2322 two-wheeled riders approaching intersections during red-light periods were observed in Beijing, China. The data were classified into censored and uncensored observations to distinguish safe crossing from red-light running behavior. The results indicated that the red-light crossing behavior of most riders depended on waiting time: riders were inclined to stop waiting and run against the traffic light as waiting duration increased. Over half of the observed riders could not endure waiting 49 s or longer, while 25% could endure 97 s or longer. Rider type, gender, waiting position, conformity tendency and crossing traffic volume were identified as having significant effects on riders' waiting times and violation hazards. Electric bike riders were found to be more sensitive than cyclists to external risk factors such as other riders' crossing behavior and crossing traffic volume. Moreover, unobserved heterogeneity was examined in the proposed models. The findings of this paper explain when and why cyclists and electric bike riders run against the red light at intersections, and are useful for traffic design and management agencies implementing strategies to enhance rider safety. PMID:25463942
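A nonparametric Kaplan-Meier estimate illustrates how censored (safe-crossing) and uncensored (violation) waiting times combine in duration analysis. The data below are toy values, not the Beijing observations, and the paper's own model is a parametric hazard-based one rather than Kaplan-Meier.

```python
from itertools import groupby

def kaplan_meier(durations, violated):
    """Kaplan-Meier estimate of the waiting-time survival curve.
    violated=True marks a red-light running event at that time;
    False marks a rider censored by crossing legally on green.
    Returns the (time, S(time)) step points at event times."""
    data = sorted(zip(durations, violated))
    at_risk, s, curve = len(data), 1.0, []
    for t, grp in groupby(data, key=lambda p: p[0]):
        grp = list(grp)
        events = sum(1 for _, v in grp if v)
        if events:
            s *= 1.0 - events / at_risk   # conditional survival at t
            curve.append((t, s))
        at_risk -= len(grp)               # drop events and censored
    return curve

# Hypothetical waiting times (seconds) for eight riders.
waits = [10, 20, 30, 40, 50, 60, 70, 80]
ran_light = [True, True, False, True, False, True, True, False]
curve = kaplan_meier(waits, ran_light)
print(curve)
```

Fitting a parametric hazard (e.g. Weibull) to such censored data, with covariates like rider type and traffic volume, is the step the paper takes beyond this nonparametric curve.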
NASA Astrophysics Data System (ADS)
Bryant, Edward
2005-02-01
This updated new edition presents a comprehensive, inter-disciplinary analysis of the complete range of natural hazards. Edward Bryant describes and explains how hazards occur, examines prediction methods, considers recent and historical hazard events and explores the social impact of such disasters. Supported by over 180 maps, diagrams and photographs, this standard text is an invaluable guide for students and professionals in the field. First Edition Hb (1991): 0-521-37295-X First Edition Pb (1991): 0-521-37889-3
Inquiry pedagogy to promote emerging proportional reasoning in primary students
NASA Astrophysics Data System (ADS)
Fielding-Wells, Jill; Dole, Shelley; Makar, Katie
2014-03-01
Proportional reasoning as the capacity to compare situations in relative (multiplicative) rather than absolute (additive) terms is an important outcome of primary school mathematics. Research suggests that students tend to see comparative situations in additive rather than multiplicative terms and this thinking can influence their capacity for proportional reasoning in later years. In this paper, excerpts from a classroom case study of a fourth-grade classroom (students aged 9) are presented as they address an inquiry problem that required proportional reasoning. As the inquiry unfolded, students' additive strategies were progressively seen to shift to proportional thinking to enable them to answer the question that guided their inquiry. In wrestling with the challenges they encountered, their emerging proportional reasoning was supported by the inquiry model used to provide a structure, a classroom culture of inquiry and argumentation, and the proportionality embedded in the problem context.
NASA Astrophysics Data System (ADS)
Zahran, Hani M.; Sokolov, Vladimir; Roobol, M. John; Stewart, Ian C. F.; El-Hadidy Youssef, Salah; El-Hadidy, Mahmoud
2016-07-01
A new seismic source model has been developed for the western part of the Arabian Peninsula, which has experienced considerable earthquake activity in the historical past and in recent times. The data used for the model include an up-to-date seismic catalog, results of recent studies of Cenozoic faulting in the area, aeromagnetic anomaly and gravity maps, geological maps, and miscellaneous information on volcanic activity. The model includes 18 zones ranging along the Red Sea and the Arabian Peninsula from the Gulf of Aqaba and the Dead Sea in the north to the Gulf of Aden in the south. The seismic source model developed in this study may be considered as one of the basic branches in a logic tree approach for seismic hazard assessment in Saudi Arabia and adjacent territories.
NASA Astrophysics Data System (ADS)
Blahut, J.; Balek, J.; Juras, R.; Klimes, J.; Klose, Z.; Roubinek, J.; Pavlasek, J.
2014-12-01
Snow-avalanche modeling and hazard level assessment are important issues to be solved in mountain regions worldwide. In Czechia, two mountain ranges (the Krkonoše and Jeseníky Mountains) suffer from regular avalanche activity every year. The Mountain Rescue Service is responsible for issuing avalanche bulletins, but its approaches still lack objective assessments and procedures for hazard level estimation, mainly because an expert avalanche information system is missing. This paper presents preliminary results from a project funded by the Ministry of Interior of the Czech Republic, focused on developing an information system for snow-avalanche hazard level forecasting. The system is composed of three main modules, which together act as a Decision Support System (DSS) for the Mountain Rescue Service. Firstly, a snow-avalanche susceptibility model delimits areas where avalanches can occur, based on statistical analysis of a vast database containing more than 1100 avalanche events from 1961/62 to the present. Secondly, physical modeling of the avalanches is performed on avalanche paths using the RAMMS modeling code. Both regular paths, where avalanches occur every year, and irregular paths are assessed, and their footprints are updated using return period information for each path. Thirdly, snow distribution and stability models (distributed HBV-ETH, Snowtran 3D, Snowpack and Alpine 3D) are used to assess the critical conditions for avalanche release; the models are calibrated using meteorological and snow cover data together with snowpit observations. These three parts are coupled in a WebGIS platform that serves as the principal component of the DSS for snow-avalanche hazard level assessment.
On Modeling the Radiation Hazards Along the Trajectories of Space Vehicles for Various Purposes
NASA Astrophysics Data System (ADS)
Grichshenko, Valentina
2016-07-01
The paper discusses results of simulating the radiation hazard along the trajectories of low-orbit spacecraft for various purposes, as well as geostationary and navigation satellites. Criteria for the reliability of memory cells in space are developed, accounting for the influence of cosmic rays (CR) and for differences in the geophysical and geomagnetic situation along the space vehicle (SV) orbit. Numerical values of the vertical geomagnetic rigidity and the CR flux are presented, together with an assessment of correlated memory cell failures along low-orbit spacecraft trajectories. The results are used to forecast the radiation situation along an SV orbit and the reliability of memory cells in space, and to optimize the nominal equipment kit and payload of Kazakhstan's space vehicles.
NASA Astrophysics Data System (ADS)
Bambara, G.; Peyras, L.; Felix, H.; Serre, D.
2015-03-01
Experience feedback on a crisis that has hit a city is frequently used as a "recollection" tool. To capitalize on experience feedback from cities that have been affected by a natural hazard, the authors propose a functional model of city crisis scenarios. In this model the city, considered as a complex system, is modelled using a functional analysis method. Based on such modelling, two risk analysis methods (Failure Mode and Effect Analysis and the Event Tree Method) were deployed and adjusted. Lastly, a qualitative reasoning model was used for scenario modelling of the urban crisis. Through functional modelling of city components, the objective is to replicate the behaviour of a city affected by a crisis, highlighting the sequences of failure and non-failure modes that operated during the crisis. The model thus constitutes a means of understanding the functional behaviour of cities in crisis and of capitalizing on the experience feedback of cities affected by crises. The functional modelling was deployed in a case study.
Friedel, M.J.
2011-01-01
Few studies attempt to model the range of possible post-fire hydrologic and geomorphic hazards because of the sparseness of data and the coupled, nonlinear, spatial, and temporal relationships among landscape variables. In this study, a type of unsupervised artificial neural network, called a self-organized map (SOM), is trained using data from 540 burned basins in the western United States. The sparsely populated data set includes variables from independent numerical landscape categories (climate, land surface form, geologic texture, and post-fire condition), independent landscape classes (bedrock geology and state), and dependent initiation processes (runoff, landslide, and runoff and landslide combination) and responses (debris flows, floods, and no events). Pattern analysis of the SOM-based component planes is used to identify and interpret relations among the variables. Application of the Davies-Bouldin criteria following k-means clustering of the SOM neurons identified eight conceptual regional models for focusing future research and empirical model development. A split-sample validation on 60 independent basins (not included in the training) indicates that simultaneous predictions of initiation process and response types are at least 78% accurate. As climate shifts from wet to dry conditions, forecasts across the burned landscape reveal a decreasing trend in the total number of debris flow, flood, and runoff events with considerable variability among individual basins. These findings suggest the SOM may be useful in forecasting real-time post-fire hazards, and long-term post-recovery processes and effects of climate change scenarios.
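The Davies-Bouldin criterion mentioned above scores a clustering by comparing within-cluster scatter to between-centroid separation (lower is better). A minimal sketch, with synthetic two-blob data standing in for the SOM neurons:

```python
import numpy as np

# Davies-Bouldin index for a clustering (lower is better).
# Hypothetical 2-D data stand in for the SOM neuron prototypes in the study.

def davies_bouldin(X, labels):
    clusters = np.unique(labels)
    cents = np.array([X[labels == c].mean(axis=0) for c in clusters])
    # s_i: mean distance of each cluster's members to their centroid
    s = np.array([np.linalg.norm(X[labels == c] - cents[i], axis=1).mean()
                  for i, c in enumerate(clusters)])
    k = len(clusters)
    total = 0.0
    for i in range(k):
        # worst-case similarity of cluster i to any other cluster
        ratios = [(s[i] + s[j]) / np.linalg.norm(cents[i] - cents[j])
                  for j in range(k) if j != i]
        total += max(ratios)
    return total / k

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
good = np.array([0] * 20 + [1] * 20)   # matches the two blobs
bad = np.array([0, 1] * 20)            # alternating mislabels
print(davies_bouldin(X, good) < davies_bouldin(X, bad))
```

Sweeping k and keeping the labeling with the lowest index is the usual way this criterion picks the number of clusters, as in the eight regional models above.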
Plesko, Catherine S; Clement, R Ryan; Weaver, Robert P; Bradley, Paul A; Huebner, Walter F
2009-01-01
The mitigation of impact hazards resulting from Earth-approaching asteroids and comets has received much attention in the popular press. However, many questions remain about the near-term and long-term feasibility and appropriate application of all proposed methods. Recent and ongoing ground- and space-based observations of small solar-system body composition and dynamics have revolutionized our understanding of these bodies (e.g., Ryan (2000), Fujiwara et al. (2006), and Jedicke et al. (2006)). Ongoing increases in computing power and algorithm sophistication make it possible to calculate the response of these inhomogeneous objects to proposed mitigation techniques. Here we present the first phase of a comprehensive hazard mitigation planning effort undertaken by Southwest Research Institute and Los Alamos National Laboratory. We begin by reviewing the parameter space of the object's physical and chemical composition and trajectory. We then use the radiation hydrocode RAGE (Gittings et al. 2008), Monte Carlo N-Particle (MCNP) radiation transport (see Clement et al., this conference), and N-body dynamics codes to explore the effects these variations in object properties have on the coupling of energy into the object from a variety of mitigation techniques, including deflection and disruption by nuclear and conventional munitions, and a kinetic impactor.
NASA Astrophysics Data System (ADS)
Enzenhoefer, R.; Binning, P. J.; Nowak, W.
2015-09-01
Risk is often defined as the product of probability, vulnerability and value. Drinking water supply from groundwater abstraction is often at risk due to multiple hazardous land use activities in the well catchment. Each hazard might or might not introduce contaminants into the subsurface at any point in time, which then affects the quality of the pumped water after transport through the aquifer. In such situations, estimating the overall risk is not trivial, and three key questions emerge: (1) How to aggregate the impacts from different contaminants and spill locations into an overall, cumulative impact on the value at risk? (2) How to properly account for the stochastic nature of spill events when converting the aggregated impact to a risk estimate? (3) How will the overall risk and subsequent decision making depend on stakeholder objectives, where stakeholder objectives refer to the values at risk, risk attitudes and risk metrics, all of which can vary between stakeholders? In this study, we provide a STakeholder-Objective Risk Model (STORM) for assessing the total aggregated risk. Our concept is a quantitative, probabilistic and modular framework for simulation-based risk estimation. It rests on the source-pathway-receptor concept, uses mass-discharge-based aggregation of stochastically occurring spill events, accounts for uncertainties in the involved flow and transport models through Monte Carlo simulation, and can address different stakeholder objectives. We illustrate the application of STORM in a numerical test case inspired by a German drinking water catchment. As one may expect, the results depend strongly on the chosen stakeholder objectives, but they are equally sensitive to the different approaches for risk aggregation across hazards, contaminant types, and over time.
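The stochastic spill events and their aggregation can be illustrated with a toy Monte Carlo loop; the hazard probabilities and impact values below are invented, not taken from the STORM test case:

```python
import random

# Monte Carlo sketch of aggregating stochastic spill events into an
# expected-impact estimate, in the spirit of the source-pathway-receptor
# concept. Hazard probabilities and impacts are invented for illustration.

random.seed(42)

hazards = [
    # (annual spill probability, impact on pumped concentration in mg/L)
    (0.10, 0.8),   # e.g. a fuel station
    (0.02, 3.0),   # e.g. an industrial site
    (0.30, 0.1),   # e.g. road runoff
]

def simulate_year():
    """Impacts of co-occurring spills add, as in mass-discharge aggregation."""
    return sum(impact for p, impact in hazards if random.random() < p)

n = 100_000
draws = [simulate_year() for _ in range(n)]
expected = sum(draws) / n
exceed_1 = sum(1 for d in draws if d > 1.0) / n   # P(concentration > 1 mg/L)

analytic = sum(p * impact for p, impact in hazards)
print(f"simulated E[impact] = {expected:.3f}, analytic = {analytic:.3f}")
print(f"P(impact > 1 mg/L) = {exceed_1:.3f}")
```

Different stakeholder objectives correspond to different statistics of the same simulated draws: a risk-neutral stakeholder reads off the mean, a risk-averse one the exceedance probability of a threshold.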
NASA Technical Reports Server (NTRS)
Croom, D. R.; Dunham, R. E., Jr.
1975-01-01
The effectiveness of a forward-located spoiler, a spline, and span load alteration due to a flap configuration change as trailing-vortex-hazard alleviation methods was investigated. For the transport aircraft model in the normal approach configuration, the results indicate that either a forward-located spoiler or a spline is effective in reducing the trailing-vortex hazard. The results also indicate that large changes in span loading, due to retraction of the outboard flap, may be an effective method of reducing the trailing-vortex hazard.
CalTOX, a multimedia total exposure model for hazardous-waste sites; Part 1, Executive summary
McKone, T.E.
1993-06-01
CalTOX has been developed as a spreadsheet model to assist in health-risk assessments that address contaminated soils and the contamination of adjacent air, surface water, sediments, and ground water. The modeling effort includes a multimedia transport and transformation model, exposure scenario models, and efforts to quantify and reduce uncertainty in multimedia, multiple-pathway exposure models. This report provides an overview of the CalTOX model components, lists the objectives of the model, describes the philosophy under which the model was developed, identifies the chemical classes for which the model can be used, and describes critical sensitivities and uncertainties. The multimedia transport and transformation model is a dynamic model that can be used to assess time-varying concentrations of contaminants introduced initially to soil layers or for contaminants released continuously to air or water. This model assists the user in examining how chemical and landscape properties impact both the ultimate route and quantity of human contact. Multimedia, multiple-pathway exposure models are used in the CalTOX model to estimate average daily potential doses within a human population in the vicinity of a hazardous substance release site. The exposure models encompass twenty-three exposure pathways. The exposure assessment process consists of relating contaminant concentrations in the multimedia model compartments to contaminant concentrations in the media with which a human population has contact (personal air, tap water, foods, household dusts, soils, etc.). The average daily dose is the product of the exposure concentrations in these contact media and an intake or uptake factor that relates the concentrations to the distributions of potential dose within the population.
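The dose computation described in the last sentence, exposure concentration times an intake factor summed over pathways, can be sketched as follows; the concentrations, intake rates and pathway list are hypothetical, not CalTOX defaults:

```python
# Sketch of multiple-pathway dose aggregation: average daily dose is the
# sum over pathways of contact-medium concentration times an intake factor,
# normalized by body weight. All numbers below are hypothetical.

body_weight_kg = 70.0

pathways = {
    # pathway: (concentration in contact medium, daily intake of that medium)
    "tap water (mg/L, L/day)":      (0.004, 2.0),
    "personal air (mg/m3, m3/day)": (0.0002, 20.0),
    "soil ingestion (mg/kg, kg/day)": (1.5, 1e-4),
}

def average_daily_dose():
    """Potential dose in mg per kg body weight per day."""
    total = sum(conc * intake for conc, intake in pathways.values())
    return total / body_weight_kg

add = average_daily_dose()
print(f"average daily dose = {add:.2e} mg/kg-day")
```

In the real model each pathway concentration comes from the multimedia transport compartments rather than being specified directly.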
McKone, T.E.
1994-01-01
Risk assessment is a quantitative evaluation of information on potential health hazards of environmental contaminants and the extent of human exposure to these contaminants. As applied to toxic chemical emissions to air, risk assessment involves four interrelated steps. These are (1) determination of source concentrations or emission characteristics, (2) exposure assessment, (3) toxicity assessment, and (4) risk characterization. These steps can be carried out with assistance from analytical models in order to estimate the potential risk associated with existing and future releases. CAirTOX has been developed as a spreadsheet model to assist in making these types of calculations. CAirTOX follows an approach that has been incorporated into the CalTOX model, which was developed for the California Department of Toxic Substances Control. With CAirTOX, we can address how contaminants released to an air basin can lead to contamination of soil, food, surface water, and sediments. The modeling effort includes a multimedia transport and transformation model, exposure scenario models, and efforts to quantify uncertainty in multimedia, multiple-pathway exposure assessments. The capacity to explicitly address uncertainty has been incorporated into the model in two ways. First, the spreadsheet form of the model makes it compatible with Monte-Carlo add-on programs that are available for uncertainty analysis. Second, all model inputs are specified in terms of an arithmetic mean and coefficient of variation so that uncertainty analyses can be carried out.
Advances in Landslide Hazard Forecasting: Evaluation of Global and Regional Modeling Approach
NASA Technical Reports Server (NTRS)
Kirschbaum, Dalia B.; Adler, Robert; Hone, Yang; Kumar, Sujay; Peters-Lidard, Christa; Lerner-Lam, Arthur
2010-01-01
A prototype global satellite-based landslide hazard algorithm has been developed to identify areas that exhibit a high potential for landslide activity by combining a calculation of landslide susceptibility with satellite-derived rainfall estimates. A recent evaluation of this algorithm framework found that while this tool represents an important first step in larger-scale landslide forecasting efforts, it requires several modifications before it can be fully realized as an operational tool. The evaluation finds that landslide forecasting may be more feasible at a regional scale. This study draws upon the recommendations of that prior work to develop a new approach for considering landslide susceptibility and forecasting at the regional scale. The case study uses a database of landslides triggered by Hurricane Mitch in 1998 over four countries in Central America: Guatemala, Honduras, El Salvador and Nicaragua. A regional susceptibility map is calculated from satellite and surface datasets using a statistical methodology. The susceptibility map is tested with a regional rainfall intensity-duration triggering relationship and results are compared to the global algorithm framework for the Hurricane Mitch event. The statistical results suggest that this regional investigation provides one plausible way to approach some of the data and resolution issues identified in the global assessment, providing more realistic landslide forecasts for this case study. Evaluation of landslide hazards for this extreme event helps to identify several potential improvements of the algorithm framework, but also highlights several remaining challenges for the algorithm assessment, transferability and performance accuracy. Evaluation challenges include representation errors from comparing susceptibility maps of different spatial resolutions, biases in event-based landslide inventory data, and limited nonlandslide event data for more comprehensive evaluation. Additional factors that may improve
Implementation of NGA-West2 ground motion models in the 2014 U.S. National Seismic Hazard Maps
Rezaeian, Sanaz; Petersen, Mark D.; Moschetti, Morgan P.; Powers, Peter; Harmsen, Stephen C.; Frankel, Arthur D.
2014-01-01
The U.S. National Seismic Hazard Maps (NSHMs) have been an important component of seismic design regulations in the United States for the past several decades. These maps present earthquake ground shaking intensities at specified probabilities of being exceeded over a 50-year time period. The previous version of the NSHMs was developed in 2008; during 2012 and 2013, scientists at the U.S. Geological Survey have been updating the maps based on their assessment of the “best available science,” resulting in the 2014 NSHMs. The update includes modifications to the seismic source models and the ground motion models (GMMs) for sites across the conterminous United States. This paper focuses on updates in the Western United States (WUS) due to the use of new GMMs for shallow crustal earthquakes in active tectonic regions developed by the Next Generation Attenuation (NGA-West2) project. Individual GMMs, their weighted combination, and their impact on the hazard maps relative to 2008 are discussed. In general, the combined effects of lower medians and increased standard deviations in the new GMMs have caused only small changes, within 5–20%, in the probabilistic ground motions for most sites across the WUS compared to the 2008 NSHMs.
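The phrase "ground shaking intensities at specified probabilities of being exceeded over a 50-year time period" maps to a simple Poisson relation between annual exceedance rate and exceedance probability. A quick sketch of this standard seismic hazard arithmetic (not code from the NSHM project):

```python
import math

# Under a Poisson occurrence model, the annual exceedance rate and the
# probability of exceedance in T years are related by P = 1 - exp(-rate * T).

def exceedance_prob(annual_rate, years):
    return 1.0 - math.exp(-annual_rate * years)

def return_period(prob, years):
    """Mean return period implied by exceedance probability `prob` in `years`."""
    return years / -math.log(1.0 - prob)

print(f"2% in 50 yr  -> return period ~{return_period(0.02, 50):.0f} yr")
print(f"10% in 50 yr -> return period ~{return_period(0.10, 50):.0f} yr")
```

These recover the familiar ~2475-year and ~475-year return periods associated with common design ground motions.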
NASA Astrophysics Data System (ADS)
Hébert, H.; Schindelé, F.; Heinrich, P.; Piatanesi, A.; Okal, E. A.
In French Polynesia, the Marquesas Islands are particularly prone to amplification of tsunamis generated at the Pacific Rim, due to relatively mild submarine slopes and to large open bays not protected by any coral reef. These islands are also threatened by local tsunamis, as shown by the recent 1999 event on Fatu Hiva. On September 13, 1999, Omoa Bay was struck by 2 to 5 m high water waves: several buildings, among them the school, were flooded and destroyed but no lives were lost. Observations gathered during a post-event survey revealed the recent collapse into the sea of a 300x300 m, at least 20-m thick, cliff located 5 km southeast of Omoa. This cliff failure most certainly triggered the tsunami waves since the cliff was reported intact 45 min earlier. We simulate the tsunami generation due to a subaerial landslide, using a finite-difference model assimilating the landslide to a flow of granular material. Numerical modeling shows that a 0.0024-km3 landslide located in the presumed source area accounts well for the tsunami waves reported in Omoa Bay. We show that the striking amplification observed in Omoa Bay is related to the trapping of waves due to the shallow submarine shelf surrounding the island. These results stress the local tsunami hazard that should be taken into account in the natural hazard assessment and mitigation of the area, where historical cliff collapses can be observed and may happen again.
Atmospheric electrical modeling in support of the NASA F106 Storm Hazards Project
NASA Technical Reports Server (NTRS)
Helsdon, J. H.
1986-01-01
With composite (non-metallic) materials and microelectronics becoming more prevalent in the construction of both military and commercial aircraft, control systems have become more susceptible to damage or failure from electromagnetic transients. One source of such transients is the lightning discharge. In order to study the effects of the lightning discharge on the vital components of an aircraft, NASA Langley Research Center has undertaken a Storm Hazards Program in which a specially instrumented F106B jet aircraft is flown into active thunderstorms with the intention of being struck by lightning. One of the specific purposes of the program is to quantify the environmental conditions which are conducive to aircraft lightning strikes.
NASA Astrophysics Data System (ADS)
Rotondi, R.; Varini, E.
2006-09-01
We consider point processes defined on the space-time domain which model physical processes characterized qualitatively by the gradual increase over time in some energy until a threshold is reached, after which, an event causing the loss of energy occurs. The risk function will, therefore, increase piecewise with sudden drops in correspondence to each event. This kind of behaviour is described by Reid's theory of elastic rebound in the earthquake generating process where the quantity that is accumulated is the strain energy or stress due to the relative movement of tectonic plates. The complexity and the intrinsic randomness of the phenomenon call for probabilistic models; in particular the stochastic translation of Reid's theory is given by stress release models. In this article we use such models to assess the time-dependent seismic hazard of the seismogenic zone of the Corinthos Gulf. For each event we consider the occurrence time and the magnitude, which is modelled by a probability distribution depending on the stress level present in the region at any instant. Hence we are dealing here with a marked point process. We perform the Bayesian analysis of this model by applying the stochastic simulation methods based on the generation of Markov chains, the so called Markov chain Monte Carlo (MCMC) methods, which allow one to reconcile the model's complexity with the computational burden of the inferential procedure. Stress release and Poisson models are compared on the basis of the Bayes factor.
NASA Astrophysics Data System (ADS)
Miller, Craig A.; Williams-Jones, Glyn
2016-06-01
A new 3D geophysical model of the Mt Tongariro Volcanic Massif (TgVM), New Zealand, provides a high resolution view of the volcano's internal structure and hydrothermal system, from which we derive implications for volcanic hazards. Geologically constrained 3D inversions of potential field data provide a greater level of insight into the volcanic structure than is possible from unconstrained models. A complex region of gravity highs and lows (± 6 mGal) is set within a broader, ~ 20 mGal gravity low. A magnetic high (1300 nT) is associated with Mt Ngauruhoe, while a substantial, thick, demagnetised area occurs to the north, coincident with a gravity low and interpreted as representing the hydrothermal system. The hydrothermal system is constrained to the west by major faults, interpreted as an impermeable barrier to fluid migration, and extends to basement depth. These faults are considered low probability areas for future eruption sites, as there is little to indicate they have acted as magmatic pathways. Where the hydrothermal system coincides with steep topographic slopes, an increased likelihood of landslides is present, and the newly delineated hydrothermal system maps the area most likely to have phreatic eruptions. Such eruptions, while small on a global scale, are important hazards at the TgVM as it is a popular hiking area with hundreds of visitors per day in close proximity to eruption sites. The model shows that the volume of volcanic material erupted over the lifespan of the TgVM is five to six times greater than previous estimates, suggesting a higher rate of magma supply, in line with global rates of andesite production. We suggest that our model of physical property distribution can be used to provide constraints for other models of dynamic geophysical processes occurring at the TgVM.
Kalman-predictive-proportional-integral-derivative (KPPID)
Fluerasu, A.; Sutton, M.
2004-12-17
With third generation synchrotron X-ray sources, it is possible to acquire detailed structural information about the system under study with time resolution orders of magnitude faster than was possible a few years ago. These advances have generated many new challenges for changing and controlling the state of the system on very short time scales, in a uniform and controlled manner. For our particular X-ray experiments on crystallization or order-disorder phase transitions in metallic alloys, we need to change the sample temperature by hundreds of degrees as fast as possible while avoiding overshoot or undershoot. To achieve this, we designed and implemented a computer-controlled temperature tracking system which combines standard Proportional-Integral-Derivative (PID) feedback, thermal modeling and finite difference thermal calculations (feedforward), and Kalman filtering of the temperature readings to reduce the noise. The resulting Kalman-Predictive-Proportional-Integral-Derivative (KPPID) algorithm allows us to obtain accurate control, to minimize the response time and to avoid overshoot and undershoot, even in systems with inherently noisy temperature readings and time delays. The KPPID temperature controller was successfully implemented at the Advanced Photon Source at Argonne National Laboratory and was used to perform coherent and time-resolved X-ray diffraction experiments.
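A heavily simplified sketch of the KPPID idea: a scalar Kalman filter denoises the temperature readings before a standard PID loop acts on them. The plant model, noise variances and gains are invented, and the feedforward (finite-difference thermal) term of the real controller is omitted:

```python
import random

# Toy KPPID-style loop: a scalar Kalman filter smooths noisy temperature
# readings, and a PID controller acts on the filtered estimate. The
# first-order plant, noise variances and gains are all invented.

random.seed(1)

def run(kp=2.0, ki=0.5, kd=0.1, setpoint=300.0, steps=200, dt=0.1):
    temp = 20.0             # true (hidden) plant temperature
    est, var = 20.0, 1.0    # Kalman state estimate and its variance
    q, r = 0.5, 4.0         # process / measurement noise variances
    integ, prev_err = 0.0, setpoint - est
    for _ in range(steps):
        # noisy reading, then a scalar Kalman predict/update
        z = temp + random.gauss(0.0, r ** 0.5)
        var += q
        gain = var / (var + r)
        est += gain * (z - est)
        var *= 1.0 - gain
        # PID acts on the filtered estimate, not the raw reading
        err = setpoint - est
        integ += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        power = kp * err + ki * integ + kd * deriv
        # first-order plant: heating power vs. Newtonian cooling to ambient
        temp += dt * (0.5 * power - 0.1 * (temp - 20.0))
    return temp

final = run()
print(f"final temperature after 20 s: {final:.1f} (setpoint 300.0)")
```

Filtering before differentiating matters most for the D term, which would otherwise amplify the measurement noise directly into the control signal.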
Sun, Yan; Lang, Maoxiang; Wang, Danzhu
2016-01-01
The transportation of hazardous materials is always accompanied by considerable risk that will impact public and environment security. As an efficient and reliable transportation organization, a multimodal service should participate in the transportation of hazardous materials. In this study, we focus on transporting hazardous materials through the multimodal service network and explore the hazardous materials multimodal routing problem from the operational level of network planning. To formulate this problem more practicably, minimizing the total generalized costs of transporting the hazardous materials and the social risk along the planned routes are set as the optimization objectives. Meanwhile, the following formulation characteristics will be comprehensively modelled: (1) specific customer demands; (2) multiple hazardous material flows; (3) capacitated schedule-based rail service and uncapacitated time-flexible road service; and (4) environmental risk constraint. A bi-objective mixed integer nonlinear programming model is first built to formulate the routing problem that combines the formulation characteristics above. Then linear reformations are developed to linearize and improve the initial model so that it can be effectively solved by exact solution algorithms on standard mathematical programming software. By utilizing the normalized weighted sum method, we can generate the Pareto solutions to the bi-objective optimization problem for a specific case. Finally, a large-scale empirical case study from the Beijing-Tianjin-Hebei Region in China is presented to demonstrate the feasibility of the proposed methods in dealing with the practical problem. Various scenarios are also discussed in the case study. PMID:27483294
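The normalized weighted sum method mentioned above can be illustrated on a toy instance; the candidate routes and their (cost, risk) values are invented:

```python
# Toy illustration of the normalized weighted sum method for a bi-objective
# routing trade-off between generalized cost and social risk. The candidate
# routes and their (cost, risk) values are invented.

routes = {
    "rail-heavy": (120.0, 3.0),
    "mixed":      (150.0, 2.0),
    "road-only":  (200.0, 1.2),
}

def best_route(weight_cost):
    """Minimize weight*cost_norm + (1-weight)*risk_norm over candidates."""
    costs = [c for c, r in routes.values()]
    risks = [r for c, r in routes.values()]
    def score(cr):
        c, r = cr
        cn = (c - min(costs)) / (max(costs) - min(costs))
        rn = (r - min(risks)) / (max(risks) - min(risks))
        return weight_cost * cn + (1.0 - weight_cost) * rn
    return min(routes, key=lambda name: score(routes[name]))

for w in (0.0, 0.5, 1.0):
    print(f"weight on cost = {w}: choose {best_route(w)}")
```

Sweeping the weight traces out Pareto-optimal choices; normalizing each objective first keeps the weights comparable despite the different units of cost and risk.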
ERIC Educational Resources Information Center
Vandas, Steve
1998-01-01
Focuses on hurricanes and tsunamis and uses these topics to address other parts of the science curriculum. In addition to a discussion on beach erosion, a poster is provided that depicts these natural hazards that threaten coastlines. (DDR)
Rusyn, Ivan; Sedykh, Alexander; Guyton, Kathryn Z.; Tropsha, Alexander
2012-01-01
Quantitative structure-activity relationship (QSAR) models are widely used for in silico prediction of in vivo toxicity of drug candidates or environmental chemicals, adding value to candidate selection in drug development or in a search for less hazardous and more sustainable alternatives for chemicals in commerce. The development of traditional QSAR models is enabled by numerical descriptors representing the inherent chemical properties that can be easily defined for any number of molecules; however, traditional QSAR models often have limited predictive power due to the lack of data and complexity of in vivo endpoints. Although it has been indeed difficult to obtain experimentally derived toxicity data on a large number of chemicals in the past, the results of quantitative in vitro screening of thousands of environmental chemicals in hundreds of experimental systems are now available and continue to accumulate. In addition, publicly accessible toxicogenomics data collected on hundreds of chemicals provide another dimension of molecular information that is potentially useful for predictive toxicity modeling. These new characteristics of molecular bioactivity arising from short-term biological assays, i.e., in vitro screening and/or in vivo toxicogenomics data can now be exploited in combination with chemical structural information to generate hybrid QSAR–like quantitative models to predict human toxicity and carcinogenicity. Using several case studies, we illustrate the benefits of a hybrid modeling approach, namely improvements in the accuracy of models, enhanced interpretation of the most predictive features, and expanded applicability domain for wider chemical space coverage. PMID:22387746
NASA Astrophysics Data System (ADS)
Selva, J.
2012-12-01
Multi-risk approaches have recently been proposed to assess and compare different risks in the same target area. The key points of multi-risk assessment are the development of homogeneous risk definitions and the treatment of risk interaction. Neglecting interaction may lead to significant biases and thus to an erroneous risk hierarchization, which is one of the primary outputs of risk assessments for decision makers. Within the framework of the Italian project "ByMuR - Bayesian Multi-Risk assessment", a formal model (ByMuR model) to assess multi-risk for a target area is under development, aiming (i) to perform multi-risk analyses treating interaction between different hazardous phenomena, accounting for possible effects of interaction at the hazard, vulnerability and exposure levels, and (ii) to explicitly account for all uncertainties (aleatory and epistemic) through a Bayesian approach, allowing a meaningful comparison among different risks. The model is meant to be general, but it is targeted at the assessment of volcanic, seismic and tsunami risks for the city of Naples (Italy). Here, the preliminary development of the ByMuR model is presented. The applicability of the methodology is demonstrated through illustrative examples, in which the effects of uncertainties and the bias in single-risk estimation induced by the assumption of independence among risks are explicitly assessed. An extensive application of this methodology at regional and sub-regional scale would make it possible to identify where a given interaction has significant effects in long-term risk assessments, and thus when multi-risk analyses should be considered in order to provide unbiased risk estimations.
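The bias induced by assuming independence among interacting hazards can be illustrated with back-of-the-envelope arithmetic. Every probability below is hypothetical, and the calculation is far simpler than the ByMuR model itself; the point is only that ignoring triggering (a local earthquake raising the tsunami probability) systematically underestimates total risk.

```python
# Hypothetical annual probabilities for two interacting hazards:
p_eq = 0.01            # earthquake
p_ts_given_eq = 0.30   # tsunami triggered by that earthquake
p_ts_alone = 0.002     # tsunami from remote sources only
v_eq, v_ts = 0.20, 0.50  # damage probability given each event

# Single-risk treatment (independence): tsunami probability ignores triggering.
p_damage_indep = 1 - (1 - p_eq * v_eq) * (1 - p_ts_alone * v_ts)

# Interaction-aware treatment: local earthquakes can also trigger a tsunami.
p_ts_total = p_ts_alone + p_eq * p_ts_given_eq
p_damage_inter = 1 - (1 - p_eq * v_eq) * (1 - p_ts_total * v_ts)

# Positive bias: risk is underestimated when interaction is neglected.
bias = p_damage_inter - p_damage_indep
```

With these numbers the interaction term is of the same order as the remote-source tsunami risk itself, so the hierarchization of hazards could plausibly change.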
Estimation of lithofacies proportions using well and well test data
Hu, L.Y.; Blanc, G.; Noetinger, B.
1996-12-31
A crucial step of the commonly used geostatistical methods for modeling heterogeneous reservoirs (e.g. the sequential indicator simulation and the truncated Gaussian functions) is the estimation of the lithofacies local proportion (or probability density) functions. Well-test derived permeabilities show good correlation with lithofacies proportions around wells. Integrating well and well-test data in estimating lithofacies proportions could permit the building of more realistic models of reservoir heterogeneity. However, this integration is difficult because of the different natures and measurement scales of these two types of data. This paper presents a two-step approach to integrating well and well-test data into heterogeneous reservoir modeling. First, lithofacies proportions in well-test investigation areas are estimated using a new kriging algorithm called KISCA. KISCA consists of jointly kriging the proportions of all lithofacies in a well-test investigation area so that the corresponding well-test derived permeability is respected through a weighted power averaging of lithofacies permeabilities. For multiple well-tests, an iterative process is used in KISCA to account for their interaction. After this, the estimated proportions are combined with lithofacies indicators at wells for estimating proportion (or probability density) functions over the entire reservoir field using a classical kriging method. Some numerical examples were considered to test the proposed method for estimating lithofacies proportions. In addition, a synthetic lithofacies reservoir model was generated and a well-test simulation was performed. The comparison between the experimental and estimated proportions in the well-test investigation area demonstrates the validity of the proposed method.
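The weighted power averaging of lithofacies permeabilities mentioned above can be sketched as follows: k_eff = (Σ p_i k_i^ω)^(1/ω), where the exponent ω interpolates between the harmonic mean (ω = -1, flow in series) and the arithmetic mean (ω = +1, flow in parallel). The facies proportions and permeabilities below are hypothetical, and the kriging of proportions itself is not reproduced here.

```python
import math

def power_average_permeability(proportions, permeabilities, omega):
    """Weighted power average of lithofacies permeabilities.

    proportions   : lithofacies proportions in the well-test area (sum to 1)
    permeabilities: permeability of each lithofacies (same units as output)
    omega         : averaging exponent; -1 = harmonic, +1 = arithmetic
    """
    assert abs(sum(proportions) - 1.0) < 1e-9
    if omega == 0:  # geometric mean, the omega -> 0 limit
        return math.exp(sum(p * math.log(k) for p, k in zip(proportions, permeabilities)))
    return sum(p * k ** omega for p, k in zip(proportions, permeabilities)) ** (1.0 / omega)

# Two facies: 30% shale (1 mD) and 70% sand (500 mD), hypothetical values.
k_arith = power_average_permeability([0.3, 0.7], [1.0, 500.0], 1.0)
k_harm = power_average_permeability([0.3, 0.7], [1.0, 500.0], -1.0)
```

The spread between `k_harm` and `k_arith` (here two orders of magnitude) is what makes the choice of ω, and hence the inversion from well-test permeability back to proportions, so sensitive.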
NASA Astrophysics Data System (ADS)
Matiella Novak, M. Alexandra
Volcanic ash clouds in the upper atmosphere (>10 km) present a significant hazard to the aviation community and in some cases cause near-disastrous situations for aircraft that inadvertently encounter them. The two most commonly used techniques for mitigating hazards to aircraft from drifting volcanic clouds are (1) using data from satellite observations and (2) the forecasting of dispersion and trajectories with numerical models. This dissertation aims to aid in the mitigation of this hazard by using Moderate Resolution Imaging Spectroradiometer (MODIS) and Advanced Very High Resolution Radiometer (AVHRR) infrared (IR) satellite data to quantitatively analyze and constrain the uncertainties in the PUFF volcanic ash transport model. Furthermore, this dissertation has experimented with the viability of combining IR data with the PUFF model to increase the model's reliability. Comparing IR satellite data with forward transport models provides valuable information concerning the uncertainty and sensitivity of the transport models. A study analyzing the viability of combining satellite-based information with the PUFF model was also done. Factors controlling the cloud-shape evolution, such as the horizontal dispersion coefficient, the vertical distribution of particles, the height of the cloud, and the location of the cloud were all updated based on observations from satellite data in an attempt to increase the reliability of the simulations. Comparing center of mass locations--calculated from satellite data--to HYSPLIT trajectory simulations provides insight into the vertical distribution of the cloud. A case study of the May 10, 2003, Anatahan Volcano eruption was undertaken to assess methods of calculating errors in PUFF simulations with respect to the transport and dispersion of the erupted cloud. An analysis of the factors controlling the cloud-shape evolution of the cloud in the model was also completed and compared to the shape evolution of the cloud observed in the
NASA Astrophysics Data System (ADS)
Mamy Rakotoarisoa, Mahefa; Fleurant, Cyril; Taibi, Nuscia; Razakamanana, Théodore
2016-04-01
Hydrological risks, especially floods, are recurrent on the Fiherenana watershed in southwest Madagascar. The city of Toliara, located at the outlet of the river basin, is subjected each year to hurricane hazards and floods. The stakes are of major importance in this part of the island. This study begins with an analysis of the hazard, collecting all existing hydro-climatic data on the catchment. It then seeks to determine trends, despite the significant lack of data, using simple statistical models (decomposition of time series). Two approaches are then conducted to assess the vulnerability of the city of Toliara and the surrounding villages. The first is a static approach, based on field surveys and the use of GIS. The second uses a multi-agent-based simulation model. The first step is the mapping of a vulnerability index obtained by combining several static criteria. This is a microscale indicator (the scale used is the individual house). For each house, several vulnerability criteria are considered, such as the potential water depth, the flow rate, and the architectural typology of the building. For the second part, agent-based simulations are used in order to evaluate the degree of vulnerability of homes to flooding. Agents are individual entities to which behaviours can be assigned in order to simulate a given phenomenon. The aim is not to assign a criterion to the house as a physical building, such as its architectural typology or its strength; rather, the model estimates the chances of the occupants of the house escaping a catastrophic flood. For this purpose, various settings and scenarios are compared. Some scenarios take into account the effect of certain decisions made by the responsible entities (information and awareness campaigns for the villagers, for example). The simulation consists of two essential parts taking place simultaneously in time: simulation of the rise of water and the flow using
NASA Astrophysics Data System (ADS)
Bebbington, Mark S.; Cronin, Shane J.
2011-01-01
The Auckland Volcanic Field (AVF) with 49 eruptive centres in the last c. 250 ka presents many challenges to our understanding of distributed volcanic field construction and evolution. We re-examine the age constraints within the AVF and perform a correlation exercise matching the well-dated record of tephras from cores distributed throughout the field to the most likely source volcanoes, using thickness and location information and a simple attenuation model. Combining this augmented age information with known stratigraphic constraints, we produce a new age-order algorithm for the field, with errors incorporated using a Monte Carlo procedure. Analysis of the new age model discounts earlier appreciations of spatio-temporal clustering in the AVF. Instead the spatial and temporal aspects appear independent; hence the location of the last eruption provides no information about the next location. The temporal hazard intensity in the field has been highly variable, with over 63% of its centres formed in a high-intensity period between 40 and 20 ka. Another, smaller, high-intensity period may have occurred at the field onset, while the latest event, at 504 ± 5 years B.P., erupted 50% of the entire field's volume. This emphasises the lack of steady-state behaviour that characterises the AVF, which may also be the case in longer-lived fields with a lower dating resolution. Spatial hazard intensity in the AVF under the new age model shows a strong NE-SW structural control of volcanism that may reflect deep-seated crustal or subduction zone processes and matches the orientation of the Taupo Volcanic Zone to the south.
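The Monte Carlo treatment of dating errors and stratigraphic constraints described above can be sketched in a few lines: sample each centre's age from its dating uncertainty, discard draws that violate a stratigraphic constraint, and tally the resulting eruption orders. The centres, ages, errors and constraint below are hypothetical stand-ins, not the actual AVF data or algorithm.

```python
import random

# Hypothetical centres: (name, mean age in ka, 1-sigma dating error in ka)
centres = [("A", 30.0, 3.0), ("B", 28.0, 4.0), ("C", 25.0, 2.0)]

# Hypothetical stratigraphic constraint: A is observed to underlie C.
def consistent(ages):
    return ages["A"] > ages["C"]

random.seed(1)
order_counts = {}
trials = 0
while trials < 5000:
    ages = {name: random.gauss(mean, sd) for name, mean, sd in centres}
    if not consistent(ages):
        continue  # reject draws violating stratigraphy
    trials += 1
    order = tuple(sorted(ages, key=ages.get, reverse=True))  # oldest first
    order_counts[order] = order_counts.get(order, 0) + 1

most_likely = max(order_counts, key=order_counts.get)
```

The tally gives a probability for each age ordering rather than a single best order, which is what allows the temporal hazard intensity and its error bars to be propagated downstream.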
Yuhas, J.A.; Taylor, R.K.; Dutcher, D.D.
1996-12-31
Section 112(r) of the Clean Air Act calls for the promulgation of new rules to prevent and minimize the consequences of accidental releases of chemicals. The rules will require the development of Risk Management Plans (RMPs) and ambient air consequence analyses of potential releases. A series of dense gas dispersion, puff release, and accidental release models are being introduced to meet the demands of the new regulatory requirements for various release scenarios. Studies to date have shown that no single model outperforms all others when tested against field experiment data. Also, little has been done to assess the applicability of these models to actual modeling scenarios and to the Section 112(r) modeling requirements. This paper assesses the applicability of current guideline models to the Section 112(r) requirements and points out areas where the guideline models cannot meet these requirements. Additional models are presented as potential solutions to this problem.
NASA Astrophysics Data System (ADS)
Tappin, David R.
2015-04-01
the resolution necessary to identify the hazard from landslides, particularly along convergent margins where this hazard is the greatest. Multibeam mapping of the deep seabed requires low frequency sound sources that, because of their correspondingly low resolution, cannot produce the detail required to identify the finest scale features. In addition, in most countries there are no repeat surveys that would allow seabed changes to be identified; perhaps only Japan has such data. In the near future, as research budgets shrink and ship time becomes ever more expensive, new strategies will have to be adopted to make the best use of the vessels available. Remote AUV technology is almost certainly the answer, and should be increasingly utilised to map the seabed while the mother ship is better used to carry out other duties, such as sampling or seismic data acquisition. In the deep ocean this will have the advantage of acquiring higher resolution data from high frequency multibeams. This talk presents a number of projects that show the evolution of the use of MBES in mapping submarine landslides since the PNG tsunami. Data from PNG are presented, together with data from Japan, Hawaii and the NE Atlantic. New multibeam acquisition methodologies are also discussed.
Fthenakis, V.M.; Blewitt, D.N.; Hague, W.J.
1995-05-01
OSHA Process Safety Management guidelines suggest that a facility operator investigate and document a plan for installing systems to detect, contain, or mitigate accidental releases if such systems are not already in place. In addition, proposed EPA 112(r) regulations would require such analysis. This paper illustrates how mathematical modeling can aid such an evaluation and describes some recent enhancements of the HGSPRAY model: (1) adding algorithms for modeling NH{sub 3} and LNG mitigation; (2) modeling spraying of releases with fire water monitors encircling the point of release; (3) combining wind tunnel modeling with mathematical modeling; and (4) linking HGSPRAY and HEGADAS. Case studies are presented as examples of how HGSPRAY can aid the design of water spray systems for mitigation of toxic gases (e.g., HF, NH{sub 3}) or dilution/dispersion of flammable vapors (e.g., LNG).
A plane source model for seismic hazard analysis using geometrical source parameters
Suen, S.J.
1988-01-01
A plane source model for seismic risk analysis, consistent with existing theories of earthquake mechanism and characteristics, is developed. The model considers earthquakes to occur as a rupture plane develops and extends along geologic faults. Three types of idealized source models are used for modeling all conceivable seismic sources. The sensitivity of the seismic risk to several influencing factors is studied. The generalized renewal process is introduced for modeling the future occurrence of earthquakes; it incorporates the nonstationarity of earthquake occurrence and provides information in terms of a conditional probability based on the times of previous earthquakes. A site in downtown San Francisco is analyzed in detail to demonstrate the applicability of the model developed. The risk-based isoseismal contours corresponding to a specified annual exceedance probability are discussed, and a case study using Taiwan earthquake data is also presented.
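The conditional-probability output of a renewal model can be sketched with a Weibull recurrence distribution, whose shape parameter above 1 captures the nonstationarity (hazard increasing with elapsed quiet time). The recurrence parameters below are hypothetical, not fitted to San Francisco or Taiwan data.

```python
import math

def weibull_cdf(t, shape, scale):
    """CDF of the Weibull inter-event time distribution."""
    return 1.0 - math.exp(-((t / scale) ** shape))

def conditional_probability(t_elapsed, dt, shape, scale):
    """P(event within the next dt years | quiet for t_elapsed years)
    under a Weibull renewal model (shape > 1 => hazard grows with time)."""
    F = weibull_cdf
    return (F(t_elapsed + dt, shape, scale) - F(t_elapsed, shape, scale)) / (
        1.0 - F(t_elapsed, shape, scale))

# Hypothetical fault: characteristic recurrence scale 100 yr, shape 2.
p_early = conditional_probability(10.0, 30.0, 2.0, 100.0)   # shortly after last event
p_late = conditional_probability(150.0, 30.0, 2.0, 100.0)   # long quiet period
```

The contrast between `p_early` and `p_late` is exactly the information a stationary Poisson model cannot provide, since its conditional probability would be identical in both cases.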
NASA Astrophysics Data System (ADS)
Wu, Z.; Li, L.; Liu, G.; Jiang, C.; Ma, H.
2010-12-01
To the southwest of WFSD-I and WFSD-II lies the southern part of the Longmenshan fault, which has been quiet since the May 12, 2008, Wenchuan earthquake, which ruptured the middle and northern parts of the Longmenshan fault zone. The seismic hazard in this region is one of the concerns not only of the WFSD project but also of regional sustainability. This presentation discusses three major problems related to the seismic hazard of this fault segment: 1) if a major earthquake were to rupture this fault segment, what the ‘scenario rupture’ would look like as it prepares and occurs; 2) based on this concept of a ‘scenario rupture’, how to design the ‘monitoring and modeling for prediction’ system in this region, for the effective constraint of geodynamic models of earthquake preparation, the effective monitoring of potentially pre-seismic changes in geophysical fields, and the effective testing of predictive models and/or algorithms; and 3) what the potential contribution of the WFSD project will be, in both the long-term and the short-term sense, to the monitoring and modeling of seismic hazard in this region. In considering these three questions, lessons and experiences from the Wenchuan earthquake play an important role, and the relation between the Xianshuihe fault and the Longmenshan fault is one of the critical issues under consideration. Considering the state of the art of earthquake science and social needs, the monitoring and modeling endeavour should address different time scales, considering both scientific and decision-making issues. Drawing on the lessons and experiences of previously conducted earthquake prediction experiment sites, we propose the concept of ‘seismological engineering’ (different from either ‘earthquake engineering’ or ‘engineering seismology’), dealing with the design of an operational multi-disciplinary observation system oriented at the monitoring and
Quantifying the Average of the Time-varying Hazard Ratio via a Class of Transformations
CHEN, QINGXIA; ZENG, DONGLIN; IBRAHIM, JOSEPH G.; CHEN, MING-HUI; PAN, ZHIYING; XUE, XIAODONG
2014-01-01
The hazard ratio derived from the Cox model is a commonly used summary statistic to quantify a treatment effect with a time-to-event outcome. The proportional hazards assumption of the Cox model, however, is frequently violated in practice and many alternative models have been proposed in the statistical literature. Unfortunately, the regression coefficients obtained from different models are often not directly comparable. To overcome this problem, we propose a family of weighted hazard ratio measures that are based on the marginal survival curves or marginal hazard functions, and can be estimated using readily available output from various modeling approaches. The proposed transformation family includes the transformations considered by [18] as special cases. In addition, we propose a novel estimate of the weighted hazard ratio based on the maximum departure from the null hypothesis within the transformation family, and develop a Kolmogorov–Smirnov type of test statistic based on this estimate. Simulation studies show that when the hazard functions of two groups either converge or diverge, this new estimate yields a more powerful test than tests based on the individual transformations recommended in [18], with a similar magnitude of power loss when the hazards cross. The proposed estimates and test statistics are applied to a colorectal cancer clinical trial. PMID:25073864
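A weighted hazard ratio of the kind described can be sketched numerically: average the log hazard ratio over time with a weight function built from a marginal survival curve, then exponentiate. The hazard functions and weight below are hypothetical, chosen so the hazards are deliberately non-proportional; this is a toy geometric-mean version, not the paper's exact transformation family or its Kolmogorov-Smirnov test.

```python
import math

# Hypothetical hazards for two arms; h1/h0 varies with time (non-proportional).
def h0(t):  # control-arm hazard, constant
    return 0.10

def h1(t):  # treatment-arm hazard, rising over time
    return 0.05 + 0.01 * t

# Control-arm marginal survival, used here as the weight function w(t).
def s0(t):
    return math.exp(-0.10 * t)

# Weighted (geometric-mean) hazard ratio on a grid over (0, 10].
ts = [0.1 * i for i in range(1, 101)]
num = sum(s0(t) * math.log(h1(t) / h0(t)) for t in ts)
den = sum(s0(t) for t in ts)
whr = math.exp(num / den)
```

Because the early follow-up (where the treatment hazard is lower) receives the most weight, `whr` lands below 1 even though the hazards cross later, illustrating why the choice of weight matters when proportionality fails.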
NASA Astrophysics Data System (ADS)
Meletti, C.
2013-05-01
In 2003, a large national project for updating the seismic hazard map and the seismic zoning in Italy started, according to the rules fixed by an Ordinance of the Italian Prime Minister. New input elements for probabilistic seismic hazard assessment were compiled: the earthquake catalogue, the seismogenic zonation, the catalogue completeness, and a set of new attenuation relationships. The map of expected PGA on rock soil conditions with 10% probability of exceedance is the new reference seismic hazard map for Italy (http://zonesismiche.mi.ingv.it). Subsequently, nine further probabilities of exceedance, the uniform hazard spectra up to 2 seconds, and the disaggregation of the PGA were also released. A comprehensive seismic hazard model that fully describes the seismic hazard in Italy was then available, accessible through a webGIS application (http://esse1-gis.mi.ingv.it/en.php). This detailed information made it possible to change the approach for evaluating the proper seismic action for design: from a zone-dependent approach (in Italy there were 4 seismic zones, each one with a single design spectrum) to a site-dependent approach, in which the design spectrum is defined at each site of a grid of about 11000 points covering the whole national territory. The new building code became mandatory only after the 6 April 2009 L'Aquila earthquake, the first strong event in Italy after the release of the seismic hazard map. The large number of recordings and the values of the experienced accelerations prompted comparisons between the recorded spectra and the spectra defined in the seismic codes. Even if such comparisons could be robust only after several consecutive 50-year periods of observation, and in a probabilistic approach no single observation can validate or invalidate the hazard estimate, some of the comparisons that can be undertaken between the observed ground motions and the hazard model used for the seismic code have been performed and have shown that the
Thermal stress modeling of in situ vitrified barriers for hazardous waste containment
Garnich, M.R.
1991-09-01
Development of In Situ Vitrification technology has included the concept of subsurface barriers. Structural integrity of vitrified soil bodies is important to barrier performance. Analytical methods are under development for predicting thermal-structural performance during melt cooldown. A thermal modeling capability has been developed for predicting the cooling transient of subsurface molten masses using the finite element method. A computationally efficient "instant freezing" model was demonstrated to give qualitative agreement for predicted stresses with a more sophisticated creep model. A method for predicting stress relief due to cracking, as a preliminary step to predicting crack densities, has been demonstrated. 8 refs., 10 figs.
AIR QUALITY MODELING OF HAZARDOUS POLLUTANTS: CURRENT STATUS AND FUTURE DIRECTIONS
The paper presents a review of current air toxics modeling applications and discusses possible advanced approaches. Many applications require the ability to predict hot spots from industrial sources or large roadways that are needed for community health and Environmental Justice...
Models Show Subsurface Cracking May Complicate Groundwater Cleanup at Hazardous Waste Sites
Chlorinated solvents like trichloroethylene contaminate groundwater at numerous sites nationwide. This modeling study, conducted at the Air Force Institute of Technology, shows that subsurface cracks, either natural or due to the presence of the contaminant itself, may result in...
Two state model for a constant disease hazard in paratuberculosis (and other bovine diseases).
Louzoun, Yoram; Mitchell, Rebecca; Behar, Hilla; Schukken, Ynte
2015-01-01
Many diseases are characterized by a long and varying sub-clinical period. Two main mechanisms can explain such periods: a slow progress toward disease or a sudden transition from a healthy state to a disease state induced by internal or external events. Here we survey epidemiological features of the amount of bacteria shed during Mycobacterium avium paratuberculosis (MAP) infection to test which of these two models, slow progression or sudden transition (or a combination of the two), better explains the transition from intermittent and low shedding to high shedding. Often, but not always, high shedding is associated with the occurrence of clinical signs. In the case of MAP, the clinical signs include diarrhea, low milk production, poor fertility and eventually emaciation and death. We propose a generic model containing bacterial growth, immune control and fluctuations. This proposed generic model can represent the two hypothesized types of transitions in different parameter regimes. The results show that the sudden transition model provides a simpler explanation of the data, but also suffers from some limitations. We discuss the different immunological mechanisms that can explain and support the sudden transition model and the interpretation of each term in the studied model. These conclusions are applicable to a wide variety of diseases, and MAP serves as a good test case based on the large-scale measurements of single-cow longitudinal profiles in this disease. PMID:26092587
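The deterministic skeleton of such a generic model (logistic bacterial growth opposed by a saturating immune response) already produces two stable shedding states separated by an unstable threshold, which is what makes a fluctuation-driven sudden transition possible. All parameter values below are hypothetical, chosen only so that the bistability is visible; they are not fitted to MAP data.

```python
# Deterministic core: logistic growth minus a saturating immune-control term.
def db_dt(b, r=1.0, k_cap=100.0, c=2.0, h=10.0):
    growth = r * b * (1.0 - b / k_cap)   # bacterial growth
    control = c * b / (1.0 + b / h)      # immune control, saturating at high load
    return growth - control

def simulate(b0, dt=0.01, steps=5000):
    """Forward-Euler integration of the bacterial load from b0."""
    b = b0
    for _ in range(steps):
        b += dt * db_dt(b)
    return b

# Two initial loads on either side of the unstable threshold (about 13 units):
low = simulate(5.0)    # settles near 0: low/intermittent shedding
high = simulate(20.0)  # settles near 77: high shedding
# Adding random fluctuations to this skeleton can kick an animal across the
# threshold, producing the sudden transition discussed in the abstract.
```

With these parameters the nonzero equilibria solve r(1 - B/K) = c/(1 + B/h), giving an unstable point near B = 13 and a stable high-shedding state near B = 77, while B = 0 remains stable.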
Estimating crop proportions from remotely sensed data
NASA Technical Reports Server (NTRS)
Feiveson, A. H. (Principal Investigator)
1979-01-01
The classification/pixel-count method for estimating the proportion of wheat in each segment is theoretically biased even if all distributional assumptions are met. Alternative ways to estimate crop proportions are examined and their performance testing is considered. Topics covered include general linear functional estimates, the method of moments, and maximum likelihood estimators.
Proportional Reasoning and the Visually Impaired
ERIC Educational Resources Information Center
Hilton, Geoff; Hilton, Annette; Dole, Shelley L.; Goos, Merrilyn; O'Brien, Mia
2012-01-01
Proportional reasoning is an important aspect of formal thinking that is acquired during the developmental years that approximate the middle years of schooling. Students who fail to acquire sound proportional reasoning often experience difficulties in subjects that require quantitative thinking, such as science, technology, engineering, and…
34 CFR 81.32 - Proportionality.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 34 Education 1 2010-07-01 2010-07-01 false Proportionality. 81.32 Section 81.32 Education Office of the Secretary, Department of Education GENERAL EDUCATION PROVISIONS ACT-ENFORCEMENT Hearings for Recovery of Funds § 81.32 Proportionality. (a)(1) A recipient that made an unallowable expenditure...
Prospective Elementary School Teachers' Proportional Reasoning
ERIC Educational Resources Information Center
Valverde, Gabriela; Castro, Encarnación
2012-01-01
We present the findings of a study on prospective elementary teachers' proportional reasoning. After describing some of the teachers' performance in solving multiplicative structure problems that involve ratios and relations of direct proportionality between quantities, we were able to establish classifications of their answers according to…
CCSSM Challenge: Graphing Ratio and Proportion
ERIC Educational Resources Information Center
Kastberg, Signe E.; D'Ambrosio, Beatriz S.; Lynch-Davis, Kathleen; Mintos, Alexia; Krawczyk, Kathryn
2013-01-01
A renewed emphasis was placed on ratio and proportional reasoning in the middle grades in the Common Core State Standards for Mathematics (CCSSM). The expectation for students includes the ability to not only compute and then compare and interpret the results of computations in context but also interpret ratios and proportions as they are…
Working Memory Mechanism in Proportional Quantifier Verification
ERIC Educational Resources Information Center
Zajenkowski, Marcin; Szymanik, Jakub; Garraffa, Maria
2014-01-01
The paper explores the cognitive mechanisms involved in the verification of sentences with proportional quantifiers (e.g. "More than half of the dots are blue"). The first study shows that the verification of proportional sentences is more demanding than the verification of sentences such as: "There are seven blue and eight yellow…
Dinitz, Laura B.
2008-01-01
With costs of natural disasters skyrocketing and populations increasingly settling in areas vulnerable to natural hazards, society is challenged to better allocate its limited risk-reduction resources. In 2000, Congress passed the Disaster Mitigation Act, amending the Robert T. Stafford Disaster Relief and Emergency Assistance Act (Robert T. Stafford Disaster Relief and Emergency Assistance Act, Pub. L. 93-288, 1988; Federal Emergency Management Agency, 2002, 2008b; Disaster Mitigation Act, 2000), mandating that State, local, and tribal communities prepare natural-hazard mitigation plans to qualify for pre-disaster mitigation grants and post-disaster aid. The Federal Emergency Management Agency (FEMA) was assigned to coordinate and implement hazard-mitigation programs, and it published information about specific mitigation-plan requirements and the mechanisms (through the Hazard Mitigation Grant Program-HMGP) for distributing funds (Federal Emergency Management Agency, 2002). FEMA requires that each community develop a mitigation strategy outlining long-term goals to reduce natural-hazard vulnerability, mitigation objectives and specific actions to reduce the impacts of natural hazards, and an implementation plan for those actions. The implementation plan should explain methods for prioritizing, implementing, and administering the actions, along with a 'cost-benefit review' justifying the prioritization. FEMA, along with the National Institute of Building Sciences (NIBS), supported the development of HAZUS ('Hazards U.S.'), a geospatial natural-hazards loss-estimation tool, to help communities quantify potential losses and to aid in the selection and prioritization of mitigation actions. HAZUS was expanded to a multiple-hazard version, HAZUS-MH, that combines population, building, and natural-hazard science and economic data and models to estimate physical damages, replacement costs, and business interruption for specific natural-hazard scenarios. HAZUS
The Quadratic Hazard Model for Analyzing Longitudinal Data on Aging, Health, and the Life Span
Yashin, A.I.; Arbeev, K.G.; Akushevich, I.; Kulminski, A.; Ukraintseva, S.V.; Stallard, E.; Land, K.C.
2012-01-01
A better understanding of processes and mechanisms linking human aging with changes in health status and survival requires methods capable of analyzing new data that take into account knowledge about these processes accumulated in the field. In this paper, we describe an approach to analyses of longitudinal data based on the use of stochastic process models of human aging, health, and longevity which allows for incorporating state of the art advances in aging research into the model structure. In particular, the model incorporates the notions of resistance to stresses, adaptive capacity, and “optimal” (normal) physiological states. To capture the effects of exposure to persistent external disturbances, the notions of allostatic adaptation and allostatic load are introduced. These notions facilitate the description and explanation of deviations of individuals’ physiological indices from their normal states, which increase the chances of disease development and death. The model provides a convenient conceptual framework for comprehensive systemic analyses of aging-related changes in humans using longitudinal data and linking these changes with genotyping profiles, morbidity, and mortality risks. The model is used for developing new statistical methods for analyzing longitudinal data on aging, health, and longevity. PMID:22633776
Lovreglio, Ruggiero; Ronchi, Enrico; Maragkos, Georgios; Beji, Tarek; Merci, Bart
2016-11-15
The release of toxic gases due to natural/industrial accidents or terrorist attacks in populated areas can have tragic consequences. To prevent and evaluate the effects of these disasters different approaches and modelling tools have been introduced in the literature. These instruments are valuable tools for risk managers doing risk assessment of threatened areas. Despite the significant improvements in hazard assessment in case of toxic gas dispersion, these analyses do not generally include the impact of human behaviour and people movement during emergencies. This work aims at providing an approach which considers both modelling of gas dispersion and evacuation movement in order to improve the accuracy of risk assessment for disasters involving toxic gases. The approach is applied to a hypothetical scenario including a ship releasing Nitrogen dioxide (NO2) on a crowd attending a music festival. The difference between the results obtained with existing static methods (people do not move) and a dynamic approach (people move away from the danger) which considers people movement with different degrees of sophistication (either a simple linear path or more complex behavioural modelling) is discussed. PMID:27343142
Schobben, H.P.M.; Scholten, M.C.T.; Karman, C.C.
1994-12-31
HAZARD is a relatively simple, probabilistic method for ecological risk assessment that can be added to the traditional water quality models. It can be seen as an advanced form of the traditional PEC/NEC-method. It compares the potential environmental concentration with sensitivity data for a set of species. In principle ecological risk modeling implies handling biological variation and uncertainties with regard to the causal chain from the actual exposure to pollutants to the final effects on marine biota. Probability techniques, known from statistical data processing, can deal with some of these variations and uncertainties. The basic principles, assumptions and limiting conditions will be discussed and illustrated with practical examples, including: the characterization of the sensitivity of marine biota by means of a frequency distribution of effect concentrations; the calculation of the intensity of the actual exposure of biota by a comparison of the temporal and spatial distributions of both pollutants and biota; the adjustment of ecotoxicological data to allow for a specific risk analysis for actual field conditions; application of the model for a risk analysis of incidental pollution by means of a translation of sensitivity data to short exposure times; the principle of calculating the risk of exposure to a mixture of pollutants; a concept in which the capacity of ecological recovery is taken into account.
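The first ingredient listed, characterising the sensitivity of marine biota by a frequency distribution of effect concentrations, can be sketched as a log-normal species sensitivity distribution evaluated at the predicted environmental concentration: the result is the potentially affected fraction of species. The effect concentrations below are hypothetical, and HAZARD's actual statistical treatment is more elaborate.

```python
import math

def potentially_affected_fraction(pec, effect_concs):
    """Fraction of species whose effect concentration lies below the
    predicted environmental concentration (PEC), from a log-normal fit
    to the species sensitivity data (all concentrations in the same units)."""
    logs = [math.log(c) for c in effect_concs]
    mu = sum(logs) / len(logs)
    sigma = (sum((x - mu) ** 2 for x in logs) / (len(logs) - 1)) ** 0.5
    z = (math.log(pec) - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF

# Hypothetical effect concentrations (e.g. EC50s, mg/L) for eight species:
ec50s = [0.5, 1.2, 2.0, 3.5, 5.0, 8.0, 12.0, 20.0]
risk = potentially_affected_fraction(3.0, ec50s)  # PEC = 3.0 mg/L
```

Replacing the single PEC value with a distribution of exposure concentrations, and integrating over it, gives the probabilistic risk figure the abstract describes.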
The hazards of underspecified models: the case of symmetry in everyday predictions.
Sedlmeier, Peter; Kilinç, Berna
2004-07-01
Should one be more confident when predicting the whole (or an event based on a larger sample) from the part (or an event based on a smaller sample) than when predicting the reverse? The relevant literature on judgment under uncertainty argues that such predictions are symmetrical but that, as an empirical matter, people often fail to appreciate this symmetry. The authors show that symmetry in prediction does not necessarily hold. In addition to an empirical study involving predictions about soccer games, they develop a theoretical model showing that, at least for the ranges of numerical values usually found in everyday judgment problems, symmetry in predictions is uncommon when 2 different sample sizes are involved. The complexity of the theoretical model used in this analysis raises questions about model specification in judgmental research. PMID:15250783
Coupling photogrammetric data with DFN-DEM model for rock slope hazard assessment
NASA Astrophysics Data System (ADS)
Donze, Frederic; Scholtes, Luc; Bonilla-Sierra, Viviana; Elmouttie, Marc
2013-04-01
Structural and mechanical analyses of the rock mass are key components of rock slope stability assessment. The complementary use of photogrammetric techniques [Poropat, 2001] and coupled DFN-DEM models [Harthong et al., 2012] provides a methodology that can be applied to complex 3D configurations. The DFN-DEM formulation [Scholtès & Donzé, 2012a,b] was chosen for modeling because it can explicitly take the fracture sets into account. Analyses conducted in 3D can produce very complex and unintuitive failure mechanisms. Therefore, a modeling strategy must be established in order to identify the key features which control the stability. For this purpose, a realistic case is presented to show the overall methodology from photogrammetric acquisition to mechanical modeling. By combining Sirovision and YADE Open DEM [Kozicki & Donzé, 2008, 2009], it can be shown that even for large camera-to-rock-slope ranges (tested at about one kilometre), the accuracy of the data is sufficient to assess the role of the structures in the stability of a jointed rock slope. In this case, on-site stereo pairs of 2D images were taken to create 3D surface models. Digital identification of structural features in the unstable block zone was then processed with the Sirojoint software [Sirovision, 2010]. After acquiring the numerical topography, the 3D digitized and meshed surface was imported into the YADE Open DEM platform, where the studied rock mass was defined as a closed (manifold) volume bounding the numerical model. The discontinuities were then imported into the model as meshed planar elliptic surfaces. The model was then subjected to gravity loading. During this step, high values of cohesion were assigned to the discontinuities in order to avoid failure or block displacements triggered by inertial effects. To assess the respective roles of the pre-existing discontinuities in block stability, different configurations have been tested, as well as different degrees of
Evaluation of Landslide Hazard Using Models For Dynamic Deformation of The Earth Crust
NASA Astrophysics Data System (ADS)
Ovcharenko, A.; Sokolov, V.; Loh, C.-H.; Wen, K.-L.
The 4D model (x, y, z, t: geographic coordinates, depth, time) for dynamic deformation of the Earth's crust is applied to the analysis of various catastrophic geodynamic phenomena, for example landslides that are not related to earthquakes. The model is constructed on the basis of geophysical data: the Global Positioning System (GPS) network, Persistent Sea Water Level (PSWL) monitoring, and seismic catalogues. Other indirect geophysical data that reflect the dynamic process of crustal deformation can also be utilized. The modeling process and results are described for the Tsao-Ling (Taiwan) area, where a set of catastrophic landslides occurred during the last 150 years. The development of elastic deformational displacement of the Earth's crust is analyzed for this time period. It has been found that, in general, the landslides occurred during periods of anomalous deformation behavior (compression/tension changes, high deformation gradients, etc.). It is also necessary to consider jointly the direction of deformational displacement and the peculiarities of the surface relief (characteristics of mountain slopes). Thus, it is possible to conclude that slow deformational processes may play a significant role in the development of dangerous geodynamic phenomena on mountain slopes. A method for revealing and analyzing potentially dangerous areas may include the following steps. A: a detailed digital surface model is used to select steep slope areas. B: the 4D model is applied to these areas, and maps of deformational displacement characteristics (gradients or vectors of horizontal displacement) are calculated for intervals of one to two years. C: the surface relief maps and modeled parameters are used jointly to analyze dangerous periods.
A Nonparametric Bayesian Approach to Seismic Hazard Modeling Using the ETAS Framework
NASA Astrophysics Data System (ADS)
Ross, G.
2015-12-01
The epidemic-type aftershock sequence (ETAS) model is one of the most popular tools for modeling seismicity and quantifying risk in earthquake-prone regions. Under the ETAS model, the occurrence times of earthquakes are treated as a self-exciting Poisson process in which each earthquake briefly increases the probability of subsequent earthquakes occurring soon afterwards, capturing the fact that large mainshocks tend to produce long sequences of aftershocks. A triggering kernel controls the amount by which the probability increases, based on the magnitude of each earthquake, and the rate at which it then decays over time. This triggering kernel is usually chosen heuristically, to match the parametric form of the modified Omori law for aftershock decay. However, recent work has questioned whether this is an appropriate choice. Since the choice of kernel has a large impact on the predictions made by the ETAS model, avoiding misspecification is crucially important. We present a novel nonparametric version of ETAS which avoids making parametric assumptions and instead learns the correct specification from the data itself. Our approach is based on the Dirichlet process, a modern class of Bayesian prior distribution that allows efficient inference over an infinite-dimensional space of functions. We show how our nonparametric ETAS model can be fit to data and present results demonstrating that the fit is greatly improved compared with the standard parametric specification. Additionally, we explain how our model can be used to perform probabilistic declustering of earthquake catalogs, to classify earthquakes as either aftershocks or mainshocks, and to learn the causal relations between pairs of earthquakes.
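The parametric baseline that this abstract contrasts with its nonparametric approach can be sketched as a self-exciting point process whose triggering kernel follows a modified Omori law, simulated by Ogata-style thinning. All parameter values below are illustrative assumptions (chosen to keep the process subcritical), not values from the paper, and magnitudes are ignored for simplicity.

```python
import random

# Sketch of the standard parametric ETAS baseline (not the paper's
# nonparametric model): conditional intensity = background rate plus a
# modified-Omori term for each past event, simulated by Ogata's thinning.
MU = 0.1                   # background rate, events/day (assumed)
K, C, P = 0.05, 0.01, 1.2  # Omori-law productivity and decay (assumed)

def intensity(t, history):
    """Conditional intensity: background plus decaying aftershock terms."""
    return MU + sum(K / (t - ti + C) ** P for ti in history)

def simulate(t_end, seed=1):
    """Ogata thinning: between events the intensity only decays, so its
    value at the current time upper-bounds it until the next event."""
    random.seed(seed)
    t, events = 0.0, []
    while t < t_end:
        lam_bar = intensity(t, events)           # valid upper bound
        t += random.expovariate(lam_bar)         # candidate waiting time
        if t < t_end and random.random() < intensity(t, events) / lam_bar:
            events.append(t)                     # accept with thinning prob.
    return events

events = simulate(200.0)
print(len(events), "simulated events in 200 days")
```

The nonparametric version described in the abstract would replace the fixed Omori form in `intensity` with a kernel learned from the catalog.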
Atman, C J; Bostrom, A; Fischhoff, B; Morgan, M G
1994-10-01
Many risk communications are intended to help the lay public make complex decisions about risk. To guide risk communicators with this objective, a mental models approach to the design and characterization of risk communications is proposed. Building on text comprehension and mental models research, this approach offers an integrated set of methods to help the risk communication designer choose and analyze risk communication content, structure, and organization. An applied example shows that two radon brochures designed with this approach present roughly the same expert facts as a radon brochure widely distributed by the U.S. EPA but meet higher standards on other content, structure, and organization criteria. PMID:7800862
NASA Astrophysics Data System (ADS)
Voronov, Nikolai; Dikinis, Alexandr
2015-04-01
Modern technologies of remote sensing (RS) open wide opportunities for monitoring hazardous hydrometeorological phenomena and for increasing the accuracy and lead time of their forecasts. RS data do not supersede ground-based observations, but they make it possible to solve new problems in hydrological and meteorological monitoring and forecasting. In particular, data from satellite, airborne, or radar observations may be used to increase the spatial-temporal resolution of hydrometeorological observations. A particularly promising direction is the conjunctive use of remote sensing data, ground-based observations, and the output of hydrodynamical weather models, which can significantly increase the accuracy and lead time of forecasts of hazardous hydrometeorological phenomena. Modern technologies for monitoring and forecasting hazardous hydrometeorological phenomena on the basis of conjunctive use of satellite, airborne, and ground-based observations, together with the output of hydrodynamical weather models, are considered. It is noted that an important and promising monitoring method is bioindication: observing the response of biota to external influences and the behavior of animals able to sense impending natural disasters. Implementation of the described approaches can significantly reduce both the damage caused by particular hazardous hydrological and meteorological phenomena and the general level of hydrometeorological vulnerability of various objects and of the economy of the Russian Federation as a whole.
Effect of mixed (boundary) pixels on crop proportion estimation
NASA Technical Reports Server (NTRS)
Chhikara, R. S.
1984-01-01
In estimating acreage proportions of crop types in a segment using Landsat data, a considerable problem is caused by the presence of mixed pixels. Owing to a lack of understanding of their spectral characteristics, mixed pixels have been treated in the past as pure when clustering and classifying the segment data. This paper examines this approach of treating mixed pixels as pure and the effect of mixed pixels on the bias and variance of a crop-type proportion estimate. First, the spectral response of a boundary pixel is modeled and an analytical expression for the bias and variance of a proportion estimate is obtained. This is followed by a numerical illustration of the effect of mixed pixels on bias and variance. It is shown that as the size of the mixed-pixel class in a segment increases, the variance increases; however, such an increase does not always affect the bias of the proportion estimate.
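The question studied above can be explored with a minimal Monte Carlo sketch: generate a segment in which boundary pixels are spectral mixtures, classify every pixel as if it were pure, and track the bias and variance of the resulting crop-proportion estimate. The one-band spectral model, the midpoint classifier, and all numbers are invented for illustration; they are not the paper's analytical expressions.

```python
import random
import statistics

# Hypothetical one-band simulation: mixed (boundary) pixels are classified
# as if pure. All spectral means, noise levels, and proportions are invented.
MU_CROP, MU_OTHER, NOISE = 80.0, 40.0, 8.0   # assumed band means and noise
TRUE_PROP = 0.5                              # assumed true crop proportion
THRESHOLD = (MU_CROP + MU_OTHER) / 2         # midpoint "pure pixel" classifier

def estimate(n_pixels, mixed_frac, rng):
    """Fraction of pixels classified as crop, treating mixed pixels as pure."""
    crop_votes = 0
    for _ in range(n_pixels):
        if rng.random() < mixed_frac:          # boundary pixel: a mixture
            alpha = rng.random()               # sub-pixel crop fraction
            mean = alpha * MU_CROP + (1 - alpha) * MU_OTHER
        else:                                  # pure pixel
            mean = MU_CROP if rng.random() < TRUE_PROP else MU_OTHER
        crop_votes += rng.gauss(mean, NOISE) > THRESHOLD
    return crop_votes / n_pixels

for mixed_frac in (0.0, 0.2, 0.4):
    ests = [estimate(500, mixed_frac, random.Random(i)) for i in range(200)]
    print(f"mixed={mixed_frac:.1f}  bias={statistics.mean(ests) - TRUE_PROP:+.4f}"
          f"  variance={statistics.variance(ests):.6f}")
```

Replacing the uniform sub-pixel fraction or the symmetric class means with skewed choices makes the bias contribution of the mixed class visible, mirroring the paper's analytical treatment.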
FINAL REPORT. MEASUREMENTS AND MODELS FOR HAZARDOUS CHEMICAL AND MIXED WASTES
The goal of this work is to develop a phase equilibrium model for mixed solvent aqueous solutions containing salts. An equation of state was sought for these mixtures that a) would require a minimum of adjustable parameters and b) could be obtained from available data or data tha...
Toxicity data from laboratory rodents are widely available and frequently used in human health assessments as an animal model. We explore the possibility of using single rodent acute toxicity values to predict chemical toxicity to a diversity of wildlife species and to estimate ...
This presentation discusses methods used to extrapolate from in vitro high-throughput screening (HTS) toxicity data for an endocrine pathway to in vivo for early life stages in humans, and the use of a life stage PBPK model to address rapidly changing physiological parameters. A...
Development of Algal Interspecies Correlation Estimation Models for Chemical Hazard Assessment
Web-based Interspecies Correlation Estimation (ICE) is an application developed to predict the acute toxicity of a chemical from 1 species to another taxon. Web-ICE models use the acute toxicity value for a surrogate species to predict effect values for other species, thus potent...
Bostrom, A; Atman, C J; Fischhoff, B; Morgan, M G
1994-10-01
We propose a decision-analytic framework, called the mental models approach, for evaluating the impact of risk communications. It employs multiple evaluation methods, including think-aloud protocol analysis, problem solving, and a true-false test that allows respondents to express uncertainty about their answers. The approach is illustrated in empirical comparisons of three brochures about indoor radon. PMID:7800863
ERIC Educational Resources Information Center
Terpstra, Teun; Lindell, Michael K.
2013-01-01
Although research indicates that adoption of flood preparations among Europeans is low, only a few studies have attempted to explain citizens' preparedness behavior. This article applies the Protective Action Decision Model (PADM) to explain flood preparedness intentions in the Netherlands. Survey data ("N" = 1,115) showed that…
Models for recurrent gas release event behavior in hazardous waste tanks
Anderson, D.N.; Arnold, B.C.
1994-08-01
Certain radioactive waste storage tanks at the United States Department of Energy Hanford facilities continuously generate gases as a result of radiolysis and chemical reactions. The congealed sludge in these tanks traps the gases and causes the level of the waste within the tanks to rise. The waste level continues to rise until the sludge becomes buoyant and "rolls over", changing places with heavier fluid on top. During a rollover, the trapped gases are released, resulting in a sudden drop in the waste level. This is known as a gas release event (GRE). After a GRE, the waste re-congeals and gas again accumulates, leading to another GRE. We present nonlinear time series models that produce simulated sample paths closely resembling the temporal history of waste levels in these tanks. The models also imitate the random GRE behavior observed in the temporal waste-level history of a storage tank. We are interested in using the structure of these models to understand the probabilistic behavior of the random variable "time between consecutive GREs". Understanding the stochastic nature of this random variable is important because the hydrogen and nitrous oxide gases released during a GRE are flammable and the ammonia that is released is a health risk. From a safety perspective, activity around such waste tanks should be halted when a GRE is imminent. With credible GRE models, we can establish time windows in which waste tank research and maintenance activities can be safely performed.
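A toy version of the sawtooth dynamics described above can be simulated to study the waiting time between GREs: the level rises by random gas-driven increments until a buoyancy threshold triggers a rollover, after which the level drops. This is an illustrative sketch only, not the authors' nonlinear time series models, and every parameter is an invented placeholder.

```python
import random

# Stochastic sawtooth sketch of tank waste level: random daily accumulation,
# rollover (GRE) at a buoyancy threshold, sudden level drop. All parameters
# are invented placeholders, not Hanford tank data.
rng = random.Random(42)
RISE_MEAN, RISE_SD = 0.05, 0.02   # assumed daily level growth (inches/day)
THRESHOLD, DROP = 6.0, 5.0        # assumed trigger level and release drop

def waiting_times(n_events):
    """Simulate n_events GREs; return the days between consecutive GREs."""
    times, level, t, last = [], 0.0, 0, 0
    while len(times) < n_events:
        t += 1
        level += max(0.0, rng.gauss(RISE_MEAN, RISE_SD))  # gas accumulation
        if level >= THRESHOLD:                            # rollover: a GRE
            times.append(t - last)
            last, level = t, level - DROP
    return times

wt = waiting_times(50)
print(f"mean days between GREs: {sum(wt) / len(wt):.1f}")
```

The empirical distribution of `waiting_times` output is the kind of object the abstract's "time between consecutive GREs" analysis characterizes, here for the toy dynamics only.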
Applying Distributed, Coupled Hydrological Slope-Stability Models for Landslide Hazard Assessments
NASA Astrophysics Data System (ADS)
Godt, J. W.; Baum, R. L.; Lu, N.; Savage, W. Z.; McKenna, J. P.
2006-12-01
Application of distributed, coupled hydrological slope-stability models requires knowledge of hydraulic and material-strength properties at the scale of landslide processes. We describe results from a suite of laboratory and field tests that were used to define the soil-water characteristics of landslide-prone colluvium on the steep coastal bluffs of the Seattle, Washington area, and then use these results in a coupled model. Many commonly used tests to determine soil-water characteristics are performed for the drying process. Because most soils display a pronounced hysteresis in the relation between moisture content and matric suction, results from such tests may not accurately describe the soil-water characteristics for the wetting process during rainfall infiltration. Open-tube capillary-rise and constant-flow permeameter tests on bluff colluvium were performed in the laboratory to determine the soil-water characteristic curves (SWCC) and unsaturated hydraulic conductivity functions (HCF) for the wetting process. Field tests using a borehole permeameter were used to determine the saturated hydraulic conductivity of colluvial materials. Measurements of pore-water response to rainfall were used in an inverse numerical modeling procedure to determine the in-situ hydraulic parameters of hillside colluvium at the scale of the instrument installation. Comparison of laboratory and field results shows that although both techniques generally produce SWCCs and HCFs with similar shapes, differences in bulk density between field and lab tests yield differences in saturated moisture content and saturated hydraulic conductivity. We use these material properties in an application of a new version of a distributed transient slope-stability model (TRIGRS) that accounts for the effects of the unsaturated zone on the infiltration process. Applied over a LiDAR-based digital landscape of part of the Seattle area for an hourly rainfall history known to trigger shallow landslides, the
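Distributed models of the TRIGRS family evaluate an infinite-slope limit-equilibrium check cell by cell; the sketch below shows one common form of that check with a suction-stress term for the unsaturated zone, in the general style these authors have described elsewhere. The formula arrangement and all input values are hypothetical illustrations, not the model's exact implementation.

```python
import math

# Infinite-slope factor of safety with a suction-stress term (sketch).
# All parameter values are hypothetical.
def factor_of_safety(slope_deg, depth_m, cohesion_kpa, friction_deg,
                     unit_weight_kn_m3=19.0, suction_stress_kpa=0.0):
    """FS > 1 indicates stability. Suction stress (negative in unsaturated
    soil) adds apparent strength that is lost as a wetting front arrives."""
    d = math.radians(slope_deg)
    phi = math.radians(friction_deg)
    # driving shear stress on a slope-parallel plane at the given depth
    driving = unit_weight_kn_m3 * depth_m * math.sin(d) * math.cos(d)
    return (math.tan(phi) / math.tan(d)
            + (cohesion_kpa - suction_stress_kpa * math.tan(phi)) / driving)

# A wetting front destroys suction, dropping FS toward failure:
print("unsaturated:", round(factor_of_safety(40, 1.5, 2.0, 34,
                                             suction_stress_kpa=-8.0), 2))
print("wetted:     ", round(factor_of_safety(40, 1.5, 2.0, 34,
                                             suction_stress_kpa=0.0), 2))
```

This is why the wetting-branch SWCC and HCF measurements matter: they control how quickly infiltration drives the suction stress toward zero at each depth.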
NASA Astrophysics Data System (ADS)
Quentel, E.; Loevenbruck, A.; Hébert, H.
2012-04-01
The catastrophic 2004 tsunami drew the international community's attention to tsunami risk in all basins where tsunamis occurred but no warning system exists. Consequently, under the coordination of UNESCO, France decided to create a regional center, called CENALT, for the north-east Atlantic and the western Mediterranean. This warning system, which should be operational by 2012, is set up by the CEA in collaboration with the SHOM and the CNRS. The French authorities are in charge of the top-down alert system including the local alert dissemination. In order to prepare the appropriate means and measures, they initiated the ALDES (Alerte Descendante) project to which the CEA also contributes. It aims at examining along the French Mediterranean coast the tsunami risk related to earthquakes and landslides. In addition to the evaluation at regional scale, it includes the detailed studies of 3 selected sites; the local alert system will be designed for one of them : the French Riviera. In this project, our main task at CEA consists in assessing tsunami hazard related to seismic sources using numerical modeling. Past tsunamis have affected the west Mediterranean coast but are too few and poorly documented to provide a suitable database. Thus, a synthesis of earthquakes representative of the tsunamigenic seismic activity and prone to induce the largest impact to the French coast is performed based on historical data, seismotectonics and first order models. The North Africa Margin, the Ligurian and the South Tyrrhenian Seas are considered as the main tsunamigenic zones. In order to forecast the most important plausible effects, the magnitudes are estimated by enhancing to some extent the largest known values. Our hazard estimation is based on the simulation of the induced tsunamis scenarios performed with the CEA code. The 3 sites have been chosen according to the regional hazard studies, coastal typology elements and the appropriate DTMs (Digital Terrain Models). The
Survival Extrapolation in the Presence of Cause Specific Hazards
Benaglia, Tatiana; Jackson, Christopher H.; Sharples, Linda D.
2016-01-01
Health economic evaluations require estimates of expected survival from patients receiving different interventions, often over a lifetime. However, data on the patients of interest are typically only available for a much shorter follow-up time, from randomised trials or cohorts. Previous work showed how to use general population mortality to improve extrapolations of the short-term data, assuming a constant additive or multiplicative effect on the hazards for all-cause mortality for study patients relative to the general population. A more plausible assumption may be a constant effect on the hazard for the specific cause of death targeted by the treatments. To address this problem, we use independent parametric survival models for cause-specific mortality among the general population. Since causes of death are unobserved for the patients of interest, a polyhazard model is used to express their all-cause mortality as a sum of latent cause-specific hazards. Assuming proportional cause-specific hazards between the general and study populations then allows us to extrapolate mortality of the patients of interest to the long term. A Bayesian framework is used to jointly model all sources of data. By simulation we show that ignoring cause-specific hazards leads to biased estimates of mean survival when the proportion of deaths due to the cause of interest changes through time. The methods are applied to an evaluation of implantable cardioverter defibrillators (ICD) for the prevention of sudden cardiac death among patients with cardiac arrhythmia. After accounting for cause-specific mortality, substantial differences are seen in estimates of life years gained from ICD. PMID:25413028
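The polyhazard idea above can be sketched numerically: all-cause survival is the product of cause-specific survivals, a hazard ratio acts only on the targeted cause, and life expectancy is the integral of the survival curve. The Weibull parameters and hazard ratio below are illustrative assumptions, not estimates from the ICD evaluation.

```python
import math

# Polyhazard sketch: S(t) = exp(-(H1(t) + H2(t))) with latent cause-specific
# cumulative hazards; a hazard ratio scales only the targeted cause. All
# shape/scale values and the hazard ratio are illustrative assumptions.
def cum_hazard(t, shape, scale):
    """Weibull cumulative hazard H(t) = (t / scale) ** shape."""
    return (t / scale) ** shape

def survival(t, hr_cause1=1.0):
    h1 = hr_cause1 * cum_hazard(t, shape=1.2, scale=8.0)   # targeted cause
    h2 = cum_hazard(t, shape=1.5, scale=15.0)              # other mortality
    return math.exp(-(h1 + h2))

def mean_survival(hr_cause1, horizon=60.0, dt=0.01):
    """Trapezoidal integral of S(t) over [0, horizon]: life expectancy."""
    steps = int(horizon / dt)
    return dt * sum((survival(i * dt, hr_cause1)
                     + survival((i + 1) * dt, hr_cause1)) / 2
                    for i in range(steps))

gain = mean_survival(0.5) - mean_survival(1.0)
print(f"life-years gained by halving the cause-1 hazard: {gain:.2f}")
```

Because the untargeted hazard H2 is unaffected, the survival gain from improving cause 1 shrinks as other-cause mortality comes to dominate, which is the bias mechanism the simulation study in the abstract quantifies.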
NASA Astrophysics Data System (ADS)
Hadley, Brian Christopher
This dissertation assessed remotely sensed data and geospatial modeling technique(s) to map the spatial distribution of total above-ground biomass present on the surface of the Savannah River National Laboratory's (SRNL) Mixed Waste Management Facility (MWMF) hazardous waste landfill. Ordinary least squares (OLS) regression, regression kriging, and tree-structured regression were employed to model the empirical relationship between in-situ measured Bahia (Paspalum notatum Flugge) and Centipede [Eremochloa ophiuroides (Munro) Hack.] grass biomass against an assortment of explanatory variables extracted from fine spatial resolution passive optical and LIDAR remotely sensed data. Explanatory variables included: (1) discrete channels of visible, near-infrared (NIR), and short-wave infrared (SWIR) reflectance, (2) spectral vegetation indices (SVI), (3) spectral mixture analysis (SMA) modeled fractions, (4) narrow-band derivative-based vegetation indices, and (5) LIDAR derived topographic variables (i.e. elevation, slope, and aspect). Results showed that a linear combination of the first- (1DZ_DGVI), second- (2DZ_DGVI), and third-derivative of green vegetation indices (3DZ_DGVI) calculated from hyperspectral data recorded over the 400--960 nm wavelengths of the electromagnetic spectrum explained the largest percentage of statistical variation (R2 = 0.5184) in the total above-ground biomass measurements. In general, the topographic variables did not correlate well with the MWMF biomass data, accounting for less than five percent of the statistical variation. It was concluded that tree-structured regression represented the optimum geospatial modeling technique due to a combination of model performance and efficiency/flexibility factors.
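The OLS step described above can be illustrated with a small synthetic example: regress above-ground biomass on three derivative-based vegetation indices (stand-ins for 1DZ_DGVI, 2DZ_DGVI, and 3DZ_DGVI) and report R-squared. The data and coefficients are invented for the example, not the MWMF measurements.

```python
import numpy as np

# Synthetic stand-in for the OLS modeling step: biomass regressed on three
# derivative-based vegetation indices. Data and coefficients are invented.
rng = np.random.default_rng(0)
n = 120
X = rng.normal(size=(n, 3))                  # synthetic index values
beta_true = np.array([3.0, 1.5, -0.8])       # assumed "true" effects
y = 50.0 + X @ beta_true + rng.normal(scale=2.5, size=n)  # synthetic biomass

A = np.column_stack([np.ones(n), X])         # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ coef
r2 = 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
print("intercept + coefficients:", np.round(coef, 2))
print("R^2 =", round(r2, 3))
```

The R-squared reported here plays the same role as the 0.5184 figure in the abstract: the share of biomass variance explained by the linear combination of indices.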