Proportional Hazards Models of Graduation
ERIC Educational Resources Information Center
Chimka, Justin R.; Reed-Rhoads, Teri; Barker, Kash
2008-01-01
Survival analysis is a statistical tool used to describe the duration between events. Many processes in medical research, engineering, and economics can be described using survival analysis techniques. This research involves studying engineering college student graduation using Cox proportional hazards models. Among male students with American…
Tree-augmented Cox proportional hazards models.
Su, Xiaogang; Tsai, Chih-Ling
2005-07-01
We study a hybrid model that combines Cox proportional hazards regression with tree-structured modeling. The main idea is to use step functions, provided by a tree structure, to 'augment' Cox (1972) proportional hazards models. The proposed model not only provides a natural assessment of the adequacy of the Cox proportional hazards model but also improves its model fitting without loss of interpretability. Both simulations and an empirical example are provided to illustrate the use of the proposed method.
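As background for this and several later entries: the Cox (1972) model is fit by maximizing a partial likelihood. A minimal sketch in plain Python, assuming one covariate, no tied event times, and toy data invented purely for illustration:

```python
import math

def cox_partial_loglik(beta, times, events, x):
    """Cox partial log-likelihood for a single covariate, no tied event times.

    times  -- observed follow-up times
    events -- 1 if the time is an event, 0 if censored
    x      -- covariate values
    """
    ll = 0.0
    for i, (ti, di) in enumerate(zip(times, events)):
        if di == 1:
            # risk set: everyone still under observation at time ti
            risk = [math.exp(beta * xj) for tj, xj in zip(times, x) if tj >= ti]
            ll += beta * x[i] - math.log(sum(risk))
    return ll

# toy data constructed so that larger x fails earlier
times  = [1.0, 2.0, 3.0, 4.0]
events = [1, 1, 0, 1]
x      = [2.0, 1.0, 0.0, 0.0]
print(cox_partial_loglik(0.0, times, events, x))
```

With beta = 0 each event simply contributes minus the log of its risk-set size, which is a convenient sanity check.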
Dynamic reliability models with conditional proportional hazards.
Hollander, M; Peña, E A
1995-01-01
A dynamic approach to the stochastic modelling of reliability systems is further explored. This modelling approach is particularly appropriate for load-sharing, software reliability, and multivariate failure-time models, where component failure characteristics are affected by their degree of use, amount of load, or extent of stresses experienced. This approach incorporates the intuitive notion that when a set of components in a coherent system fail at a certain time, there is a 'jump' from one structure function to another which governs the residual lifetimes of the remaining functioning components, and since the component lifetimes are intrinsically affected by the structure function which they constitute, then at such a failure time there should also be a jump in the stochastic structure of the lifetimes of the remaining components. For such dynamically-modelled systems, the stochastic characteristics of their jump times are studied. These properties of the jump times allow us to obtain the properties of the lifetime of the system. In particular, for a Markov dynamic model, specific expressions for the exact distribution function of the jump times are obtained for a general coherent system, a parallel system, and a series-parallel system. We derive a new family of distribution functions which describes the distributions of the jump times for a dynamically-modelled system.
Adjusted variable plots for Cox's proportional hazards regression model.
Hall, C B; Zeger, S L; Bandeen-Roche, K J
1996-01-01
Adjusted variable plots are useful in linear regression for outlier detection and for qualitative evaluation of the fit of a model. In this paper, we extend adjusted variable plots to Cox's proportional hazards model for possibly censored survival data. We propose three different plots: a risk level adjusted variable (RLAV) plot in which each observation in each risk set appears, a subject level adjusted variable (SLAV) plot in which each subject is represented by one point, and an event level adjusted variable (ELAV) plot in which the entire risk set at each failure event is represented by a single point. The latter two plots are derived from the RLAV by combining multiple points. In each plot, the regression coefficient and standard error from a Cox proportional hazards regression are obtained by a simple linear regression through the origin fitted to the coordinates of the pictured points. The plots are illustrated with a reanalysis of a dataset of 65 patients with multiple myeloma.
Penalized estimation for proportional hazards models with current status data.
Lu, Minggen; Li, Chin-Shang
2017-09-05
We provide a simple and practical, yet flexible, penalized estimation method for a Cox proportional hazards model with current status data. We approximate the baseline cumulative hazard function by monotone B-splines and use a hybrid approach based on the Fisher scoring algorithm and isotonic regression to compute the penalized estimates. We show that the penalized estimator of the nonparametric component achieves the optimal rate of convergence under certain smoothness conditions and that the estimators of the regression parameters are asymptotically normal and efficient. Moreover, a simple variance estimation method is considered for inference on the regression parameters. We perform two extensive Monte Carlo studies to evaluate the finite-sample performance of the penalized approach and compare it with three competing R packages: C1.coxph, intcox, and ICsurv. A goodness-of-fit test and model diagnostics are also discussed. The methodology is illustrated with two real applications. Copyright © 2017 John Wiley & Sons, Ltd.
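The isotonic-regression step mentioned above is typically implemented with the pool-adjacent-violators algorithm (PAVA). A minimal generic sketch (not the authors' implementation):

```python
def pava(y):
    """Pool-adjacent-violators: least-squares monotone (nondecreasing) fit."""
    # each block holds [sum of values, count]
    blocks = []
    for v in y:
        blocks.append([v, 1])
        # merge while the newest block mean drops below the previous one
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, n = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += n
    out = []
    for s, n in blocks:
        out.extend([s / n] * n)
    return out

print(pava([1.0, 3.0, 2.0, 4.0]))  # the violating pair (3, 2) is pooled to 2.5
```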
The consequences of proportional hazards based model selection.
Campbell, H; Dean, C B
2014-03-15
For testing the efficacy of a treatment in a clinical trial with survival data, the Cox proportional hazards (PH) model is the well-accepted, conventional tool. When using this model, one typically proceeds by confirming that the required PH assumption holds true. If the PH assumption fails to hold, many alternatives to the Cox PH model are available. An important question which arises is whether the potential bias introduced by this sequential model fitting procedure merits concern and, if so, what are effective mechanisms for correction. We investigate by means of a simulation study and draw attention to the considerable drawbacks, with regard to power, of a simple resampling technique, the permutation adjustment, a natural recourse for addressing such challenges. We also consider a recently proposed two-stage testing strategy (2008) for ameliorating these effects. Copyright © 2013 John Wiley & Sons, Ltd.
Sample size calculation for the proportional hazards cure model.
Wang, Songfeng; Zhang, Jiajia; Lu, Wenbin
2012-12-20
In clinical trials with time-to-event endpoints, it is not uncommon to see a significant proportion of patients being cured (or long-term survivors), such as trials for non-Hodgkin lymphoma. The popularly used sample size formula derived under the proportional hazards (PH) model may not be appropriate for designing a survival trial with a cure fraction, because the PH model assumption may be violated. To account for a cure fraction, the PH cure model is widely used in practice, where a PH model is used for survival times of uncured patients and a logistic regression model is used for the probability of patients being cured. In this paper, we develop a sample size formula on the basis of the PH cure model by investigating the asymptotic distributions of the standard weighted log-rank statistics under the null and local alternative hypotheses. The derived sample size formula under the PH cure model is more flexible because it can be used to test the differences in the short-term survival and/or cure fraction. Furthermore, we also investigate, as numerical examples, the impacts of accrual methods and durations of accrual and follow-up periods on sample size calculation. The results show that ignoring the cure rate in sample size calculation can lead to either underpowered or overpowered studies. We evaluate the performance of the proposed formula by simulation studies and provide an example to illustrate its application with the use of data from a melanoma trial. Copyright © 2012 John Wiley & Sons, Ltd.
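For contrast with the cure-model formula developed in this entry, the standard PH-based calculation it generalizes is Schoenfeld's event-count formula, d = (z_{1-α/2} + z_β)² / (p(1-p)(log HR)²). A small sketch, with the function name and defaults chosen for illustration:

```python
from math import ceil, log
from statistics import NormalDist

def schoenfeld_events(hr, alpha=0.05, power=0.80, alloc=0.5):
    """Required number of events under the standard PH assumption
    (Schoenfeld's formula), two-sided test; alloc is the fraction
    randomized to one arm."""
    z = NormalDist().inv_cdf
    num = (z(1.0 - alpha / 2.0) + z(power)) ** 2
    return ceil(num / (alloc * (1.0 - alloc) * log(hr) ** 2))

# detecting HR = 0.5 with 80% power at two-sided alpha = 0.05
print(schoenfeld_events(0.5))
```

Note this counts events, not patients; with a cure fraction, the expected event count saturates, which is exactly why the abstract warns against the standard formula.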
Wang, Wei; Albert, Jeffrey M
2017-08-01
An important problem within the social, behavioral, and health sciences is how to partition an exposure effect (e.g. treatment or risk factor) among specific pathway effects and to quantify the importance of each pathway. Mediation analysis based on the potential outcomes framework is an important tool to address this problem and we consider the estimation of mediation effects for the proportional hazards model in this paper. We give precise definitions of the total effect, natural indirect effect, and natural direct effect in terms of the survival probability, hazard function, and restricted mean survival time within the standard two-stage mediation framework. To estimate the mediation effects on different scales, we propose a mediation formula approach in which simple parametric models (fractional polynomials or restricted cubic splines) are utilized to approximate the baseline log cumulative hazard function. Simulation study results demonstrate low bias of the mediation effect estimators and close-to-nominal coverage probability of the confidence intervals for a wide range of complex hazard shapes. We apply this method to the Jackson Heart Study data and conduct sensitivity analysis to assess the impact on the mediation effects inference when the no unmeasured mediator-outcome confounding assumption is violated.
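One of the scales mentioned above, the restricted mean survival time, is simply the area under the survival curve up to a horizon tau. A generic numerical sketch (not the authors' estimator):

```python
import math

def rmst(surv, tau, n=10000):
    """Restricted mean survival time: the area under S(t) on [0, tau],
    approximated with the trapezoidal rule."""
    h = tau / n
    total = 0.5 * (surv(0.0) + surv(tau))
    for i in range(1, n):
        total += surv(i * h)
    return total * h

# exponential survival with unit hazard: RMST over [0, 2] is 1 - exp(-2)
print(rmst(lambda t: math.exp(-t), 2.0))
```

In the mediation setting, total and direct effects on this scale are differences of such areas under counterfactual survival curves.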
A Mixture Proportional Hazards Model with Random Effects for Response Times in Tests
ERIC Educational Resources Information Center
Ranger, Jochen; Kuhn, Jörg-Tobias
2016-01-01
In this article, a new model for test response times is proposed that combines latent class analysis and the proportional hazards model with random effects in a similar vein as the mixture factor model. The model assumes the existence of different latent classes. In each latent class, the response times are distributed according to a…
Huang, Xuelin
2012-01-01
We propose a semiparametrically efficient estimation of a broad class of transformation regression models for non-proportional hazards data. Classical transformation models are viewed from a frailty-model paradigm, and the proposed method provides a unified approach that is valid for both continuous and discrete frailty models. The proposed models are shown to be flexible enough to model long-term follow-up survival data when the treatment effect diminishes over time, a case for which the proportional hazards or proportional odds assumption is violated, or a situation in which a substantial proportion of patients remains cured after treatment. Estimation of the link parameter in the frailty distribution, considered to be unknown and possibly dependent on time-independent covariates, is automatically included in the proposed methods. The observed information matrix is computed to evaluate the variances of all the parameter estimates. Our likelihood-based approach provides a natural way to construct simple statistics for testing the proportional hazards and proportional odds assumptions for usual survival data or testing the short-term and long-term effects for survival data with a cure fraction. Simulation studies demonstrate that the proposed inference procedures perform well in realistic settings. Applications to two medical studies are provided. PMID:23005582
A Bayesian Semiparametric Temporally-Stratified Proportional Hazards Model with Spatial Frailties
Hanson, Timothy E.; Jara, Alejandro; Zhao, Luping
2011-01-01
Incorporating temporal and spatial variation could potentially enhance information gathered from survival data. This paper proposes a Bayesian semiparametric model for capturing spatio-temporal heterogeneity within the proportional hazards framework. The spatial correlation is introduced in the form of county-level frailties. The temporal effect is introduced by considering the stratification of the proportional hazards model, where the time-dependent hazards are indirectly modeled using a probability model for related probability distributions. With this aim, an autoregressive dependent tailfree process is introduced. The full Kullback-Leibler support of the proposed process is provided. The approach is illustrated using simulated data and data from the Surveillance, Epidemiology, and End Results database of the National Cancer Institute on patients in Iowa diagnosed with breast cancer. PMID:22247752
ERIC Educational Resources Information Center
Rasmussen, Andrew
2004-01-01
This study extends literature on recidivism after teen court to add system-level variables to demographic and sentence content as relevant covariates. Interviews with referral agents and survival analysis with proportional hazards regression supplement quantitative models that include demographic, sentencing, and case-processing variables in a…
A model checking method for the proportional hazards model with recurrent gap time data
Huang, Chiung-Yu; Luo, Xianghua; Follmann, Dean A.
2011-01-01
Recurrent events are the natural outcome in many medical and epidemiology studies. To assess covariate effects on the gaps between consecutive recurrent events, the Cox proportional hazards model is frequently employed in data analysis. The validity of statistical inference, however, depends on the appropriateness of the Cox model. In this paper, we propose a class of graphical techniques and formal tests for checking the Cox model with recurrent gap time data. The building block of our model checking method is an averaged martingale-like process, based on which a class of multiparameter stochastic processes is proposed. This maneuver is very general and can be used to assess different aspects of model fit. Numerical simulations are conducted to examine finite-sample performance, and the proposed model checking techniques are illustrated with data from the Danish Psychiatric Central Register. PMID:21138876
Proportional hazards model for competing risks data with missing cause of failure
Hyun, Seunggeun; Sun, Yanqing
2012-01-01
We consider the semiparametric proportional hazards model for the cause-specific hazard function in analysis of competing risks data with missing cause of failure. The inverse probability weighted equation and augmented inverse probability weighted equation are proposed for estimating the regression parameters in the model, and their theoretical properties are established for inference. Simulation studies demonstrate that the augmented inverse probability weighted estimator is doubly robust and the proposed method is appropriate for practical use. The simulations also compare the proposed estimators with the multiple imputation estimator of Lu and Tsiatis (2001). The application of the proposed method is illustrated using data from a bone marrow transplant study. PMID:22468017
Multi-parameter regression survival modeling: An alternative to proportional hazards.
Burke, K; MacKenzie, G
2016-11-28
It is standard practice for covariates to enter a parametric model through a single distributional parameter of interest, for example, the scale parameter in many standard survival models. Indeed, the well-known proportional hazards model is of this kind. In this article, we discuss a more general approach whereby covariates enter the model through more than one distributional parameter simultaneously (e.g., scale and shape parameters). We refer to this practice as "multi-parameter regression" (MPR) modeling and explore its use in a survival analysis context. We find that multi-parameter regression leads to more flexible models which can offer greater insight into the underlying data generating process. To illustrate the concept, we consider the two-parameter Weibull model which leads to time-dependent hazard ratios, thus relaxing the typical proportional hazards assumption and motivating a new test of proportionality. A novel variable selection strategy is introduced for such multi-parameter regression models. It accounts for the correlation arising between the estimated regression coefficients in two or more linear predictors, a feature which has not been considered by other authors in similar settings. The methods discussed have been implemented in the mpr package in R.
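The time-dependent hazard ratios mentioned above arise already in the two-parameter Weibull case: if covariates shift the shape parameter, the ratio of hazards varies with t. A small illustration with made-up parameter values:

```python
def weibull_hazard(t, scale, shape):
    """Weibull hazard h(t) = (shape/scale) * (t/scale)**(shape - 1)."""
    return (shape / scale) * (t / scale) ** (shape - 1.0)

def hazard_ratio(t, group1, group2):
    return weibull_hazard(t, *group1) / weibull_hazard(t, *group2)

# same scale but different shapes: the hazard ratio depends on t,
# so the proportional hazards assumption fails
g1, g2 = (1.0, 2.0), (1.0, 1.0)   # (scale, shape), invented values
print(hazard_ratio(0.5, g1, g2), hazard_ratio(2.0, g1, g2))
```

If instead only the scale differed (equal shapes), the ratio would be constant in t, which is the PH special case the MPR approach relaxes.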
Comparing proportional hazards and accelerated failure time models for survival analysis.
Orbe, Jesus; Ferreira, Eva; Núñez-Antón, Vicente
2002-11-30
This paper describes a method proposed for a censored linear regression model that can be used in the context of survival analysis. The method has the important characteristic of allowing estimation and inference without knowing the distribution of the duration variable. Moreover, it does not need the assumption of proportional hazards. Therefore, it can be an interesting alternative to the Cox proportional hazards models when this assumption does not hold. In addition, implementation and interpretation of the results is simple. In order to analyse the performance of this methodology, we apply it to two real examples and we carry out a simulation study. We present its results together with those obtained with the traditional Cox model and AFT parametric models. The new proposal seems to lead to more precise results. Copyright 2002 John Wiley & Sons, Ltd.
Using the proportional hazards model to study heart valve replacement data.
Bunday, B D; Kiri, V A; Stoodley, K D
1992-01-01
The proportional hazards model is used to study the effect of various concomitant variables on the time to valve failure, mortality, or other complications, for patients who have had artificial heart valves inserted. The data are from a database, which is still being assembled as more information is acquired, at Killingbeck Hospital. A suite of computer programs, not specifically developed with this application in mind, has been used to carry out the exploratory data analysis, the estimation of parameters and the validation of the model. These three elements of the analysis are all illustrated. The present report is seen as a preliminary study to assess the usefulness of the proportional hazards model in this area. Follow-up work as more data are accumulated is intended.
NASA Technical Reports Server (NTRS)
Thompson, Laura A.; Chhikara, Raj S.; Conkin, Johnny
2003-01-01
In this paper we fit Cox proportional hazards models to a subset of data from the Hypobaric Decompression Sickness Databank. The data bank contains records on the time to decompression sickness (DCS) and venous gas emboli (VGE) for over 130,000 person-exposures to high altitude in chamber tests. The subset we use contains 1,321 records, with 87% censoring, and has the most recent experimental tests on DCS made available from Johnson Space Center. We build on previous analyses of this data set by considering more expanded models and more detailed model assessments specific to the Cox model. Our model - which is stratified on the quartiles of the final ambient pressure at altitude - includes the final ambient pressure at altitude as a nonlinear continuous predictor, the computed tissue partial pressure of nitrogen at altitude, and whether exercise was done at altitude. We conduct various assessments of our model, many of which are recently developed in the statistical literature, and conclude where the model needs improvement. We consider the addition of frailties to the stratified Cox model, but found that no significant gain was attained above a model that does not include frailties. Finally, we validate some of the models that we fit.
NASA Astrophysics Data System (ADS)
Zhang, Qing; Hua, Cheng; Xu, Guanghua
2014-02-01
As mechanical systems increase in complexity, it is becoming more and more common to observe multiple failure modes. The system failure can be regarded as the result of interaction and competition between different failure modes. It is therefore necessary to combine multiple failure modes when analysing the failure of an overall system. In this paper, a mixture Weibull proportional hazard model (MWPHM) is proposed to predict the failure of a mechanical system with multiple failure modes. The mixed model parameters are estimated by combining historical lifetime and monitoring data of all failure modes. In addition, the system failure probability density is obtained by proportionally mixing the failure probability density of multiple failure modes. Monitoring data are input into the MWPHM to estimate the system reliability and predict the system failure time. A simulated sample set is used to verify the ability of the MWPHM to model multiple failure modes. Finally, the MWPHM and the traditional Weibull proportional hazard model (WPHM) are applied to a high-pressure water descaling pump, which has two failure modes: sealing ring wear and thrust bearing damage. Results show that the MWPHM is greatly superior in system failure prediction to the WPHM.
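The proportional mixing described above can be sketched generically: the system reliability is a weighted sum of per-mode reliabilities. A toy illustration (weights and Weibull parameters are invented, not taken from the paper):

```python
import math

def weibull_reliability(t, scale, shape):
    """Weibull reliability R(t) = exp(-(t/scale)**shape)."""
    return math.exp(-((t / scale) ** shape))

def mixture_reliability(t, weights, params):
    """System reliability as a weighted mixture over failure modes,
    in the spirit of the MWPHM's proportional mixing."""
    return sum(w * weibull_reliability(t, *p) for w, p in zip(weights, params))

# two hypothetical failure modes (e.g. seal wear vs. bearing damage)
w = [0.6, 0.4]
modes = [(10.0, 1.5), (8.0, 3.0)]
print(mixture_reliability(5.0, w, modes))
```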
[Clinical research XXII. From clinical judgment to Cox proportional hazards model].
Pérez-Rodríguez, Marcela; Rivas-Ruiz, Rodolfo; Palacios-Cruz, Lino; Talavera, Juan O
2014-01-01
Survival analyses are commonly used to determine the time to an event (for example, death). However, they can also be used for other clinical outcomes, on the condition that these are dichotomous, for example healing time. These analyses consider the relationship with only one variable. The Cox proportional hazards model, by contrast, is a multivariate survival analysis in which other covariates that may confound the effect of the main maneuver studied, such as age, gender, or disease stage, are taken into account. This analysis can include both quantitative and qualitative variables in the model. The measure of association used is called the hazard ratio (HR), which is not the same as the relative risk (RR) or the odds ratio (OR). The difference is that the HR refers to the possibility that one of the groups develops the event before the other group. The Cox proportional hazards multivariate model is the most widely used in medicine when the phenomenon is studied in two dimensions: time and event.
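The distinction drawn above between the HR and other ratio measures can be made concrete with two constant-hazard groups: the HR is fixed over time, while the relative risk and odds ratio at a given time point differ from it. A small illustration with arbitrary hazards:

```python
import math

def risk(h, t):
    """Cumulative risk by time t under a constant hazard h."""
    return 1.0 - math.exp(-h * t)

def odds(r):
    return r / (1.0 - r)

h1, h2, t = 0.2, 0.1, 5.0
hr = h1 / h2                      # hazard ratio, constant over time
r1, r2 = risk(h1, t), risk(h2, t)
rr = r1 / r2                      # relative risk at time t
or_ = odds(r1) / odds(r2)         # odds ratio at time t
print(hr, round(rr, 3), round(or_, 3))
```

Here the RR is attenuated toward 1 relative to the HR, while the OR exceeds it, so the three measures should not be used interchangeably.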
Westreich, Daniel; Cole, Stephen R; Schisterman, Enrique F; Platt, Robert W
2012-08-30
Motivated by a previously published study of HIV treatment, we simulated data subject to time-varying confounding affected by prior treatment to examine some finite-sample properties of marginal structural Cox proportional hazards models. We compared (a) unadjusted, (b) regression-adjusted, (c) unstabilized, and (d) stabilized marginal structural (inverse probability-of-treatment [IPT] weighted) model estimators of effect in terms of bias, standard error, root mean squared error (MSE), and 95% confidence limit coverage over a range of research scenarios, including relatively small sample sizes and 10 study assessments. In the base-case scenario resembling the motivating example, where the true hazard ratio was 0.5, both IPT-weighted analyses were unbiased, whereas crude and adjusted analyses showed substantial bias towards and across the null. Stabilized IPT-weighted analyses remained unbiased across a range of scenarios, including relatively small sample size; however, the standard error was generally smaller in crude and adjusted models. In many cases, unstabilized weighted analysis showed a substantial increase in standard error compared with other approaches. Root MSE was smallest in the IPT-weighted analyses for the base-case scenario. In situations where time-varying confounding affected by prior treatment was absent, IPT-weighted analyses were less precise and therefore had greater root MSE compared with adjusted analyses. The 95% confidence limit coverage was close to nominal for all stabilized IPT-weighted but poor in crude, adjusted, and unstabilized IPT-weighted analysis. Under realistic scenarios, marginal structural Cox proportional hazards models performed according to expectations based on large-sample theory and provided accurate estimates of the hazard ratio.
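The stabilized weights studied above take the form P(A=a)/P(A=a|L), evaluated at each subject's observed treatment. A minimal sketch with hypothetical propensities:

```python
def stabilized_iptw(treated, p_treat_given_l, p_treat_marginal):
    """Stabilized inverse-probability-of-treatment weights:
    P(A=a) / P(A=a | L) for each subject's observed treatment a."""
    weights = []
    for a, pl in zip(treated, p_treat_given_l):
        if a == 1:
            weights.append(p_treat_marginal / pl)
        else:
            weights.append((1.0 - p_treat_marginal) / (1.0 - pl))
    return weights

# hypothetical subjects: treatment indicator and P(A=1 | L) from a
# propensity model (values invented for illustration)
a = [1, 0, 1, 0]
pl = [0.8, 0.8, 0.2, 0.2]
print(stabilized_iptw(a, pl, 0.5))
```

Subjects whose observed treatment was unlikely given their covariates get up-weighted; the marginal probability in the numerator is what keeps the weights "stabilized" and the weighted pseudo-population close to the original sample size.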
Conditional Akaike information under generalized linear and proportional hazards mixed models
Donohue, M. C.; Overholser, R.; Xu, R.; Vaida, F.
2011-01-01
We study model selection for clustered data, when the focus is on cluster-specific inference. Such data are often modelled using random effects, and conditional Akaike information was proposed in Vaida & Blanchard (2005) and used to derive an information criterion under linear mixed models. Here we extend the approach to generalized linear and proportional hazards mixed models. Outside the normal linear mixed models, exact calculations are not available and we resort to asymptotic approximations. In the presence of nuisance parameters, a profile conditional Akaike information is proposed. Bootstrap methods are considered for their potential advantage in finite samples. Simulations show that the performances of the bootstrap and the analytic criteria are comparable, with the bootstrap demonstrating some advantages for larger cluster sizes. The proposed criteria are applied to two cancer datasets to select models when the cluster-specific inference is of interest. PMID:22822261
Estimation of Stratified Mark-Specific Proportional Hazards Models with Missing Marks
Sun, Yanqing; Gilbert, Peter B.
2013-01-01
An objective of randomized placebo-controlled preventive HIV vaccine efficacy trials is to assess the relationship between the vaccine effect to prevent infection and the genetic distance of the exposing HIV to the HIV strain represented in the vaccine construct. Motivated by this objective, recently a mark-specific proportional hazards model with a continuum of competing risks has been studied, where the genetic distance of the transmitting strain is the continuous `mark' defined and observable only in failures. A high percentage of genetic marks of interest may be missing for a variety of reasons, predominantly due to rapid evolution of HIV sequences after transmission before a blood sample is drawn from which HIV sequences are measured. This research investigates the stratified mark-specific proportional hazards model with missing marks where the baseline functions may vary with strata. We develop two consistent estimation approaches, the first based on the inverse probability weighted complete-case (IPW) technique, and the second based on augmenting the IPW estimator by incorporating auxiliary information predictive of the mark. We investigate the asymptotic properties and finite-sample performance of the two estimators, and show that the augmented IPW estimator, which satisfies a double robustness property, is more efficient. PMID:23519918
NPHMC: an R-package for estimating sample size of proportional hazards mixture cure model.
Cai, Chao; Wang, Songfeng; Lu, Wenbin; Zhang, Jiajia
2014-01-01
Due to advances in medical research, more and more diseases can be cured nowadays, which largely increases the need for easy-to-use software for calculating the sample size of clinical trials with cure fractions. Currently available sample size software, such as PROC POWER in SAS, the Survival Analysis module in PASS, and the powerSurvEpi package in R, is based on the standard proportional hazards (PH) model, which is not appropriate for designing a clinical trial with cure fractions. Instead of the standard PH model, the PH mixture cure model is an important tool for handling survival data with possible cure fractions. However, no tools are available to help design such a trial. Therefore, we develop an R package, NPHMC, to determine the sample size needed for such a study design. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
[Proportional hazards model analysis of women's reproductive career in present-day Japan]
Otani, K
1989-01-01
The 1st section of this paper, using a method to decompose a change in the total marital fertility rate into the quantum and tempo effects, confirmed the major effect of birth timing on the 1970s trends in the total marital fertility rate based on the pooled data of the 8th and 9th Japanese National Fertility Surveys of 1982 and 1987, respectively. In the next section, proportional hazards model analyses of the 1st, 2nd, and 3rd birth functions for the pooled data made it clear that the 1st and 2nd birth intervals among the marriage cohorts since the end of the 1960s were shorter than those in the early 1960s even after having controlled for other variables. Given that one or more conceptions can occur during a birth interval, proportional hazards model analyses were again utilized to examine the effects of various variables on the time elapsed between marriage and the 1st conception, the time between the end of the 1st pregnancy and the 2nd conception, and the time between the end of the 2nd pregnancy and the 3rd conception. The authors found that the 1st-conception probability of the marriage cohorts since the late 1960s was smaller than that of their predecessors, while the 2nd-conception probability was not affected by marriage cohort. In the last section, a logistic regression analysis of the probability of induced abortion showed that the probability of aborting a 2nd pregnancy decreased in the 1970s compared with that in the early 1960s. When a proportional hazards model analysis of the 2nd birth function was applied after omitting those cases where the 1st pregnancy did not result in birth and/or the 2nd pregnancy was aborted, the strong effect of marriage cohort on the 2nd birth probability was substantially diluted. These facts suggest that the shortened 2nd birth interval in the 1970s was partly caused by the shrinking probability of aborting a 2nd pregnancy in this period.
Extension of a Cox proportional hazards cure model when cure information is partially known
Wu, Yu; Lin, Yong; Lu, Shou-En; Li, Chin-Shang; Shih, Weichung Joe
2014-01-01
When there is evidence of long-term survivors, cure models are often used to model the survival curve. A cure model is a mixture model consisting of a cured fraction and an uncured fraction. Traditional cure models assume that the cured or uncured status of censored observations cannot be distinguished. In practice, however, some diagnostic procedures may provide partial information about the cured or uncured status, subject to certain sensitivity and specificity. The traditional cure model does not take advantage of this additional information. Motivated by a clinical study on bone injury in pediatric patients, we propose a novel extension of the traditional Cox proportional hazards (PH) cure model that incorporates the additional information about the cured status. This extension can be applied when the latency part of the cure model is modeled by the Cox PH model. Extensive simulations demonstrated that the proposed extension provides more efficient and less biased estimation, and the higher efficiency and smaller bias are associated with higher sensitivity and specificity of the diagnostic procedures. When the proposed extended Cox PH cure model was applied to the motivating example, there was a substantial improvement in the estimation. PMID:24511081
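The mixture structure described above implies a survival curve that plateaus at the cure fraction: S(t) = pi + (1 - pi)S_u(t). A toy sketch assuming an exponential latency distribution (a simplification; the entry itself uses a Cox PH latency model):

```python
import math

def cure_survival(t, pi, hazard):
    """Mixture cure survival S(t) = pi + (1 - pi) * S_u(t), where the
    cured fraction pi never experiences the event and the uncured
    fraction is taken, for illustration, to be exponential."""
    return pi + (1.0 - pi) * math.exp(-hazard * t)

# the curve starts at 1 and plateaus at the cure fraction
print(cure_survival(0.0, 0.3, 0.5), round(cure_survival(50.0, 0.3, 0.5), 6))
```

The visible plateau is exactly the "evidence of long-term survivors" that motivates fitting a cure model in the first place.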
REGULARIZATION FOR COX’S PROPORTIONAL HAZARDS MODEL WITH NP-DIMENSIONALITY*
Fan, Jianqing; Jiang, Jiancheng
2011-01-01
High-throughput genetic sequencing arrays, with thousands of measurements per sample and a large amount of associated censored clinical data, have increased the demand for better measurement-specific model selection. In this paper we establish strong oracle properties of non-concave penalized methods for non-polynomial (NP) dimensional data with censoring in the framework of Cox's proportional hazards model. A class of folded-concave penalties is employed, and both LASSO and SCAD are discussed specifically. We answer the question of under which dimensionality and correlation restrictions an oracle estimator can be constructed. It is demonstrated that non-concave penalties lead to a significant relaxation of the "irrepresentable condition" needed for LASSO model selection consistency. A large deviation result for martingales, of interest in its own right, is developed for characterizing the strong oracle property. Moreover, the non-concave regularized estimator is shown to achieve asymptotically the information bound of the oracle estimator. A coordinate-wise algorithm is developed for finding the grid of solution paths for penalized hazard regression problems, and its performance is evaluated on simulated examples and a gene association study. PMID:23066171
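As background on the folded-concave penalties mentioned above: unlike LASSO, whose penalty derivative is a constant λ, SCAD's derivative tapers to zero so that large coefficients are left nearly unpenalized. A sketch of the standard SCAD derivative (with Fan and Li's a = 3.7 default; illustrative, not code from the paper):

```python
import numpy as np

def scad_derivative(beta, lam, a=3.7):
    """Derivative of the SCAD penalty (Fan & Li 2001) for |beta|.

    Equals the LASSO derivative (lam) for small |beta|, then decays
    linearly and vanishes for |beta| > a*lam, so large coefficients
    are essentially unshrunk -- the folded-concave idea.
    """
    beta = np.abs(beta)
    middle = np.maximum(a * lam - beta, 0.0) / ((a - 1.0) * lam)
    return lam * np.where(beta <= lam, 1.0, middle)

lam = 1.0
for b in (0.5, 2.0, 5.0):
    print(b, float(scad_derivative(b, lam)))  # LASSO would give 1.0 at every b
```

The vanishing derivative for large coefficients is what reduces the bias of LASSO-type shrinkage and underlies the oracle behaviour discussed in the abstract.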
Sparse Estimation of Cox Proportional Hazards Models via Approximated Information Criteria
Fan, Juanjuan; Zhang, Ying
2016-01-01
We propose a new sparse estimation method for Cox (1972) proportional hazards models by optimizing an approximated information criterion. The main idea involves approximating the ℓ0 norm with a continuous or smooth unit dent function. The proposed method bridges best subset selection and regularisation by borrowing strength from both. It mimics best subset selection using a penalised likelihood approach, yet with no need for a tuning parameter. We further reformulate the problem with a reparameterisation step so that it reduces to a single unconstrained, nonconvex yet smooth programming problem, which can be solved efficiently, as in computing the maximum partial likelihood estimator (MPLE). Furthermore, the reparameterisation tactic yields the additional advantage of circumventing post-selection inference. The oracle property of the proposed method is established. Both simulated experiments and empirical examples are provided for assessment and illustration. PMID:26873398
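The idea of approximating the ℓ0 norm with a smooth "unit dent" can be sketched with one common surrogate, β²/(β² + τ), which is near 0 for coefficients close to zero and near 1 for large ones; this is an illustrative choice, not necessarily the exact function used in the paper:

```python
import numpy as np

def smooth_l0(beta, tau):
    """Smooth surrogate for the l0 norm: each coefficient contributes
    beta^2 / (beta^2 + tau), a 'unit dent' that is ~0 near zero and
    ~1 for large |beta|.  (Illustrative surrogate only; the paper's
    exact approximating function may differ.)"""
    beta = np.asarray(beta, dtype=float)
    return float(np.sum(beta**2 / (beta**2 + tau)))

beta = np.array([0.0, 0.001, 3.0, -5.0])
# For small tau the surrogate approaches the true nonzero count (2 here):
print(round(smooth_l0(beta, tau=1e-4), 3))
```

Because the surrogate is smooth, the penalised partial likelihood can be optimized with ordinary gradient-based methods, which is the computational advantage the abstract refers to.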
Novel harmonic regularization approach for variable selection in Cox's proportional hazards model.
Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan
2014-01-01
Variable selection is an important issue in regression, and a number of variable selection methods involving nonconvex penalty functions have been proposed. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be solved efficiently using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function with the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, including diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso-type methods.
Gayat, Etienne; Resche-Rigon, Matthieu; Mary, Jean-Yves; Porcher, Raphaël
2012-01-01
Propensity score methods are increasingly used in medical literature to estimate treatment effect using data from observational studies. Despite many papers on propensity score analysis, few have focused on the analysis of survival data. Even within the framework of the popular proportional hazard model, the choice among marginal, stratified or adjusted models remains unclear. A Monte Carlo simulation study was used to compare the performance of several survival models to estimate both marginal and conditional treatment effects. The impact of accounting or not for pairing when analysing propensity-score-matched survival data was assessed. In addition, the influence of unmeasured confounders was investigated. After matching on the propensity score, both marginal and conditional treatment effects could be reliably estimated. Ignoring the paired structure of the data led to an increased test size due to an overestimated variance of the treatment effect. Among the various survival models considered, stratified models systematically showed poorer performance. Omitting a covariate in the propensity score model led to a biased estimation of treatment effect, but replacement of the unmeasured confounder by a correlated one allowed a marked decrease in this bias. Our study showed that propensity scores applied to survival data can lead to unbiased estimation of both marginal and conditional treatment effect, when marginal and adjusted Cox models are used. In all cases, it is necessary to account for pairing when analysing propensity-score-matched data, using a robust estimator of the variance.
Xiong, Xiaoping; Wu, Jianrong
2017-01-01
The treatment of cancer has progressed dramatically in recent decades, such that it is no longer uncommon to see a cure or long-term survival in a significant proportion of patients with various types of cancer. To adequately account for the cure fraction when designing clinical trials, cure models should be used. In this article, a sample size formula for the weighted log-rank test is derived under the fixed alternative hypothesis for proportional hazards cure models. Simulation showed that the proposed sample size formula provides an accurate estimate of the sample size required for designing clinical trials under proportional hazards cure models.
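For context, the classical Schoenfeld events formula for the ordinary (unweighted) log-rank test without a cure fraction can be sketched as follows; the paper's weighted log-rank formula under PH cure models is a different, more general derivation:

```python
from math import ceil, log
from statistics import NormalDist

def schoenfeld_events(hr, alpha=0.05, power=0.8, alloc=0.5):
    """Events required by the standard (unweighted) log-rank test under
    proportional hazards -- Schoenfeld's formula:
    d = (z_{1-a/2} + z_{power})^2 / (p(1-p) * (log hr)^2).
    Background only; the cure-model formula in the paper differs."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return ceil((z_a + z_b) ** 2 / (alloc * (1 - alloc) * log(hr) ** 2))

def schoenfeld_n(hr, p_event, **kw):
    """Total sample size, inflating the event count by the overall
    event probability (which a cure fraction would reduce)."""
    return ceil(schoenfeld_events(hr, **kw) / p_event)

print(schoenfeld_events(hr=0.7))          # events to detect HR = 0.7 at 80% power
print(schoenfeld_n(hr=0.7, p_event=0.6))  # total n when 60% of patients have events
```

A cure fraction lowers the event probability, so a naive design like this underestimates the required follow-up; that is the motivation for the cure-model-specific formula in the paper.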
Multiple imputation of missing covariates for the Cox proportional hazards cure model.
Beesley, Lauren J; Bartlett, Jonathan W; Wolf, Gregory T; Taylor, Jeremy M G
2016-11-20
We explore several approaches for imputing partially observed covariates when the outcome of interest is a censored event time and when there is an underlying subset of the population that will never experience the event of interest. We call these subjects 'cured', and we consider the case where the data are modeled using a Cox proportional hazards (CPH) mixture cure model. We study covariate imputation approaches using fully conditional specification. We derive the exact conditional distribution and suggest a sampling scheme for imputing partially observed covariates in the CPH cure model setting. We also propose several approximations to the exact distribution that are simpler and more convenient to use for imputation. A simulation study demonstrates that the proposed imputation approaches outperform existing imputation approaches for survival data without a cure fraction in terms of bias in estimating CPH cure model parameters. We apply our multiple imputation techniques to a study of patients with head and neck cancer. Copyright © 2016 John Wiley & Sons, Ltd.
On corrected score approach for proportional hazards model with covariate measurement error.
Song, Xiao; Huang, Yijian
2005-09-01
In the presence of covariate measurement error in the proportional hazards model, several functional modeling methods have been proposed. These include the conditional score estimator (Tsiatis and Davidian, 2001, Biometrika 88, 447-458), the parametric correction estimator (Nakamura, 1992, Biometrics 48, 829-838), and the nonparametric correction estimator (Huang and Wang, 2000, Journal of the American Statistical Association 95, 1209-1219), listed in order of increasingly weak assumptions on the error. Although all are consistent, each suffers from potential difficulties with small samples and substantial measurement error. In this article, upon noting that the conditional score and parametric correction estimators are asymptotically equivalent in the case of normal error, we investigate their relative finite-sample performance and find that the former is superior. This finding motivates a general refinement approach to the parametric and nonparametric correction methods. The refined correction estimators are asymptotically equivalent to their standard counterparts but have improved numerical properties, and they perform better when the standard estimates do not exist or are outliers. Simulation results and an application to an HIV clinical trial are presented.
Mediation Analysis with Survival Outcomes: Accelerated Failure Time vs. Proportional Hazards Models
Gelfand, Lois A.; MacKinnon, David P.; DeRubeis, Robert J.; Baraldi, Amanda N.
2016-01-01
Objective: Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. Method: We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. Results: AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome—underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. Conclusions: When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results. PMID:27065906
Generating survival times to simulate Cox proportional hazards models with time-varying covariates.
Austin, Peter C
2012-12-20
Simulations and Monte Carlo methods serve an important role in modern statistical research. They allow for an examination of the performance of statistical procedures in settings in which analytic and mathematical derivations may not be feasible. A key element in any statistical simulation is the existence of an appropriate data-generating process: one must be able to simulate data from a specified statistical model. We describe data-generating processes for the Cox proportional hazards model with time-varying covariates when event times follow an exponential, Weibull, or Gompertz distribution. We consider three types of time-varying covariates: first, a dichotomous time-varying covariate that can change at most once from untreated to treated (e.g., organ transplant); second, a continuous time-varying covariate such as cumulative exposure at a constant dose to radiation or to a pharmaceutical agent used for a chronic condition; third, a dichotomous time-varying covariate with a subject being able to move repeatedly between treatment states (e.g., current compliance or use of a medication). In each setting, we derive closed-form expressions that allow one to simulate survival times so that survival times are related to a vector of fixed or time-invariant covariates and to a single time-varying covariate. We illustrate the utility of our closed-form expressions for simulating event times by using Monte Carlo simulations to estimate the statistical power to detect as statistically significant the effect of different types of binary time-varying covariates. This is compared with the statistical power to detect as statistically significant a binary time-invariant covariate.
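The first setting above (a dichotomous covariate that switches on at most once, e.g. at organ transplant) admits a simple closed-form inversion of the cumulative hazard when the baseline is exponential. A sketch in the spirit of the paper (notation and generality differ; the paper also covers Weibull and Gompertz baselines):

```python
import random
from math import exp, log

def sim_event_time(lam, beta, t0, u=None, rng=random):
    """Closed-form event time under an exponential baseline hazard lam
    and a binary time-varying covariate that switches on at time t0,
    multiplying the hazard by exp(beta) thereafter.

    The cumulative hazard is H(t) = lam*t for t <= t0 and
    H(t) = lam*t0 + lam*exp(beta)*(t - t0) for t > t0; we invert it
    at -log(U) with U ~ Uniform(0, 1)."""
    if u is None:
        u = rng.random()
    target = -log(u)
    if target < lam * t0:            # event occurs before the switch
        return target / lam
    return t0 + (target - lam * t0) / (lam * exp(beta))

# With beta = 0 the covariate has no effect and T is plain Exponential(lam):
print(round(sim_event_time(lam=0.5, beta=0.0, t0=2.0, u=0.5), 4))  # -log(0.5)/0.5 = 1.3863
```

Drawing many such times and fitting a Cox model with the time-varying covariate recovers beta, which is exactly the kind of Monte Carlo check the paper uses to study power.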
Katsahian, Sandrine; Resche-Rigon, Matthieu; Chevret, Sylvie; Porcher, Raphaël
2006-12-30
In the competing-risks setting, to test the effect of a covariate on the probability of one particular cause of failure, the Fine and Gray model for the subdistribution hazard can be used. However, competing risks data sometimes cannot be considered independent because of a clustered design, for instance in registry cohorts or multicentre clinical trials. Frailty models have been shown to be useful for analysing such clustered data in the classical survival setting, where only one risk acts on the population. The inclusion of random effects in the subdistribution hazard has not yet been assessed. In this work, we propose a frailty model for the subdistribution hazard. This allows one first to assess the heterogeneity across clusters, and then to incorporate such an effect when testing the effect of a covariate of interest. Based on a simulation study, the effect of the presence of heterogeneity on tests for covariate effects was examined. Finally, the model is illustrated on a data set from a registry cohort of patients with acute myeloid leukaemia who underwent bone marrow transplantation.
Miladinovic, Branko; Kumar, Ambuj; Mhaskar, Rahul; Kim, Sehwan; Schonwetter, Ronald; Djulbegovic, Benjamin
2012-01-01
Prognostic models are often used to estimate the length of patient survival. The Cox proportional hazards model has traditionally been applied to assess the accuracy of prognostic models. However, it may be suboptimal due to the inflexibility to model the baseline survival function and when the proportional hazards assumption is violated. The aim of this study was to use internal validation to compare the predictive power of a flexible Royston-Parmar family of survival functions with the Cox proportional hazards model. We applied the Palliative Performance Scale on a dataset of 590 hospice patients at the time of hospice admission. The retrospective data were obtained from the Lifepath Hospice and Palliative Care center in Hillsborough County, Florida, USA. The criteria used to evaluate and compare the models' predictive performance were the explained variation statistic R2, scaled Brier score, and the discrimination slope. The explained variation statistic demonstrated that overall the Royston-Parmar family of survival functions provided a better fit (R2 = 0.298; 95% CI: 0.236–0.358) than the Cox model (R2 = 0.156; 95% CI: 0.111–0.203). The scaled Brier scores and discrimination slopes were consistently higher under the Royston-Parmar model. Researchers involved in prognosticating patient survival are encouraged to consider the Royston-Parmar model as an alternative to Cox. PMID:23082220
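The scaled Brier score used above can be sketched in its simplest binary form, where the null model predicts the overall event rate for everyone (censoring-adjusted versions additionally reweight terms by inverse censoring probabilities):

```python
import numpy as np

def scaled_brier(y, p):
    """Scaled Brier score: 1 - Brier/Brier_null, where the null model
    predicts the overall event rate for everyone.  Higher is better;
    0 means no improvement over the null.  (Binary sketch only; the
    survival version handles censoring via inverse-probability weights.)"""
    y = np.asarray(y, dtype=float)
    p = np.asarray(p, dtype=float)
    brier = np.mean((y - p) ** 2)
    brier_null = np.mean((y - y.mean()) ** 2)
    return 1.0 - brier / brier_null

y = [1, 0, 1, 0]
print(round(scaled_brier(y, [0.9, 0.1, 0.8, 0.2]), 3))  # good predictions -> close to 1
```

Because the score is scaled against the null model, it is comparable across models with different baseline survival specifications, which is why it suits the Royston-Parmar vs. Cox comparison.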
Testing Goodness-of-Fit for the Proportional Hazards Model based on Nested Case-Control Data
Lu, Wenbin; Liu, Mengling; Chen, Yi-Hau
2014-01-01
Nested case-control sampling is a popular design for large epidemiological cohort studies due to its cost effectiveness. A number of methods have been developed for estimation of the proportional hazards model with nested case-control data; however, the evaluation of modeling assumptions has received less attention. In this paper, we propose a class of goodness-of-fit test statistics for testing the proportional hazards assumption based on nested case-control data. The test statistics are constructed based on asymptotically mean-zero processes derived from Samuelsen's maximum pseudo-likelihood estimation method. In addition, we develop an innovative resampling scheme to approximate the asymptotic distribution of the test statistics while accounting for the dependent sampling scheme of the nested case-control design. Numerical studies are conducted to evaluate the performance of our proposed approach, and an application to the Wilms' Tumor Study is given to illustrate the methodology. PMID:25298193
Goodness-of-fit test of the stratified mark-specific proportional hazards model with continuous mark
Sun, Yanqing; Li, Mei; Gilbert, Peter B.
2014-01-01
Motivated by the need to assess HIV vaccine efficacy, previous studies proposed an extension of the discrete competing risks proportional hazards model in which the cause of failure is replaced by a continuous mark observed only at the failure time. However, the model assumptions may fail in several ways, and no diagnostic testing procedure for this situation has been proposed. We propose a goodness-of-fit test procedure for the stratified mark-specific proportional hazards model, in which the regression parameters depend nonparametrically on the mark and the baseline hazard depends nonparametrically on both time and the mark. The test statistics are constructed based on weighted cumulative mark-specific martingale residuals. The critical values of the proposed test statistics are approximated using the Gaussian multiplier method. The performance of the proposed tests is examined extensively in simulations for a variety of models under the null hypothesis and under different types of alternative models. An analysis of the 'Step' HIV vaccine efficacy trial using the proposed method is presented. The analysis suggests that the HIV vaccine candidate may increase susceptibility to HIV acquisition. PMID:26461462
Sun, Yanqing; Li, Mei; Gilbert, Peter B.
2013-01-01
For time-to-event data with finitely many competing risks, the proportional hazards model has been a popular tool for relating cause-specific outcomes to covariates (Prentice and others, 1978. The analysis of failure times in the presence of competing risks. Biometrics 34, 541–554). Inspired by previous research in HIV vaccine efficacy trials, the cause of failure is replaced by a continuous mark observed only in subjects who fail. This article studies an extension of this approach that allows a multivariate continuum of competing risks, to better account for the fact that the candidate HIV vaccines tested in efficacy trials have contained multiple HIV sequences, with the purpose of eliciting multiple types of immune response that recognize and block different types of HIV viruses. We develop inference for the proportional hazards model in which the regression parameters depend parametrically on the marks, to avoid the curse of dimensionality, and the baseline hazard depends nonparametrically on both time and marks. Goodness-of-fit tests are constructed based on generalized weighted martingale residuals. The finite-sample performance of the proposed methods is examined through extensive simulations. The methods are applied to a vaccine efficacy trial to examine whether and how certain antigens represented inside the vaccine are relevant for protection or anti-protection against the exposing HIVs. PMID:22764174
HE, PENG; ERIKSSON, FRANK; SCHEIKE, THOMAS H.; ZHANG, MEI-JIE
2015-01-01
With competing risks data, one often needs to assess the treatment and covariate effects on the cumulative incidence function. Fine and Gray proposed a proportional hazards regression model for the subdistribution of a competing risk with the assumption that the censoring distribution and the covariates are independent. Covariate-dependent censoring sometimes occurs in medical studies. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with proper adjustments for covariate-dependent censoring. We consider a covariate-adjusted weight function by fitting the Cox model for the censoring distribution and using the predictive probability for each individual. Our simulation study shows that the covariate-adjusted weight estimator is basically unbiased when the censoring time depends on the covariates, and the covariate-adjusted weight approach works well for the variance estimator as well. We illustrate our methods with bone marrow transplant data from the Center for International Blood and Marrow Transplant Research (CIBMTR). Here cancer relapse and death in complete remission are two competing risks. PMID:27034534
Sasaki, Osamu; Aihara, Mitsuo; Hagiya, Koichi; Nishiura, Akiko; Ishii, Kazuo; Satoh, Masahiro
2012-02-01
The objective of this study was to confirm the stability of genetic estimation of longevity in the Holstein population in Japan. Data on the first 10 lactations were obtained from the Livestock Improvement Association of Japan. Longevity was defined as the number of days from first calving until culling or censoring. DATA1 and DATA2 included the survival records for the periods 1991-2003 and 1991-2005, respectively. The proportional hazards model included the effects of region-parity-lactation stage-milk yield class, age at first calving, herd-year-season, and sire. The heritabilities on the original scale for DATA1 and DATA2 were 0.119 and 0.123, respectively. The estimated transmitting abilities (ETAs) of young sires in DATA1 may have been underestimated, but the coefficient δ, which indicated the bias in genetic trend between DATA1 and DATA2, was not significant. The regression coefficient of ETAs between DATA1 and DATA2 was very close to 1. The proportional hazards model thus provided stable estimates of the ETAs for longevity of sires in Japan.
Asano, Junichi; Hirakawa, Akihiro; Hamada, Chikuma
2014-01-01
A cure rate model is a survival model incorporating the cure rate, with the assumption that the population contains both uncured and cured individuals. It is a powerful statistical tool for prognostic studies, especially in cancer. The cure rate is important for making treatment decisions in clinical practice. The proportional hazards (PH) cure model can predict the cure rate for each patient. It contains a logistic regression component for the cure rate and a Cox regression component to estimate the hazard for uncured patients. A measure for quantifying the predictive accuracy of the cure rate estimated by the Cox PH cure model is required, as there has been a lack of previous research in this area. We used the Cox PH cure model for breast cancer data; however, the area under the receiver operating characteristic curve (AUC) could not be estimated because many patients were censored. In this study, we used imputation-based AUCs to assess the predictive accuracy of the cure rate from the PH cure model. We examined the precision of these AUCs using simulation studies. The results demonstrated that the imputation-based AUCs were estimable and their biases were negligibly small in many cases, even though the ordinary AUC could not be estimated. Additionally, we introduce a bias-correction method for the imputation-based AUCs and find that the bias-corrected estimate successfully compensated for the overestimation in the simulation studies. We also illustrate the estimation of the imputation-based AUCs using the breast cancer data.
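The AUC underlying the imputation-based estimates above is the Mann-Whitney concordance probability; once censored cure statuses have been imputed, computing it reduces to a rank comparison like this minimal sketch:

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """AUC as the Mann-Whitney concordance probability: the chance that
    a randomly chosen positive case (e.g. an imputed 'cured' patient)
    scores higher than a randomly chosen negative one, with ties
    counting one half."""
    pos = np.asarray(scores_pos, dtype=float)
    neg = np.asarray(scores_neg, dtype=float)
    greater = (pos[:, None] > neg[None, :]).sum()   # concordant pairs
    ties = (pos[:, None] == neg[None, :]).sum()     # tied pairs
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Perfect separation gives AUC = 1; indistinguishable scores give 0.5:
print(auc([0.9, 0.8], [0.1, 0.2]))
print(auc([0.5, 0.5], [0.5, 0.5]))
```

In the paper's setting this computation is repeated across multiple imputations of the censored cure labels and the resulting AUCs are averaged.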
Imbayarwo-Chikosi, V E; Ducrocq, V; Banga, C B; Halimani, T E; van Wyk, J B; Maiwashe, A; Dzama, K
2017-03-14
Non-genetic factors influencing functional longevity and the heritability of the trait were estimated in South African Holsteins using a piecewise Weibull proportional hazards model. Data consisted of records of 161,222 daughters of 2,051 sires calving between 1995 and 2013. The reference model included the fixed time-independent effect of age at first calving and time-dependent interactions involving lactation number, region, season and age at calving, within-herd class of milk production, fat and protein content, class of annual variation in herd size, and the random herd-year effect. Random sire and maternal grandsire effects were added to the model to estimate genetic parameters. The within-lactation Weibull baseline hazards were assumed to change at 0, 270, and 380 days and at the drying date. Within-herd milk production class had the largest contribution to the relative risk of culling. The relative culling risk increased with lower protein and fat per cent production classes and late age at first calving. Cows in large shrinking herds also had a high relative risk of culling. The estimate of the sire genetic variance was 0.0472 ± 0.0017, giving a theoretical heritability estimate of 0.11 in the complete absence of censoring. Genetic trends indicated an overall decrease in functional longevity of 0.014 standard deviations from 1995 to 2007. There are opportunities for including the trait in the breeding objective for South African Holstein cattle.
Li, Shuli; Gray, Robert J
2016-09-01
We consider methods for estimating the treatment effect and/or the covariate by treatment interaction effect in a randomized clinical trial under noncompliance with time-to-event outcome. As in Cuzick et al. (2007), assuming that the patient population consists of three (possibly latent) subgroups based on treatment preference: the ambivalent group, the insisters, and the refusers, we estimate the effects among the ambivalent group. The parameters have causal interpretations under standard assumptions. The article contains two main contributions. First, we propose a weighted per-protocol (Wtd PP) estimator through incorporating time-varying weights in a proportional hazards model. In the second part of the article, under the model considered in Cuzick et al. (2007), we propose an EM algorithm to maximize a full likelihood (FL) as well as the pseudo likelihood (PL) considered in Cuzick et al. (2007). The E step of the algorithm involves computing the conditional expectation of a linear function of the latent membership, and the main advantage of the EM algorithm is that the risk parameters can be updated by fitting a weighted Cox model using standard software and the baseline hazard can be updated using closed-form solutions. Simulations show that the EM algorithm is computationally much more efficient than directly maximizing the observed likelihood. The main advantage of the Wtd PP approach is that it is more robust to model misspecifications among the insisters and refusers since the outcome model does not impose distributional assumptions among these two groups. © 2016, The International Biometric Society.
Rahman, Mostafizur; Shariff, Asma Ahmad; Shafie, Aziz; Saaid, Rahmah; Tahir, Rohayatimah Md
2015-07-31
Caesarean delivery (C-section) rates have increased dramatically around the world in the past decades. This increase has been attributed to multiple factors, such as maternal, socio-demographic, and institutional factors, and has become a global health issue in both developed and developing countries. This study therefore examines the relationships of mode of delivery and of time to event with provider characteristics (i.e., covariates). The study is based on a total of 1142 delivery cases from the maternity wards of four private and four public hospitals. Logistic regression and Cox proportional hazards models were the statistical tools of the study. The multivariate logistic regression analysis indicated that a previous C-section, prolonged labour, higher educational level, maternal age of 25 years and above, lower birth order, baby length of more than 45 cm, and irregular intake of a balanced diet were significant predictors of C-section. With regard to survival time, using the Cox model, fetal distress, previous C-section, mother's age, age at marriage, and birth order were the most important independent risk factors for C-section. By forward stepwise selection, the study reveals that the factors common to both analyses were previous C-section, mother's age, and birth order. These results suggest that such factors may influence the health-seeking behaviour of women, and that programmes and policies need to address the increasing rate of caesarean delivery in the northern region of Bangladesh. For determining risk factors, the Akaike Information Criterion (AIC) indicated that the logistic model was the more efficient model.
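The AIC comparison mentioned at the end is the standard 2k − 2 log L̂ rule, with the smaller value preferred. A minimal sketch with hypothetical fitted log-likelihoods (not values from the study):

```python
def aic(log_likelihood, n_params):
    """Akaike Information Criterion: 2k - 2*logL.  Smaller is better;
    the 2k term penalizes extra parameters."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical fitted values, only to illustrate the comparison rule:
candidates = {
    "logistic": aic(log_likelihood=-512.4, n_params=8),
    "cox":      aic(log_likelihood=-509.9, n_params=12),
}
best = min(candidates, key=candidates.get)
print(candidates, best)  # the slightly better fit of "cox" loses to its extra parameters
```

One caveat worth noting: the Cox model is fit by partial likelihood, so AIC values from logistic and Cox fits are not strictly on the same scale; the study's use of AIC for this comparison should be read with that in mind.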
Casellas, J
2016-03-01
Age at first lambing (AFL) plays a key role in the reproductive performance of sheep flocks, although there are no genetic selection programs accounting for this trait in the sheep industry. This could be due to the non-Gaussian distribution of AFL data, which must be properly accounted for by the analytical model. In this manuscript, two different parameterizations were implemented to analyze AFL in the Ripollesa sheep breed: the skew-Gaussian mixed linear model (sGML) and the piecewise Weibull proportional hazards model (PWPH). Data were available from 10 235 ewes born between 1972 and 2013 in 14 purebred Ripollesa flocks located in the north-east region of Spain. On average, ewes gave their first lambing shortly after their first year and a half of life (590.9 days), and within-flock averages ranged between 523.4 days and 696.6 days. Model fit was compared using the deviance information criterion (DIC; the smaller the DIC statistic, the better the model fit). Model sGML was clearly penalized (DIC=200 059), whereas model PWPH provided smaller estimates and reached the minimum DIC when one cut point was added to the initial Weibull model (DIC=132 545). The pure Weibull baseline and parameterizations with two or more cut points were discarded due to larger DIC estimates (>134 200). The only systematic effect influencing AFL was the season of birth: summer- and fall-born ewes showed a remarkable shortening of their AFL, whereas neither birth type nor birth weight had a relevant impact on this reproductive trait. On the other hand, heritability on the original scale derived from model PWPH was high, with a model estimate of 0.114 and a highest posterior density region ranging from 0.079 to 0.143. In conclusion, Gaussian-related mixed linear models should be avoided when analyzing AFL, whereas model PWPH must be viewed as a better alternative with superior goodness of fit; moreover, the additive genetic background underlying this
Gilbert, Peter B.; Sun, Yanqing
2014-01-01
This article develops hypothesis testing procedures for the stratified mark-specific proportional hazards model in the presence of missing marks. The motivating application is preventive HIV vaccine efficacy trials, where the mark is the genetic distance of an infecting HIV sequence to an HIV sequence represented inside the vaccine. The test statistics are constructed based on two-stage efficient estimators, which utilize auxiliary predictors of the missing marks. The asymptotic properties and finite-sample performances of the testing procedures are investigated, demonstrating double-robustness and effectiveness of the predictive auxiliaries to recover efficiency. The methods are applied to the RV144 vaccine trial. PMID:25641990
Chi, Peter; Aras, Radha; Martin, Katie; Favero, Carlita
2016-05-15
Fetal Alcohol Spectrum Disorders (FASD) collectively describes the constellation of effects resulting from human alcohol consumption during pregnancy. Even with public awareness, the incidence of FASD is estimated to be upwards of 5% in the general population and is becoming a global health problem. The physical, cognitive, and behavioral impairments of FASD are recapitulated in animal models. Recently rodent models utilizing voluntary drinking paradigms have been developed that accurately reflect moderate consumption, which makes up the majority of FASD cases. The range in severity of FASD characteristics reflects the frequency, dose, developmental timing, and individual susceptibility to alcohol exposure. As most rodent models of FASD use C57BL/6 mice, there is a need to expand the stocks of mice studied in order to more fully understand the complex neurobiology of this disorder. To that end, we allowed pregnant Swiss Webster mice to voluntarily drink ethanol via the drinking in the dark (DID) paradigm throughout their gestation period. Ethanol exposure did not alter gestational outcomes as determined by no significant differences in maternal weight gain, maternal liquid consumption, litter size, or pup weight at birth or weaning. Despite seemingly normal gestation, ethanol-exposed offspring exhibit significantly altered timing to achieve developmental milestones (surface righting, cliff aversion, and open field traversal), as analyzed through mixed-effects Cox proportional hazards models. These results confirm Swiss Webster mice as a viable option to study the incidence and causes of ethanol-induced neurobehavioral alterations during development. Future studies in our laboratory will investigate the brain regions and molecules responsible for these behavioral changes. Copyright © 2016. Published by Elsevier B.V.
Casellas, J; Tarrés, J; Piedrafita, J; Varona, L
2006-10-01
Given that correct assumptions about the baseline survival function are critical to the validity of further inferences, specific tools to test the fit of a model to real data are essential in proportional hazards models. To this end, we propose a parametric bootstrap to test the fit of survival models. Monte Carlo simulations are used to generate new data sets from the estimates obtained through the assumed models, and then bootstrap intervals can be established for the survival function along the time space studied. Significant fitting deficiencies are revealed when the real survival function is not included within the bootstrap interval. We tested this procedure in a survival data set of Bruna dels Pirineus beef calves, assuming 4 parametric models (exponential, Weibull, exponential time-dependent, Weibull time-dependent) and the Cox semiparametric model. Fitting deficiencies were not observed for the Cox model and the exponential time-dependent model, whereas the Weibull time-dependent model suffered from moderate overestimation at different ages. Thus, the exponential time-dependent model appears preferable because of its correct fit for survival data of beef calves and its smaller computational and time requirements. Exponential and Weibull models were completely rejected due to the continuous over- and underestimation of the survival probability they produced. These results highlight the flexibility of parametric models with time-dependent effects, which achieved a fit comparable to nonparametric models.
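The parametric-bootstrap idea can be sketched in a few lines: fit the assumed model, regenerate data from the fit, form a pointwise band for the survival function, and flag grid points where the empirical survival falls outside it. This toy version assumes an exponential model and uncensored data; the "real" data are deliberately Weibull so a misfit can appear.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated "real" survival data (uncensored, for simplicity).
data = rng.weibull(1.5, size=300) * 10.0

# Step 1: fit the assumed model (exponential) by maximum likelihood.
lam_hat = 1.0 / data.mean()

# Step 2: parametric bootstrap -- regenerate data from the fitted model
# and collect the implied survival curves on a time grid.
t_grid = np.linspace(1.0, 15.0, 30)
B = 500
curves = np.empty((B, t_grid.size))
for b in range(B):
    boot = rng.exponential(1.0 / lam_hat, size=data.size)
    lam_b = 1.0 / boot.mean()
    curves[b] = np.exp(-lam_b * t_grid)

# Step 3: pointwise bootstrap interval for the survival function.
lo, hi = np.percentile(curves, [2.5, 97.5], axis=0)

# Step 4: compare with the empirical survival function; grid points
# outside the band flag a fitting deficiency of the assumed model.
emp = np.array([(data > t).mean() for t in t_grid])
outside = np.sum((emp < lo) | (emp > hi))
```

The paper's procedure is richer (censoring, time-dependent effects, several parametric families); this sketch only conveys the simulate-refit-band mechanism.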
Wynant, Willy; Abrahamowicz, Michal
2016-11-01
Standard optimization algorithms for maximizing likelihood may not be applicable to the estimation of those flexible multivariable models that are nonlinear in their parameters. For applications where the model's structure permits separating estimation of mutually exclusive subsets of parameters into distinct steps, we propose the alternating conditional estimation (ACE) algorithm. We validate the algorithm, in simulations, for estimation of two flexible extensions of Cox's proportional hazards model where the standard maximum partial likelihood estimation does not apply, with simultaneous modeling of (1) nonlinear and time-dependent effects of continuous covariates on the hazard, and (2) nonlinear interaction and main effects of the same variable. We also apply the algorithm in real-life analyses to estimate nonlinear and time-dependent effects of prognostic factors for mortality in colon cancer. Analyses of both simulated and real-life data illustrate good statistical properties of the ACE algorithm and its ability to yield new potentially useful insights about the data structure. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
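The alternating idea behind ACE can be illustrated on a toy least-squares problem that is nonlinear in its parameters jointly but easy in each subset separately: with the exponent fixed, the scale has a closed form; with the scale fixed, the exponent is a one-dimensional search. The model and data below are illustrative, not from the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0, 100)
y = 2.0 * np.exp(0.7 * x) + rng.normal(0.0, 0.05, x.size)

a, b = 1.0, 0.0  # starting values
for _ in range(50):
    # Step 1: with b fixed, a has a closed-form least-squares solution.
    g = np.exp(b * x)
    a = (g @ y) / (g @ g)
    # Step 2: with a fixed, minimize the same criterion over b alone.
    b = minimize_scalar(lambda b_: np.sum((y - a * np.exp(b_ * x)) ** 2),
                        bounds=(-2.0, 2.0), method="bounded").x
```

Each step can only decrease the shared objective, which is why alternating conditional steps converge to a stationary point even when no joint optimizer is available, the same structural trick ACE applies to partial-likelihood estimation.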
Marriott, Lorrae; Zinaman, Michael; Abrams, Keith R; Crowther, Michael J; Johnson, Sarah
2017-09-01
Background: Human chorionic gonadotrophin is a marker of early pregnancy. This study sought to determine whether healthy and failing pregnancies can be distinguished by combining patient-associated risk factors with daily urinary human chorionic gonadotrophin concentrations. Methods: Data were from a study that collected daily early morning urine samples from women trying to conceive (n = 1505), 250 of whom became pregnant. Data from 129 women who became pregnant (including 44 miscarriages) were included in these analyses. A longitudinal model was used to profile human chorionic gonadotrophin, a Cox proportional hazards model to assess the effect of demographic/menstrual history data on the time to failed pregnancy, and a two-stage model to combine these two models. Results: The profile of log human chorionic gonadotrophin concentrations in women suffering miscarriage differs from that of viable pregnancies; the rate of human chorionic gonadotrophin rise is slower in those suffering a biochemical loss (loss before six weeks, recognized by a rise and fall of human chorionic gonadotrophin) and tends to plateau at a lower log human chorionic gonadotrophin in women suffering an early miscarriage (loss at six weeks or later), compared with viable pregnancies. Maternal age, longest cycle length and time from luteinizing hormone surge to human chorionic gonadotrophin reaching 25 mIU/mL were found to be significantly associated with miscarriage risk. The two-stage model found that for an increase of one day in the time from luteinizing hormone surge to human chorionic gonadotrophin reaching 25 mIU/mL, there is a 30% increase in miscarriage risk (hazard ratio: 1.30; 95% confidence interval: 1.04, 1.62). Conclusion: The rise of human chorionic gonadotrophin in early pregnancy could be useful for predicting pregnancy viability. Daily tracking of urinary human chorionic gonadotrophin may enable early identification of some pregnancies at risk of miscarriage.
Lachin, John M
2013-11-10
I describe general expressions for the evaluation of sample size and power for the K group Mantel-logrank test or the Cox proportional hazards (PH) model score test. Under an exponential model, the method of Lachin and Foulkes for the 2 group case is extended to the K ⩾2 group case using the non-centrality parameter of the K - 1 df chi-square test. I also show similar results to apply to the K group score test in a Cox PH model. Lachin and Foulkes employed a truncated exponential distribution to provide for a non-linear rate of enrollment. I present expressions for the mean time of enrollment and the expected follow-up time in the presence of exponential losses to follow-up. When used with the expression for the noncentrality parameter for the test, equations are derived for the evaluation of sample size and power under specific designs with r years of recruitment and T years total duration. I also describe sample size and power for a stratified-adjusted K group test and for the assessment of a group by stratum interaction. Similarly, I describe computations for a stratified-adjusted analysis of a quantitative covariate and a test of a stratum by covariate interaction in the Cox PH model. Copyright © 2013 John Wiley & Sons, Ltd.
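Two pieces of the machinery above have compact, well-known forms that can be sketched directly: Schoenfeld's approximation for the number of events needed by a 2-group logrank test, and power read off a non-central chi-square distribution with K - 1 degrees of freedom. This is a generic sketch of those standard formulas, not Lachin's full enrollment/follow-up derivation.

```python
import numpy as np
from scipy.stats import norm, chi2, ncx2

def schoenfeld_events(hr, alpha=0.05, power=0.90, p=0.5):
    """Events required for a 2-group logrank test (Schoenfeld's formula);
    p is the allocation proportion in one group."""
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return (za + zb) ** 2 / (p * (1 - p) * np.log(hr) ** 2)

def chi2_power(ncp, df, alpha=0.05):
    """Power of a (K-1)-df chi-square test with non-centrality ncp."""
    crit = chi2.ppf(1 - alpha, df)
    return 1.0 - ncx2.cdf(crit, df, ncp)

d = schoenfeld_events(0.7)  # roughly 330 events for HR = 0.7
```

For K = 2 (df = 1) the non-centrality parameter implied by the Schoenfeld calculation reproduces the target power, which is a useful consistency check when extending to K > 2 groups.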
Unger, J B; Chen, X
1999-01-01
The increasing prevalence of adolescent smoking demonstrates the need to identify factors associated with early smoking initiation. Previous studies have shown that smoking by social network members and receptivity to pro-tobacco marketing are associated with smoking among adolescents. It is not clear, however, whether these variables also are associated with the age of smoking initiation. Using data from 10,030 California adolescents, this study identified significant correlates of age of smoking initiation using bivariate methods and a multivariate proportional hazards model. Age of smoking initiation was earlier among those adolescents whose friends, siblings, or parents were smokers, and among those adolescents who had a favorite tobacco advertisement, had received tobacco promotional items, or would be willing to use tobacco promotional items. Results suggest that the smoking behavior of social network members and pro-tobacco media influences are important determinants of age of smoking initiation. Because early smoking initiation is associated with higher levels of addiction in adulthood, tobacco control programs should attempt to counter these influences.
Testing proportionality in the proportional odds model fitted with GEE.
Stiger, T R; Barnhart, H X; Williamson, J M
1999-06-15
Generalized estimating equations (GEE) methodology as proposed by Liang and Zeger has received widespread use in the analysis of correlated binary data. Miller et al. and Lipsitz et al. extended GEE to correlated nominal and ordinal categorical data; in particular, they used GEE for fitting McCullagh's proportional odds model. In this paper, we consider robust (that is, empirically corrected) and model-based versions of both a score test and a Wald test for assessing the assumption of proportional odds in the proportional odds model fitted with GEE. The Wald test is based on fitting separate multiple logistic regression models for each dichotomization of the response variable, whereas the score test requires fitting just the proportional odds model. We evaluate the proposed tests in small to moderate samples by simulating data from a series of simple models. We illustrate the use of the tests on three data sets from medical studies.
Charvat, Hadrien; Remontet, Laurent; Bossard, Nadine; Roche, Laurent; Dejardin, Olivier; Rachet, Bernard; Launoy, Guy; Belot, Aurélien
2016-08-15
The excess hazard regression model is an approach developed for the analysis of cancer registry data to estimate net survival, that is, the survival of cancer patients that would be observed if cancer was the only cause of death. Cancer registry data typically possess a hierarchical structure: individuals from the same geographical unit share common characteristics such as proximity to a large hospital that may influence access to and quality of health care, so that their survival times might be correlated. As a consequence, correct statistical inference regarding the estimation of net survival and the effect of covariates should take this hierarchical structure into account. It becomes particularly important as many studies in cancer epidemiology aim at studying the effect on the excess mortality hazard of variables, such as deprivation indexes, often available only at the ecological level rather than at the individual level. We developed here an approach to fit a flexible excess hazard model including a random effect to describe the unobserved heterogeneity existing between different clusters of individuals, and with the possibility to estimate non-linear and time-dependent effects of covariates. We demonstrated the overall good performance of the proposed approach in a simulation study that assessed the impact on parameter estimates of the number of clusters, their size and their level of unbalance. We then used this multilevel model to describe the effect of a deprivation index defined at the geographical level on the excess mortality hazard of patients diagnosed with cancer of the oral cavity. Copyright © 2016 John Wiley & Sons, Ltd.
Ethnicity, education, and the non-proportional hazard of first marriage in Turkey.
Gore, DeAnna L; Carlson, Elwood
2010-07-01
This study uses the 1998 Turkish Demographic and Health Survey to estimate non-proportional piecewise-constant hazards for first marriage among women in Turkey by education and ethnicity, with controls for region of residence and rural-urban migration. At low education levels, Kurdish speakers married earlier than women who spoke Turkish or other languages, but at high education levels Kurdish women delayed marriage more than other women. This reversal across education groups furnishes a new illustration of the minority-group-status hypothesis, specifically focused on marriage as the first step in the family formation process. The ethnic contrast concerned only marriage timing in Turkey, not proportions ever marrying: eventual marriage remained nearly universal for all groups of women. This means that an assumption of proportional duration hazards (widespread in contemporary research) across the whole range of marriage-forming ages should be replaced by models with non-proportional duration hazards.
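A piecewise-constant hazard is the simplest device for letting the hazard vary freely across age segments: within each interval the maximum likelihood rate is just events divided by person-time. The sketch below uses hypothetical ages at marriage, not survey data.

```python
import numpy as np

def piecewise_hazard(times, events, cuts):
    """MLE of a piecewise-constant hazard: events / person-time per interval.

    `cuts` are interior cut points; intervals are [0,c1), [c1,c2), ..., [ck,inf).
    `events` marks whether the time is an event (True) or censoring (False).
    """
    edges = np.concatenate(([0.0], cuts, [np.inf]))
    rates = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        # person-time each subject contributes to [lo, hi)
        pt = np.clip(np.minimum(times, hi) - lo, 0.0, None)
        # events observed inside [lo, hi)
        ev = np.sum(events & (times >= lo) & (times < hi))
        rates.append(ev / pt.sum() if pt.sum() > 0 else np.nan)
    return np.array(rates)

# Hypothetical ages at first marriage (event) or censoring.
times = np.array([17.0, 19.0, 21.0, 24.0, 26.0, 30.0])
events = np.array([True, True, True, True, False, True])
rates = piecewise_hazard(times, events, cuts=np.array([20.0, 25.0]))
```

Fitting such interval-specific rates separately by covariate group (rather than multiplying one baseline by a constant) is what makes the model's hazards non-proportional.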
Estimating proportions of materials using mixture models
NASA Technical Reports Server (NTRS)
Heydorn, R. P.; Basu, R.
1983-01-01
An approach to proportion estimation based on the notion of a mixture model is addressed, covering appropriate parametric forms for a mixture model that appears to fit observed remotely sensed data, methods for estimating the parameters in these models, methods for determining proportions through labelling based on the mixture model, and methods that use the mixture model estimates as auxiliary variable values in some proportion estimation schemes.
PSHREG: a SAS macro for proportional and nonproportional subdistribution hazards regression.
Kohl, Maria; Plischke, Max; Leffondré, Karen; Heinze, Georg
2015-02-01
We present a new SAS macro %pshreg that can be used to fit a proportional subdistribution hazards model for survival data subject to competing risks. Our macro first modifies the input data set appropriately and then applies SAS's standard Cox regression procedure, PROC PHREG, using weights and counting-process style of specifying survival times to the modified data set. The modified data set can also be used to estimate cumulative incidence curves for the event of interest. The application of PROC PHREG has several advantages, e.g., it directly enables the user to apply the Firth correction, which has been proposed as a solution to the problem of undefined (infinite) maximum likelihood estimates in Cox regression, frequently encountered in small sample analyses. Deviation from proportional subdistribution hazards can be detected by both inspecting Schoenfeld-type residuals and testing correlation of these residuals with time, or by including interactions of covariates with functions of time. We illustrate application of these extended methods for competing risk regression using our macro, which is freely available at: http://cemsiis.meduniwien.ac.at/en/kb/science-research/software/statistical-software/pshreg, by means of analysis of a real chronic kidney disease study. We discuss differences in features and capabilities of %pshreg and the recent (January 2014) SAS PROC PHREG implementation of proportional subdistribution hazards modelling.
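Alongside the regression itself, the macro estimates cumulative incidence curves for the event of interest. The standard nonparametric estimator (Aalen-Johansen) accumulates, at each event time, the overall survival just before that time multiplied by the cause-specific event probability among those at risk. This is a generic Python sketch assuming distinct event times, not a port of the SAS macro.

```python
import numpy as np

def cuminc(times, status, cause=1):
    """Nonparametric cumulative incidence (Aalen-Johansen) for one cause.

    status: 0 = censored, 1, 2, ... = competing event types.
    Assumes distinct times. Returns (event times, CIF values) for `cause`.
    """
    order = np.argsort(times)
    t, s = times[order], status[order]
    n = len(t)
    surv = 1.0          # overall survival just before the current time
    cif = 0.0
    out_t, out_c = [], []
    for i, (ti, si) in enumerate(zip(t, s)):
        at_risk = n - i
        if si == cause:
            cif += surv / at_risk   # increment: S(t-) * d_cause / n_at_risk
            out_t.append(ti)
            out_c.append(cif)
        if si != 0:
            surv *= 1.0 - 1.0 / at_risk  # any event reduces overall survival
    return np.array(out_t), np.array(out_c)
```

Unlike one minus a cause-specific Kaplan-Meier curve, this estimator never overstates the incidence, because competing events correctly remove subjects from the pool that can still experience the event of interest.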
Boher, Jean-Marie; Filleron, Thomas; Giorgi, Roch; Kramar, Andrew; Cook, Richard J
2017-01-30
Recently, goodness-of-fit tests have been proposed for checking the proportional subdistribution hazards assumption in the Fine and Gray regression model. Zhou, Fine, and Laird proposed weighted Schoenfeld-type residuals tests derived under an assumed model with a specific form of time-varying regression coefficients. Li, Scheike, and Zhang proposed an omnibus test based on cumulative sums of Schoenfeld-type residuals. In this article, we extend the class of weighted residuals tests by allowing random weights of Schoenfeld-type residuals at ordered event times. In particular, it is demonstrated that weighted residuals tests using monotone weight functions of time are consistent against monotone departures from the proportional subdistribution hazards assumption. Extensive Monte Carlo studies were conducted to evaluate the finite-sample performance of recent goodness-of-fit tests. Results from simulation studies show that weighted residuals tests using monotone random weight functions commonly used in non-proportional hazards regression settings tend to be more powerful for detecting monotone departures than other goodness-of-fit tests assuming no specific time-varying effect or misspecified time-varying effects. Two examples using real data are provided for illustration. Copyright © 2016 John Wiley & Sons, Ltd.
Roldan-Valadez, Ernesto; Rios, Camilo; Motola-Kuba, Daniel; Matus-Santos, Juan; Villa, Antonio R; Moreno-Jimenez, Sergio
2016-11-01
A long-standing concern has prevailed regarding the identification of predictive biomarkers for high-grade gliomas (HGGs) using MRI. However, a consensus on which imaging parameters assemble a significant survival model is still missing in the literature; we investigated the significant positive or negative contribution of several MR biomarkers to the prognosis of these tumours. A retrospective cohort of supratentorial HGGs [11 glioblastoma multiforme (GBM) and 17 anaplastic astrocytomas] included 28 patients (9 females and 19 males, with a mean age of 50.4 years, standard deviation 16.28 years, range 13-85 years). Oedema and viable tumour measurements were acquired using regions of interest in T1 weighted, T2 weighted, fluid-attenuated inversion recovery, apparent diffusion coefficient (ADC) and MR spectroscopy (MRS) images. We calculated Kaplan-Meier curves and obtained Cox proportional hazards. During the follow-up period (3-98 months), 17 deaths were recorded. The median survival time was 1.73 years (range 0.287-8.947 years). Only 3 out of 20 covariates (the choline-to-N-acetyl aspartate and lipids-lactate-to-creatine ratios, and age) showed significance in explaining the variability in the survival hazards model; score test: χ²(3) = 9.098, p = 0.028. MRS metabolites outperformed volumetric parameters of peritumoral oedema and viable tumour, as well as tumour region ADC measurements. Specific MRS ratios (Cho/Naa, L-L/Cr) might be considered in a regular follow-up for these tumours. Advances in knowledge: the Cho/Naa ratio is the strongest survival predictor, with a log-hazard function of 2.672 in GBM. Low levels of the lipids-lactate/Cr ratio represent up to a 41.6% reduction in the risk of death in GBM.
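The Kaplan-Meier curves referenced above follow a one-line recurrence: at each event time, survival is multiplied by (1 - 1/number at risk). A minimal sketch with hypothetical follow-up times (loosely echoing the 1.73-year median, not the study's data):

```python
import numpy as np

def kaplan_meier(times, observed):
    """Kaplan-Meier survival estimate at each distinct event time.

    observed: True for a death, False for a censored follow-up.
    Assumes distinct times for simplicity.
    """
    order = np.argsort(times)
    t, d = times[order], observed[order]
    n = len(t)
    surv, out_t, out_s = 1.0, [], []
    for i in range(n):
        if d[i]:
            surv *= 1.0 - 1.0 / (n - i)  # (n - i) subjects still at risk
            out_t.append(t[i])
            out_s.append(surv)
    return np.array(out_t), np.array(out_s)

# Hypothetical follow-up times in years.
times = np.array([0.5, 1.0, 1.7, 2.5, 4.0, 9.0])
dead = np.array([True, True, True, False, True, False])
t_km, s_km = kaplan_meier(times, dead)
```

Censored subjects contribute to the risk sets up to their censoring time but trigger no drop in the curve, which is what distinguishes this estimator from a naive empirical survival fraction.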
Crossing Hazard Functions in Common Survival Models.
Zhang, Jiajia; Peng, Yingwei
2009-10-15
Crossing hazard functions have extensive applications in modeling survival data. However, existing studies in the literature mainly focus on comparing crossed hazard functions and estimating the time at which the hazard functions cross, and there is little theoretical work on conditions under which hazard functions from a model will have a crossing. In this paper, we investigate crossing status of hazard functions from the proportional hazards (PH) model, the accelerated hazard (AH) model, and the accelerated failure time (AFT) model. We provide and prove conditions under which the hazard functions from the AH and the AFT models have no crossings or a single crossing. A few examples are also provided to demonstrate how the conditions can be used to determine crossing status of hazard functions from the three models.
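For Weibull hazards the single-crossing case can be worked out in closed form: two hazards with different shape parameters cross exactly once at the time where they are equal. The parameter values below are illustrative, not drawn from the paper.

```python
import numpy as np

def weibull_hazard(t, lam, rho):
    """Weibull hazard h(t) = lam * rho * t**(rho - 1)."""
    return lam * rho * t ** (rho - 1.0)

def crossing_time(lam1, rho1, lam2, rho2):
    """Time at which two Weibull hazards cross (requires rho1 != rho2).

    Solves lam1*rho1*t**(rho1-1) = lam2*rho2*t**(rho2-1) for t > 0."""
    return ((lam2 * rho2) / (lam1 * rho1)) ** (1.0 / (rho1 - rho2))

# A decreasing hazard (rho < 1) versus an increasing one (rho > 1):
# exactly one crossing, consistent with a single-crossing condition.
t_star = crossing_time(1.0, 0.8, 0.5, 1.5)
```

With equal shapes the ratio of the two hazards is constant (the proportional hazards case), so no crossing can occur; a sign change in rho1 - rho2 is what forces the single crossing.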
Vadeby, Anna; Forsman, Asa; Kecklund, Göran; Akerstedt, Torbjörn; Sandberg, David; Anund, Anna
2010-05-01
Cox proportional hazards models were used to study relationships between the event that a driver leaves the lane because of sleepiness and different indicators of sleepiness. To elucidate the indicators' performance, five different models fitted with Cox proportional hazards regression to a data set from a simulator study were used. The models consisted of physiological indicators and indicators from driving data, both stand-alone and in combination. The models were compared on two different data sets by means of sensitivity and specificity, and their ability to predict lane departure was studied. In conclusion, a combination of blink indicators based on the ratio between blink amplitude and peak closing velocity of the eyelid (A/PCV) (or blink amplitude and peak opening velocity of the eyelid, A/POV), standard deviation of lateral position, and standard deviation of lateral acceleration relative to the road (ddy) was the most sensitive approach, with sensitivity 0.80. This is also supported by the fact that driving data only show the impairment of driving performance, while blink data have a closer relation to sleepiness. Thus, an effective sleepiness warning system may be based on a combination of lane variability measures and variables related to eye movements (particularly slow eye closure) in order to have both high sensitivity (many correct warnings) and acceptable specificity (few false alarms). Copyright (c) 2009 Elsevier Ltd. All rights reserved.
Royston, Patrick; Parmar, Mahesh K B
2014-08-07
Most randomized controlled trials with a time-to-event outcome are designed and analysed under the proportional hazards assumption, with a target hazard ratio for the treatment effect in mind. However, the hazards may be non-proportional. We address how to design a trial under such conditions, and how to analyse the results. We propose to extend the usual approach, a logrank test, to also include the Grambsch-Therneau test of proportional hazards. We test the resulting composite null hypothesis using a joint test for the hazard ratio and for time-dependent behaviour of the hazard ratio. We compute the power and sample size for the logrank test under proportional hazards, and from that we compute the power of the joint test. For the estimation of relevant quantities from the trial data, various models could be used; we advocate adopting a pre-specified flexible parametric survival model that supports time-dependent behaviour of the hazard ratio. We present the mathematics for calculating the power and sample size for the joint test. We illustrate the methodology in real data from two randomized trials, one in ovarian cancer and the other in treating cellulitis. We show selected estimates and their uncertainty derived from the advocated flexible parametric model. We demonstrate in a small simulation study that when a treatment effect either increases or decreases over time, the joint test can outperform the logrank test in the presence of both patterns of non-proportional hazards. Those designing and analysing trials in the era of non-proportional hazards need to acknowledge that a more complex type of treatment effect is becoming more common. Our method for the design of the trial retains the tools familiar in the standard methodology based on the logrank test, and extends it to incorporate a joint test of the null hypothesis with power against non-proportional hazards. For the analysis of trial data, we propose the use of a pre-specified flexible parametric model
Crager, Michael R; Tang, Gong
We propose a method for assessing an individual patient's risk of a future clinical event using clinical trial or cohort data and Cox proportional hazards regression, combining the information from several studies using meta-analysis techniques. The method combines patient-specific estimates of the log cumulative hazard across studies, weighting by the relative precision of the estimates, using either fixed- or random-effects meta-analysis calculations. Risk assessment can be done for any future patient using a few key summary statistics determined once and for all from each study. Generalizations of the method to logistic regression and linear models are immediate. We evaluate the methods using simulation studies and illustrate their application using real data.
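The core pooling step, combining per-study estimates weighted by their relative precision, is standard inverse-variance (fixed-effect) meta-analysis. A minimal sketch with hypothetical per-study log cumulative hazard estimates and standard errors (not from any real study):

```python
import numpy as np

def fixed_effect_meta(estimates, std_errors):
    """Inverse-variance (fixed-effect) pooling of per-study estimates."""
    estimates = np.asarray(estimates, dtype=float)
    w = 1.0 / np.asarray(std_errors, dtype=float) ** 2  # precision weights
    est = np.sum(w * estimates) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return est, se

# Hypothetical log cumulative hazard estimates for one patient profile
# from three studies, with their standard errors.
est, se = fixed_effect_meta([-1.2, -0.9, -1.1], [0.30, 0.25, 0.40])
```

The pooled standard error is smaller than any single study's, which is the precision gain the paper exploits; the random-effects version adds a between-study variance component to each weight.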
Estimating Regression Parameters in an Extended Proportional Odds Model
Chen, Ying Qing; Hu, Nan; Cheng, Su-Chun; Musoke, Philippa; Zhao, Lue Ping
2012-01-01
The proportional odds model may serve as a useful alternative to the Cox proportional hazards model to study association between covariates and their survival functions in medical studies. In this article, we study an extended proportional odds model that incorporates the so-called “external” time-varying covariates. In the extended model, regression parameters have a direct interpretation of comparing survival functions, without specifying the baseline survival odds function. Semiparametric and maximum likelihood estimation procedures are proposed to estimate the extended model. Our methods are demonstrated by Monte-Carlo simulations, and applied to a landmark randomized clinical trial of a short course Nevirapine (NVP) for mother-to-child transmission (MTCT) of human immunodeficiency virus type-1 (HIV-1). Additional application includes analysis of the well-known Veterans Administration (VA) Lung Cancer Trial. PMID:22904583
Progress in studying scintillator proportionality: Phenomenological model
Bizarri, Gregory; Cherepy, Nerine; Choong, Woon-Seng; Hull, Giulia; Moses, William; Payne, Stephen; Singh, Jai; Valentine, John; Vasilev, Andrey; Williams, Richard
2009-04-30
We present a model to describe the origin of the non-proportional dependence of scintillator light yield on the energy of an ionizing particle. The non-proportionality is discussed in terms of energy relaxation channels and their linear and non-linear dependences on the deposited energy. In this approach, the scintillation response is described as a function of the deposited energy and the kinetic rates of each relaxation channel. This mathematical framework allows both a qualitative interpretation and a quantitative fitting representation of the scintillation non-proportionality response as a function of the kinetic rates. The method was successfully applied to thallium-doped sodium iodide measured with SLYNCI, a new facility using the Compton coincidence technique. Finally, attention is given to the physical meaning of the dominant relaxation channels and to the potential causes of scintillation non-proportionality. We find that thallium-doped sodium iodide behaves as if non-proportionality is due to competition between radiative recombination and non-radiative Auger processes.
NASA CONNECT: Proportionality: Modeling the Future
NASA Technical Reports Server (NTRS)
2000-01-01
'Proportionality: Modeling the Future' is the sixth of seven programs in the 1999-2000 NASA CONNECT series. Produced by NASA Langley Research Center's Office of Education, NASA CONNECT is an award-winning series of instructional programs designed to enhance the teaching of math, science and technology concepts in grades 5-8. NASA CONNECT establishes the 'connection' between the mathematics, science, and technology concepts taught in the classroom and NASA research. Each program in the series supports the national mathematics, science, and technology standards; includes a resource-rich teacher guide; and uses a classroom experiment and web-based activity to complement and enhance the math, science, and technology concepts presented in the program. NASA CONNECT is FREE, and the programs in the series are in the public domain. Visit our web site and register: http://connect.larc.nasa.gov. In 'Proportionality: Modeling the Future', students examine how patterns, measurement, ratios, and proportions are used in the research, development, and production of airplanes.
Boron-10 Lined Proportional Counter Model Validation
Lintereur, Azaree T.; Siciliano, Edward R.; Kouzes, Richard T.
2012-06-30
The Department of Energy Office of Nuclear Safeguards (NA-241) is supporting the project “Coincidence Counting With Boron-Based Alternative Neutron Detection Technology” at Pacific Northwest National Laboratory (PNNL) for the development of an alternative neutron coincidence counter. The goal of this project is to design, build and demonstrate a boron-lined proportional tube-based alternative system in the configuration of a coincidence counter. This report discusses the validation studies performed to establish the degree of accuracy of the computer modeling methods currently used to simulate the response of boron-lined tubes. This is the precursor to developing models for the uranium neutron coincidence collar under Task 2 of this project.
Kang, Suhyun; Lu, Wenbin; Song, Rui
2017-08-08
In this paper, we propose a testing procedure for detecting and estimating the subgroup with an enhanced treatment effect in survival data analysis. Here, we consider a new proportional hazards model that includes a nonparametric component for the covariate effect in the control group and a subgroup-treatment-interaction effect defined by a change plane. We develop a score-type test for detecting the existence of the subgroup, which is doubly robust against misspecification of the baseline effect model or the propensity score but not both under mild assumptions for censoring. When the null hypothesis of no subgroup is rejected, the change-plane parameters that define the subgroup can be estimated on the basis of the supremum of the normalized score statistic. The asymptotic distributions of the proposed test statistic under the null and local alternative hypotheses are established. On the basis of established asymptotic distributions, we further propose a sample size calculation formula for detecting a given subgroup effect and derive a numerical algorithm for implementing the sample size calculation in clinical trial designs. The performance of the proposed approach is evaluated by simulation studies. An application to data from an AIDS clinical trial is also given for illustration. Copyright © 2017 John Wiley & Sons, Ltd.
Identifying and modeling safety hazards
DANIELS,JESSE; BAHILL,TERRY; WERNER,PAUL W.
2000-03-29
The hazard model described in this paper is designed to accept data over the Internet from distributed databases. A hazard object template is used to ensure that all necessary descriptors are collected for each object. Three methods for combining the data are compared and contrasted. Three methods are used for handling the three types of interactions between the hazard objects.
Soh, Chang-Heok; Harrington, David P; Zaslavsky, Alan M
2008-03-01
When variable selection with stepwise regression and model fitting are conducted on the same data set, competition for inclusion in the model induces a selection bias in coefficient estimators away from zero. In proportional hazards regression with right-censored data, selection bias inflates the absolute values of the parameter estimates of selected variables, while the omission of other variables may shrink coefficients toward zero. This paper explores the extent of the bias in parameter estimates from stepwise proportional hazards regression and proposes a bootstrap method, similar to those proposed by Miller (Subset Selection in Regression, 2nd edn. Chapman & Hall/CRC, 2002) for linear regression, to correct for selection bias. We also use bootstrap methods to estimate the standard error of the adjusted estimators. Simulation results show that substantial biases could be present in uncorrected stepwise estimators and, for binary covariates, could exceed 250% of the true parameter value. The simulations also show that the conditional mean of the proposed bootstrap bias-corrected parameter estimator, given that a variable is selected, is moved closer to the unconditional mean of the standard partial likelihood estimator in the chosen model, and to the population value of the parameter. We also explore the effect of the adjustment on estimates of log relative risk, given the values of the covariates in a selected model. The proposed method is illustrated with data sets in primary biliary cirrhosis and in multiple myeloma from the Eastern Cooperative Oncology Group.
Mathematically modelling proportions of Japanese populations by industry
NASA Astrophysics Data System (ADS)
Hirata, Yoshito
2016-10-01
I propose a mathematical model for temporal changes of proportions for industrial sectors. I prove that the model keeps the proportions for the primary, the secondary, and the tertiary sectors between 0 and 100% and preserves their total as 100%. The model fits the Japanese historical data between 1950 and 2005 for the population proportions by industry very well. The model also predicts that the proportion for the secondary industry becomes negligible, falling below 1% around 2080.
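The abstract does not reproduce the model's equations, but the two properties it proves (each proportion stays strictly between 0 and 100% and the three always total 100%) can be guaranteed by construction: evolve unconstrained coordinates and map them to shares with a softmax. A hypothetical Python sketch, not the paper's fitted dynamics:

```python
import math

def softmax_shares(z):
    """Map unconstrained coordinates z to proportions in (0, 100) that sum to 100."""
    m = max(z)
    w = [math.exp(v - m) for v in z]
    s = sum(w)
    return [100.0 * v / s for v in w]

def step(z, drift):
    """Advance the unconstrained state by a fixed drift (hypothetical dynamics,
    not the paper's fitted model)."""
    return [zi + di for zi, di in zip(z, drift)]

# Primary, secondary, tertiary sectors: any trajectory of z yields valid shares.
z = [0.0, 0.0, 0.0]
for _ in range(10):
    z = step(z, [-0.3, -0.1, 0.2])  # tertiary grows, primary shrinks
p = softmax_shares(z)
```

Whatever drift is chosen, the output remains a valid composition, which is the point of a constraint-preserving construction.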
Villegas, Rodrigo; Julià, Olga; Ocaña, Jordi
2013-07-24
In longitudinal studies where subjects experience recurrent incidents over a period of time, such as respiratory infections, fever or diarrhea, statistical methods are required to take into account the within-subject correlation. For repeated events data with censored failure, the independent increment (AG), marginal (WLW) and conditional (PWP) models are three multiple failure models that generalize Cox's proportional hazards model. In this paper, we review the efficiency, accuracy and robustness of all three models under simulated scenarios with varying degrees of within-subject correlation, censoring levels, maximum number of possible recurrences and sample size. We also study the methods' performance on a real dataset from a cohort study with bronchial obstruction. We find substantial differences between methods, and no single method is optimal. AG and PWP seem to be preferable to WLW for low correlation levels, but the situation reverses for high correlations. All methods are stable in the presence of censoring, worsen with increasing recurrence levels, and share a bias problem which, among other consequences, makes asymptotic normal confidence intervals not fully reliable, although they are well developed theoretically.
Two models for evaluating landslide hazards
Davis, J.C.; Chung, C.-J.; Ohlmacher, G.C.
2006-01-01
Two alternative procedures for estimating landslide hazards were evaluated using data on topographic digital elevation models (DEMs) and bedrock lithologies in an area adjacent to the Missouri River in Atchison County, Kansas, USA. The two procedures are based on the likelihood ratio model but utilize different assumptions. The empirical likelihood ratio model is based on non-parametric empirical univariate frequency distribution functions under an assumption of conditional independence while the multivariate logistic discriminant model assumes that likelihood ratios can be expressed in terms of logistic functions. The relative hazards of occurrence of landslides were estimated by an empirical likelihood ratio model and by multivariate logistic discriminant analysis. Predictor variables consisted of grids containing topographic elevations, slope angles, and slope aspects calculated from a 30-m DEM. An integer grid of coded bedrock lithologies taken from digitized geologic maps was also used as a predictor variable. Both statistical models yield relative estimates in the form of the proportion of total map area predicted to already contain or to be the site of future landslides. The stabilities of estimates were checked by cross-validation of results from random subsamples, using each of the two procedures. Cell-by-cell comparisons of hazard maps made by the two models show that the two sets of estimates are virtually identical. This suggests that the empirical likelihood ratio and the logistic discriminant analysis models are robust with respect to the conditional independent assumption and the logistic function assumption, respectively, and that either model can be used successfully to evaluate landslide hazards. ?? 2006.
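The conditional-independence assumption behind the empirical likelihood ratio model has a concrete reading: the combined ratio for a map cell is the product of one univariate ratio per predictor. A toy Python sketch with invented class frequencies (not the Atchison County data):

```python
def likelihood_ratio(cell, freq_slide, freq_stable):
    """Combine per-variable likelihood ratios under conditional independence.

    cell: dict mapping variable name -> observed class at a map cell.
    freq_slide / freq_stable: per-variable class frequencies estimated from
    cells with / without mapped landslides (hypothetical numbers below).
    """
    lr = 1.0
    for var, cls in cell.items():
        lr *= freq_slide[var][cls] / freq_stable[var][cls]
    return lr

# Hypothetical class frequencies for slope angle and bedrock lithology.
freq_slide = {"slope": {"steep": 0.7, "gentle": 0.3},
              "rock": {"shale": 0.6, "limestone": 0.4}}
freq_stable = {"slope": {"steep": 0.2, "gentle": 0.8},
               "rock": {"shale": 0.3, "limestone": 0.7}}

steep_shale = likelihood_ratio({"slope": "steep", "rock": "shale"},
                               freq_slide, freq_stable)
gentle_lime = likelihood_ratio({"slope": "gentle", "rock": "limestone"},
                               freq_slide, freq_stable)
```

Cells with ratios above 1 are flagged as relatively hazard-prone; the logistic discriminant alternative replaces the product of empirical ratios with a fitted logistic function.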
Computer Model Locates Environmental Hazards
NASA Technical Reports Server (NTRS)
2008-01-01
Catherine Huybrechts Burton founded San Francisco-based Endpoint Environmental (2E) LLC in 2005 while she was a student intern and project manager at Ames Research Center with NASA's DEVELOP program. The 2E team created the Tire Identification from Reflectance model, which algorithmically processes satellite images using turnkey technology to retain only the darkest parts of an image. This model allows 2E to locate piles of rubber tires, which often are stockpiled illegally and cause hazardous environmental conditions and fires.
Welfare Returns and Temporary Time Limits: A Proportional Hazard Model
ERIC Educational Resources Information Center
Albert, Vicky N.; King, William C.; Iaci, Ross
2007-01-01
This study analyzes welfare returns for families who leave welfare for a "sit-out" period of 12 months in response to a temporary time limit requirement in Nevada. Findings reveal that relatively few families return for cash assistance after sitting out and that the majority who do return soon after their sit-out period is complete.…
[Regression models for variables expressed as a continuous proportion].
Salinas-Rodríguez, Aarón; Pérez-Núñez, Ricardo; Avila-Burgos, Leticia
2006-01-01
To describe some of the statistical alternatives available for studying continuous proportions and to compare them in order to show their advantages and disadvantages by means of their application in a practical example of the Public Health field. From the National Reproductive Health Survey performed in 2003, the proportion of individual coverage in the family planning program--proposed in one study carried out in the National Institute of Public Health in Cuernavaca, Morelos, Mexico (2005)--was modeled using the Normal, Gamma, Beta and quasi-likelihood regression models. The Akaike Information Criterion (AIC) proposed by McQuarrie and Tsai was used to define the best model. Then, using a simulation (Monte Carlo/Markov chain approach), a variable with a Beta distribution was generated to evaluate the behavior of the 4 models while varying the sample size from 100 to 18,000 observations. Results showed that the best statistical option for the analysis of continuous proportions was the Beta regression model, since its assumptions are easily accomplished and because it had the lowest AIC value. Simulation evidenced that while the sample size increases the Gamma, and even more so the quasi-likelihood, models come significantly close to the Beta regression model. The use of parametric Beta regression is highly recommended to model continuous proportions and the normal model should be avoided. If the sample size is large enough, the use of the quasi-likelihood model represents a good alternative.
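The Beta regression comparison rests on the closed-form Beta log-density in its mean/precision parameterization, which feeds directly into AIC. A minimal sketch with hypothetical coverage proportions and a fixed precision (no actual model fitting):

```python
import math

def beta_loglik(y, mu, phi):
    """Log-density of Beta(mu*phi, (1-mu)*phi), the mean/precision
    parameterization commonly used in Beta regression; y must lie in (0, 1)."""
    a, b = mu * phi, (1.0 - mu) * phi
    return (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
            + (a - 1.0) * math.log(y) + (b - 1.0) * math.log(1.0 - y))

def aic(loglik_total, n_params):
    """Akaike Information Criterion: smaller is better."""
    return 2.0 * n_params - 2.0 * loglik_total

# Hypothetical coverage proportions; compare two candidate means.
ys = [0.55, 0.60, 0.62, 0.58, 0.65]
ll_good = sum(beta_loglik(y, mu=0.60, phi=50.0) for y in ys)
ll_bad = sum(beta_loglik(y, mu=0.20, phi=50.0) for y in ys)
```

A fitted Beta regression would link mu to covariates and maximize this likelihood; the sketch only shows why a mean near the data dominates the AIC comparison.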
Models of volcanic eruption hazards
Wohletz, K.H.
1992-01-01
Volcanic eruptions pose an ever present but poorly constrained hazard to life and property for geothermal installations in volcanic areas. Because eruptions occur sporadically and may limit field access, quantitative and systematic field studies of eruptions are difficult to complete. Circumventing this difficulty, laboratory models and numerical simulations are pivotal in building our understanding of eruptions. For example, the results of fuel-coolant interaction experiments show that magma-water interaction controls many eruption styles. Applying these results, increasing numbers of field studies now document and interpret the role of external water eruptions. Similarly, numerical simulations solve the fundamental physics of high-speed fluid flow and give quantitative predictions that elucidate the complexities of pyroclastic flows and surges. A primary goal of these models is to guide geologists in searching for critical field relationships and making their interpretations. Coupled with field work, modeling is beginning to allow more quantitative and predictive volcanic hazard assessments.
NASA Technical Reports Server (NTRS)
Kattan, Michael W.; Hess, Kenneth R.
1998-01-01
New computationally intensive tools for medical survival analyses include recursive partitioning (also called CART) and artificial neural networks. A challenge that remains is to better understand the behavior of these techniques in an effort to know when they will be effective tools. Theoretically they may overcome limitations of the traditional multivariable survival technique, the Cox proportional hazards regression model. Experiments were designed to test whether the new tools would, in practice, overcome these limitations. Two datasets in which theory suggests CART and the neural network should outperform the Cox model were selected. The first was a published leukemia dataset manipulated to have a strong interaction that CART should detect. The second was a published cirrhosis dataset with pronounced nonlinear effects that a neural network should fit. Repeated sampling of 50 training and testing subsets was applied to each technique. The concordance index C was calculated as a measure of predictive accuracy by each technique on the testing dataset. In the interaction dataset, CART outperformed Cox (P < 0.05) with a C improvement of 0.1 (95% CI, 0.08 to 0.12). In the nonlinear dataset, the neural network outperformed the Cox model (P < 0.05), but by a very slight amount (0.015). As predicted by theory, CART and the neural network were able to overcome limitations of the Cox model. Experiments like these are important to increase our understanding of when one of these new techniques will outperform the standard Cox model. Further research is necessary to predict which technique will do best a priori and to assess the magnitude of superiority.
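The concordance index C reported here is straightforward to compute from scratch: over all usable pairs, count how often the subject who fails earlier has the higher predicted risk, with ties scoring one half. A self-contained sketch on toy data (not the leukemia or cirrhosis datasets):

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C: among usable pairs, the fraction where the subject who
    fails earlier has the higher predicted risk (risk ties count 1/2)."""
    num, den = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is usable if i is observed to fail before j's time.
            if events[i] == 1 and times[i] < times[j]:
                den += 1
                if risk_scores[i] > risk_scores[j]:
                    num += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    num += 0.5
    return num / den

# Toy data: higher risk should mean earlier failure; events[i] == 0 is censoring.
times = [2.0, 4.0, 6.0, 8.0]
events = [1, 1, 0, 1]
c = concordance_index(times, events, risk_scores=[0.9, 0.7, 0.2, 0.1])
```

A perfectly ranked model gives C = 1, random ranking gives about 0.5, and a perfectly reversed ranking gives 0.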
Modeling lahar behavior and hazards
Manville, Vernon; Major, Jon J.; Fagents, Sarah A.
2013-01-01
Lahars are highly mobile mixtures of water and sediment of volcanic origin that are capable of traveling tens to > 100 km at speeds exceeding tens of km hr-1. Such flows are among the most serious ground-based hazards at many volcanoes because of their sudden onset, rapid advance rates, long runout distances, high energy, ability to transport large volumes of material, and tendency to flow along existing river channels where populations and infrastructure are commonly concentrated. They can grow in volume and peak discharge through erosion and incorporation of external sediment and/or water, inundate broad areas, and leave deposits many meters thick. Furthermore, lahars can recur for many years to decades after an initial volcanic eruption, as fresh pyroclastic material is eroded and redeposited during rainfall events, resulting in a spatially and temporally evolving hazard. Improving understanding of the behavior of these complex, gravitationally driven, multi-phase flows is key to mitigating the threat to communities at lahar-prone volcanoes. However, their complexity and evolving nature pose significant challenges to developing the models of flow behavior required for delineating their hazards and hazard zones.
Royston, Patrick; Parmar, Mahesh K B
2016-02-11
Most randomized controlled trials with a time-to-event outcome are designed assuming proportional hazards (PH) of the treatment effect. The sample size calculation is based on a logrank test. However, non-proportional hazards are increasingly common. At analysis, the estimated hazards ratio with a confidence interval is usually presented. The estimate is often obtained from a Cox PH model with treatment as a covariate. If non-proportional hazards are present, the logrank and equivalent Cox tests may lose power. To safeguard power, we previously suggested a 'joint test' combining the Cox test with a test of non-proportional hazards. Unfortunately, a larger sample size is needed to preserve power under PH. Here, we describe a novel test that unites the Cox test with a permutation test based on restricted mean survival time. We propose a combined hypothesis test based on a permutation test of the difference in restricted mean survival time across time. The test involves the minimum of the Cox and permutation test P-values. We approximate its null distribution and correct it for correlation between the two P-values. Using extensive simulations, we assess the type 1 error and power of the combined test under several scenarios and compare with other tests. We investigate powering a trial using the combined test. The type 1 error of the combined test is close to nominal. Power under proportional hazards is slightly lower than for the Cox test. Enhanced power is available when the treatment difference shows an 'early effect', an initial separation of survival curves which diminishes over time. The power is reduced under a 'late effect', when little or no difference in survival curves is seen for an initial period and then a late separation occurs. We propose a method of powering a trial using the combined test. The 'insurance premium' offered by the combined test to safeguard power under non-PH represents about a single-digit percentage increase in sample size.
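The permutation component of the combined test can be illustrated in simplified form: with no censoring, the restricted mean survival time up to tau is just the mean of min(T, tau), and the p-value comes from reshuffling arm labels. A hypothetical sketch (the actual proposal handles censoring and combines this with the Cox test P-value):

```python
import random

def rmst(times, tau):
    """Restricted mean survival time up to tau; with no censoring this is
    simply the mean of min(T, tau)."""
    return sum(min(t, tau) for t in times) / len(times)

def permutation_pvalue(arm_a, arm_b, tau, n_perm=2000, seed=1):
    """Two-sided permutation p-value for the RMST difference between arms."""
    rng = random.Random(seed)
    observed = abs(rmst(arm_a, tau) - rmst(arm_b, tau))
    pooled = arm_a + arm_b
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        a, b = pooled[:len(arm_a)], pooled[len(arm_a):]
        if abs(rmst(a, tau) - rmst(b, tau)) >= observed:
            hits += 1
    # Add-one correction keeps the p-value strictly positive.
    return (hits + 1) / (n_perm + 1)

# Hypothetical survival times (months), treatment clearly superior.
treat = [9.0, 10.0, 12.0, 14.0, 15.0, 16.0]
control = [2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
p = permutation_pvalue(treat, control, tau=12.0)
```

The combined test would then take the minimum of this P-value and the Cox test P-value, with a correction for their correlation.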
Modeling multivariate survival data by a semiparametric random effects proportional odds model.
Lam, K F; Lee, Y W; Leung, T L
2002-06-01
In this article, the focus is on the analysis of multivariate survival time data with various types of dependence structures. Examples of multivariate survival data include clustered data and repeated measurements from the same subject, such as the interrecurrence times of cancer tumors. A random effect semiparametric proportional odds model is proposed as an alternative to the proportional hazards model. The distribution of the random effects is assumed to be multivariate normal and the random effect is assumed to act additively to the baseline log-odds function. This class of models, which includes the usual shared random effects model, the additive variance components model, and the dynamic random effects model as special cases, is highly flexible and is capable of modeling a wide range of multivariate survival data. A unified estimation procedure is proposed to estimate the regression and dependence parameters simultaneously by means of a marginal-likelihood approach. Unlike the fully parametric case, the regression parameter estimate is not sensitive to the choice of correlation structure of the random effects. The marginal likelihood is approximated by the Monte Carlo method. Simulation studies are carried out to investigate the performance of the proposed method. The proposed method is applied to two well-known data sets, including clustered data and recurrent event times data.
Gao, Xiaoming; Schwartz, Todd A; Preisser, John S; Perin, Jamie
2017-10-01
A SAS macro, GEEORD, has been developed for the analysis of ordinal responses with repeated measures through a regression model that flexibly allows the proportional odds assumption to apply (or not) separately for each explanatory variable. Previously utilized in an analysis of a longitudinal orthognathic surgery clinical trial by Preisser et al. [1,2], the basis of GEEORD is the generalized estimating equations (GEE) method for cumulative logits models described by Lipsitz et al. [3]. The macro extends the capabilities for modeling correlated ordinal data of GEECAT, a SAS macro that allows the user to model correlated categorical response data [4]. The macro applies to independent ordinal responses as a special case. Examples are provided to demonstrate the convenient application of GEEORD to two different datasets. The macro's features are illustrated in fitting models to ordinal response variables in univariate and repeated measures settings; this includes the capacity to fit the non-proportional odds model, the partial proportional odds model, and the proportional odds model. The macro additionally provides relevant tests of the proportional odds assumption. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Taneja, Vidya S.
1996-01-01
In this paper we develop the mathematical theory of proportional and scale change models to perform reliability analysis. The results obtained will be applied for the Reaction Control System (RCS) thruster valves on an orbiter. With the advent of extended EVAs associated with PROX OPS (ISSA & MIR), and docking, the loss of a thruster valve now takes on an expanded safety significance. Previous studies assume a homogeneous population of components with each component having the same failure rate. However, as various components experience different stresses and are exposed to different environments, their failure rates change with time. In this paper we model the reliability of thruster valves by treating each valve as a censored repairable system. The model for each valve will take the form of a nonhomogeneous process with an intensity function that is either treated as a proportional hazard model or a scale change random effects hazard model. Each component has an associated z, an independent realization of the random variable Z from a distribution G(z). This unobserved quantity z can be used to describe heterogeneity systematically. For various models, methods for estimating the model parameters using censored data will be developed. Available field data (from previously flown flights) is from non-renewable systems. The estimated failure rate using such data will need to be modified for renewable systems such as thruster valves.
High-dimensional, massive sample-size Cox proportional hazards regression for survival analysis.
Mittal, Sushil; Madigan, David; Burd, Randall S; Suchard, Marc A
2014-04-01
Survival analysis endures as an old, yet active research field with applications that spread across many domains. Continuing improvements in data acquisition techniques pose constant challenges in applying existing survival analysis methods to these emerging data sets. In this paper, we present tools for fitting regularized Cox survival analysis models on high-dimensional, massive sample-size (HDMSS) data using a variant of the cyclic coordinate descent optimization technique tailored for the sparsity that HDMSS data often present. Experiments on two real data examples demonstrate that efficient analyses of HDMSS data using these tools result in improved predictive performance and calibration.
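Whatever the optimizer, the objective being maximized is the Cox partial log-likelihood. A single-covariate, no-ties sketch in plain Python (toy data; the paper's tools add regularization and sparsity-aware cyclic coordinate descent):

```python
import math

def cox_partial_loglik(beta, xs, times, events):
    """Cox partial log-likelihood for a single covariate (no tied event times).

    At each observed event, the contribution is the event subject's linear
    predictor minus the log of the sum of exp(linear predictor) over the
    risk set (everyone still under observation at that time)."""
    ll = 0.0
    n = len(times)
    for i in range(n):
        if events[i] != 1:
            continue
        risk = [math.exp(beta * xs[j]) for j in range(n) if times[j] >= times[i]]
        ll += beta * xs[i] - math.log(sum(risk))
    return ll

xs = [1.0, 1.0, 0.0, 0.0]      # e.g. exposed vs unexposed
times = [1.0, 2.0, 3.0, 4.0]
events = [1, 1, 1, 0]          # last subject censored
# Exposed subjects fail first, so the likelihood should prefer beta > 0.
ll_pos = cox_partial_loglik(1.0, xs, times, events)
ll_zero = cox_partial_loglik(0.0, xs, times, events)
```

Coordinate descent for regularized Cox regression repeatedly updates one coefficient at a time against this objective plus a penalty term.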
Lau, Bryan; Cole, Stephen R.; Gange, Stephen J.
2010-01-01
In the analysis of survival data, there are often competing events that preclude an event of interest from occurring. Regression analysis with competing risks is typically undertaken using a cause-specific proportional hazards model. However, modern alternative methods exist for the analysis of the subdistribution hazard with a corresponding subdistribution proportional hazards model. In this paper, we introduce a flexible parametric mixture model as a unifying method to obtain estimates of the cause-specific and subdistribution hazards and hazard ratio functions. We describe how these estimates can be summarized over time to give a single number that is comparable to the hazard ratio that is obtained from a corresponding cause-specific or subdistribution proportional hazards model. An application to the Women’s Interagency HIV Study is provided to investigate injection drug use and the time to either the initiation of effective antiretroviral therapy, or clinical disease progression as a competing event. PMID:21337360
Loeys, T; Goetghebeur, E
2003-03-01
Survival data from randomized trials are most often analyzed in a proportional hazards (PH) framework that follows the intention-to-treat (ITT) principle. When not all the patients on the experimental arm actually receive the assigned treatment, the ITT-estimator mixes its effect on treatment compliers with its absence of effect on noncompliers. The structural accelerated failure time (SAFT) models of Robins and Tsiatis are designed to consistently estimate causal effects on the treated, without direct assumptions about the compliance selection mechanism. The traditional PH-model, however, has not yet led to such causal interpretation. In this article, we examine a PH-model of treatment effect on the treated subgroup. While potential treatment compliance is unobserved in the control arm, we derive an estimating equation for the Compliers PROPortional Hazards Effect of Treatment (C-PROPHET). The jackknife is used for bias correction and variance estimation. The method is applied to data from a recently finished clinical trial in cancer patients with liver metastases.
A Class of Semiparametric Transformation Models for Survival Data with a Cured Proportion
Choi, Sangbum; Huang, Xuelin; Chen, Yi-Hau
2013-01-01
We propose a new class of semiparametric regression models based on a multiplicative frailty assumption with a discrete frailty, which may account for a cured subgroup in the population. The cure model framework is then recast as a problem with a transformation model. The proposed models can explain a broad range of nonproportional hazards structures along with a cured proportion. An efficient and simple algorithm based on the martingale process is developed to locate the nonparametric maximum likelihood estimator. Unlike existing expectation-maximization based methods, our approach directly maximizes a nonparametric likelihood function, and the calculation of consistent variance estimates is immediate. The proposed method is useful for resolving identifiability features embedded in semiparametric cure models. Simulation studies are presented to demonstrate the finite sample properties of the proposed method. A case study of stage III soft-tissue sarcoma is given as an illustration. PMID:23760878
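The cured-proportion idea is most transparent in the classical two-component mixture, where population survival plateaus at the cure probability. A toy sketch that swaps in an exponential latency distribution for the paper's semiparametric one:

```python
import math

def mixture_cure_survival(t, cure_prob, rate):
    """Population survival under a mixture cure model: a cured fraction never
    fails; the rest follow an exponential baseline (a hypothetical stand-in
    for the paper's semiparametric transformation model)."""
    return cure_prob + (1.0 - cure_prob) * math.exp(-rate * t)

# With 30% cured, survival plateaus near 0.3 instead of decaying to zero.
s_late = mixture_cure_survival(50.0, cure_prob=0.3, rate=0.2)
```

The plateau in long-term follow-up is the signature that motivates cure models over ordinary proportional hazards.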
Space Particle Hazard Measurement and Modeling
2016-09-01
AFRL-RV-PS-TR-2016-0120. Space Particle Hazard Measurement and Modeling. Adrian Wheelock. 01 September 2016. Final Report. Approved for public release; distribution is unlimited. Air Force Research Laboratory, Space Vehicles Directorate, 3550 Aberdeen Ave SE. Program Element Number: 62601F.
Modelling boron-lined proportional counter response to neutrons.
Shahri, A; Ghal-Eh, N; Etaati, G R
2013-09-01
A detailed Monte Carlo simulation of the response of a boron-lined proportional counter to a neutron source has been presented. MCNP4C results and experimental data for different source-moderator geometries have been given for comparison. The influence of different irradiation geometries and boron-lining thicknesses on the detector response has been studied.
Statistical modeling of landslide hazard using GIS
Peter V. Gorsevski; Randy B. Foltz; Paul E. Gessler; Terrance W. Cundy
2001-01-01
A model for spatial prediction of landslide hazard was applied to a watershed affected by landslide events that occurred during the winter of 1995-96, following heavy rains and snowmelt. Digital elevation data with 22.86 m x 22.86 m resolution were used for deriving topographic attributes used for modeling. The model is based on the combination of logistic regression...
The saltus model applied to proportional reasoning data.
Draney, Karen
2007-01-01
This chapter examines an application of the saltus model, a mixture model that was designed for the analysis of developmental data. Some background in the types of research for which such a model might be useful is discussed. The equations defining the model are given, as well as the model's relationship to the Rasch model and to other mixture models. An application of the saltus model to an example data set, collected using Noelting's orange juice mixtures tasks, is examined in detail, along with the control files necessary to run the software, and the output file it produced.
Induced Smoothing for the Semiparametric Accelerated Hazards Model
Li, Haifen; Zhang, Jiajia; Tang, Yincai
2012-01-01
Compared to the proportional hazards model and accelerated failure time model, the accelerated hazards model has a unique property in its application, in that it can allow gradual effects of the treatment. However, its application is still very limited, partly due to the complexity of existing semiparametric estimation methods. We propose a new semiparametric estimation method based on the induced smoothing and rank type estimates. The parameter estimates and their variances can be easily obtained from the smoothed estimating equation; thus it is easy to use in practice. Our numerical study shows that the new method is more efficient than the existing methods with respect to its variance estimation and coverage probability. The proposed method is employed to reanalyze a data set from a brain tumor treatment study. PMID:23049151
Examining Proportional Representation of Ethnic Groups within the SWPBIS Model
ERIC Educational Resources Information Center
Jewell, Kelly
2012-01-01
The quantitative study seeks to analyze whether the School-wide Positive Behavior Intervention and Support (SWPBIS) model reduces the likelihood that minority students will receive more individualized supports due to behavior problems. In theory, the SWPBIS model should reflect a 3-tier system with tier 1 representing approximately 80%, tier 2 representing…
Virtual Research Environments for Natural Hazard Modelling
NASA Astrophysics Data System (ADS)
Napier, Hazel; Aldridge, Tim
2017-04-01
The Natural Hazards Partnership (NHP) is a group of 17 collaborating public sector organisations providing a mechanism for co-ordinated advice to government and agencies responsible for civil contingency and emergency response during natural hazard events. The NHP has set up a Hazard Impact Model (HIM) group tasked with modelling the impact of a range of UK hazards with the aim of delivery of consistent hazard and impact information. The HIM group consists of 7 partners initially concentrating on modelling the socio-economic impact of 3 key hazards - surface water flooding, land instability and high winds. HIM group partners share scientific expertise and data within their specific areas of interest including hydrological modelling, meteorology, engineering geology, GIS, data delivery, and modelling of socio-economic impacts. Activity within the NHP relies on effective collaboration between partners distributed across the UK. The NHP are acting as a use case study for a new Virtual Research Environment (VRE) being developed by the EVER-EST project (European Virtual Environment for Research - Earth Science Themes: a solution). The VRE is allowing the NHP to explore novel ways of cooperation including improved capabilities for e-collaboration, e-research, automation of processes and e-learning. Collaboration tools are complemented by the adoption of Research Objects, semantically rich aggregations of resources enabling the creation of uniquely identified digital artefacts resulting in reusable science and research. Application of the Research Object concept to HIM development facilitates collaboration, by encapsulating scientific knowledge in a shareable format that can be easily shared and used by partners working on the same model but within their areas of expertise. This paper describes the application of the VRE to the NHP use case study. It outlines the challenges associated with distributed partnership working and how they are being addressed in the VRE.
Model building in nonproportional hazard regression.
Rodríguez-Girondo, Mar; Kneib, Thomas; Cadarso-Suárez, Carmen; Abu-Assi, Emad
2013-12-30
Recent developments of statistical methods allow for very flexible modeling of covariates affecting survival times via the hazard rate, including the inspection of possible time-dependent associations. Despite their immediate appeal in terms of flexibility, these models typically introduce additional difficulties when a subset of covariates and the corresponding modeling alternatives have to be chosen, that is, when building the most suitable model for given data. This is particularly true when potentially time-varying associations are allowed for. We propose to use a piecewise exponential representation of the original survival data to link hazard regression with estimation schemes based on the Poisson likelihood, making recent advances in model building for exponential family regression accessible also in the nonproportional hazard regression context. A two-stage stepwise selection approach, an approach based on doubly penalized likelihood, and a componentwise functional gradient descent approach are adapted to the piecewise exponential regression problem. These three techniques were compared via an intensive simulation study. An application to prognosis after discharge for patients who suffered a myocardial infarction supplements the simulation to demonstrate the pros and cons of the approaches in real data analyses.
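The piecewise exponential representation described above can be sketched in a few lines: each subject's follow-up time is split at a grid of cut points into interval-specific pseudo-observations carrying an exposure time and an event indicator, after which a Poisson model with a log-exposure offset reproduces the piecewise constant hazard likelihood. A minimal illustration of the data-expansion step (function and variable names are our own, not the authors'):

```python
# Sketch of the piecewise exponential data expansion. Each subject's
# (follow-up time, event indicator) record is split at interval cut points
# into pseudo-observations carrying the exposure time in that interval and
# an event indicator, so a Poisson model with a log(exposure) offset
# recovers the piecewise constant hazard likelihood.

def split_survival_record(time, event, cuts):
    """Split one (follow-up time, event indicator) record at `cuts`.

    Returns a list of (interval_index, exposure, event_in_interval).
    """
    rows = []
    bounds = [0.0] + sorted(cuts) + [float("inf")]
    for j in range(len(bounds) - 1):
        lo, hi = bounds[j], bounds[j + 1]
        if time <= lo:
            break
        exposure = min(time, hi) - lo          # person-time spent in interval j
        died_here = 1 if (event and time <= hi) else 0
        rows.append((j, exposure, died_here))
    return rows
```

The expanded rows can then be fed to any Poisson regression routine, which is what makes exponential-family model-building machinery available in the hazard setting.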
Yan, Ying; Yi, Grace Y
2016-07-01
Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and on the hazard function. New insights into measurement error effects are revealed, as opposed to the well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods.
Experimental Concepts for Testing Seismic Hazard Models
NASA Astrophysics Data System (ADS)
Marzocchi, W.; Jordan, T. H.
2015-12-01
Seismic hazard analysis is the primary interface through which useful information about earthquake rupture and wave propagation is delivered to society. To account for the randomness (aleatory variability) and limited knowledge (epistemic uncertainty) of these natural processes, seismologists must formulate and test hazard models using the concepts of probability. In this presentation, we will address the scientific objections that have been raised over the years against probabilistic seismic hazard analysis (PSHA). Owing to the paucity of observations, we must rely on expert opinion to quantify the epistemic uncertainties of PSHA models (e.g., in the weighting of individual models from logic-tree ensembles of plausible models). The main theoretical issue is a frequentist critique: subjectivity is immeasurable; ergo, PSHA models cannot be objectively tested against data; ergo, they are fundamentally unscientific. We have argued (PNAS, 111, 11973-11978) that the Bayesian subjectivity required for casting epistemic uncertainties can be bridged with the frequentist objectivity needed for pure significance testing through "experimental concepts." An experimental concept specifies collections of data, observed and not yet observed, that are judged to be exchangeable (i.e., with a joint distribution independent of the data ordering) when conditioned on a set of explanatory variables. We illustrate, through concrete examples, experimental concepts useful in the testing of PSHA models for ontological errors in the presence of aleatory variability and epistemic uncertainty. In particular, we describe experimental concepts that lead to exchangeable binary sequences that are statistically independent but not identically distributed, showing how the Bayesian concept of exchangeability generalizes the frequentist concept of experimental repeatability. We also address the issue of testing PSHA models using spatially correlated data.
Madadizadeh, Farzan; Ghanbarnejad, Amin; Ghavami, Vahid; Bandamiri, Mohammad Zare; Mohammadianpanah, Mohammad
2017-01-01
Introduction: Colorectal cancer (CRC) is a commonly fatal cancer that ranks third worldwide and third and fifth among Iranian women and men, respectively. There are several methods for analyzing time-to-event data. Additive hazards regression models are preferable to the popular Cox proportional hazards model if the absolute hazard (risk) change, rather than the hazard ratio, is of primary concern, or if a proportionality assumption is not made. Methods: This study used data gathered from medical records of 561 colorectal cancer patients who were admitted to Namazi Hospital, Shiraz, Iran, during 2005 to 2010 and followed until December 2015. The nonparametric Aalen additive hazards model, the semiparametric Lin and Ying additive hazards model and the Cox proportional hazards model were applied for data analysis. The proportionality assumption for the Cox model was evaluated with a test based on the Schoenfeld residuals, and goodness of fit of the additive models was assessed with Cox-Snell residual plots. Analyses were performed with SAS 9.2 and R 3.2 software. Results: The median follow-up time was 49 months. The five-year survival rate and the mean survival time after cancer diagnosis were 59.6% and 68.1±1.4 months, respectively. Multivariate analyses using the Lin and Ying additive model and the Cox proportional hazards model indicated that age at diagnosis, site of tumor, stage, proportion of positive lymph nodes, lymphovascular invasion and type of treatment were factors affecting survival of the CRC patients. Conclusion: Additive models are suitable alternatives to the Cox proportional hazards model if there is interest in evaluating absolute hazard change, or if no proportionality assumption is made. PMID:28547944
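The distinction the abstract draws between the two model classes is easiest to see side by side: the Cox model acts multiplicatively on the baseline hazard (effects are hazard ratios), while the Lin and Ying additive model shifts it by an absolute amount (effects are hazard differences). A toy sketch with made-up coefficients:

```python
import math

# Illustrative contrast (all numbers are made up): the Cox model scales a
# baseline hazard, while the Lin-Ying additive model shifts it.

def cox_hazard(h0, x, beta):
    # h(t|x) = h0(t) * exp(beta * x): the covariate effect is a hazard ratio.
    return h0 * math.exp(beta * x)

def lin_ying_hazard(h0, x, beta):
    # h(t|x) = h0(t) + beta * x: the covariate effect is an absolute
    # hazard difference, the quantity of primary interest in the abstract.
    return h0 + beta * x
```

Under the Cox form the covariate effect is constant on the ratio scale; under the additive form it is constant on the difference scale, which is why the two models answer different clinical questions.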
Ni, Ai; Cai, Jianwen
2017-07-28
Case-cohort designs are commonly used in large epidemiological studies to reduce the cost associated with covariate measurement. In many such studies the number of covariates is very large, so an efficient variable selection method is needed for case-cohort studies where the covariates are only observed in a subset of the sample. The current literature on this topic has focused on the proportional hazards model. However, in many studies the additive hazards model is preferred over the proportional hazards model, either because the proportional hazards assumption is violated or because the additive hazards model provides more relevant information for the research question. Motivated by one such study, the Atherosclerosis Risk in Communities (ARIC) study, we investigate the properties of a regularized variable selection procedure in stratified case-cohort design under an additive hazards model with a diverging number of parameters. We establish the consistency and asymptotic normality of the penalized estimator and prove its oracle property. Simulation studies are conducted to assess the finite sample performance of the proposed method with a modified cross-validation tuning parameter selection method. We apply the variable selection procedure to the ARIC study to demonstrate its practical use.
Application of a hazard-based visual predictive check to evaluate parametric hazard models.
Huh, Yeamin; Hutmacher, Matthew M
2016-02-01
Parametric models used in time-to-event analyses are typically evaluated by survival-based visual predictive checks (VPC): Kaplan-Meier survival curves for the observed data are compared with those estimated using model-simulated data. Because the derivative of the log of the survival curve is related to the hazard, the typical quantity modeled in parametric analysis, isolation, interpretation and correction of deficiencies in the hazard model by inspection of survival-based VPCs is indirect and thus more difficult. The purpose of this study is to assess the performance of nonparametric estimators of hazard functions to evaluate their viability as VPC diagnostics. Histogram-based and kernel-smoothing estimators were evaluated in terms of bias in estimating the hazard for Weibull and bathtub-shaped hazard scenarios. After the evaluation of bias, these nonparametric estimators were assessed as a method for VPC evaluation of the hazard model. The results showed that the nonparametric hazard estimators performed reasonably at the sample sizes studied, with greater bias near the boundaries (time equal to 0 and the last observation), as expected. Flexible bandwidth and boundary correction methods reduced these biases. All the nonparametric estimators indicated a misfit of the Weibull model when the true hazard was bathtub-shaped. Overall, hazard-based VPC plots enabled more direct interpretation of the VPC results compared to survival-based VPC plots.
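A histogram-type hazard estimator of the kind evaluated here divides the number of events in each time bin by the person-time accrued in that bin. A minimal sketch (our own illustration, not the authors' implementation):

```python
def histogram_hazard(times, events, bin_edges):
    """Histogram-type hazard estimate: events per unit person-time per bin.

    `times` are follow-up times, `events` the event indicators; returns one
    hazard value per bin (None where no person-time was accrued).
    """
    hazards = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        # Person-time each subject contributes while still under observation.
        person_time = sum(max(0.0, min(t, hi) - lo) for t in times)
        n_events = sum(1 for t, e in zip(times, events) if e and lo < t <= hi)
        hazards.append(n_events / person_time if person_time > 0 else None)
    return hazards
```

Plotting these bin-wise estimates for observed versus model-simulated data is the essence of a hazard-based VPC; the boundary bias the abstract mentions shows up in the first and last bins, where exposure is partial.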
Flexible parametric modelling of cause-specific hazards to estimate cumulative incidence functions
2013-01-01
Background Competing risks are a common occurrence in survival analysis. They arise when a patient is at risk of more than one mutually exclusive event, such as death from different causes, and the occurrence of one of these may prevent any other event from ever happening. Methods There are two main approaches to modelling competing risks: the first is to model the cause-specific hazards and transform these to the cumulative incidence function; the second is to model directly on a transformation of the cumulative incidence function. We focus on the first approach in this paper. This paper advocates the use of the flexible parametric survival model in this competing risk framework. Results An illustrative example on the survival of breast cancer patients has shown that the flexible parametric proportional hazards model has almost perfect agreement with the Cox proportional hazards model. However, the large epidemiological data set used here shows clear evidence of non-proportional hazards. The flexible parametric model is able to adequately account for these through the incorporation of time-dependent effects. Conclusion A key advantage of using this approach is that smooth estimates of both the cause-specific hazard rates and the cumulative incidence functions can be obtained. It is also relatively easy to incorporate time-dependent effects which are commonly seen in epidemiological studies. PMID:23384310
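In discrete time, the first approach described in the Methods, transforming cause-specific hazards into cumulative incidence functions, amounts to accumulating CIF_k(t) = sum over s <= t of h_k(s) * S(s-), with the overall survival S updated by the total hazard. A small illustration with hypothetical hazards:

```python
def cumulative_incidence(hazards_by_cause):
    """Transform discrete-time cause-specific hazards into CIFs.

    `hazards_by_cause` is a list of per-period hazard lists, one per cause.
    Returns (overall survival curve, list of CIF curves). Purely
    illustrative; the paper works with smooth flexible parametric hazards.
    """
    n_periods = len(hazards_by_cause[0])
    surv, cifs = [], [[] for _ in hazards_by_cause]
    s_prev = 1.0                      # S(s-): survival just before period t
    for t in range(n_periods):
        total_h = sum(h[t] for h in hazards_by_cause)
        for k, h in enumerate(hazards_by_cause):
            prev = cifs[k][-1] if cifs[k] else 0.0
            cifs[k].append(prev + h[t] * s_prev)   # CIF_k(t) += h_k(t) S(t-)
        s_prev *= (1.0 - total_h)                  # overall survival update
        surv.append(s_prev)
    return surv, cifs
```

By construction the cause-specific CIFs sum to one minus overall survival, which is the consistency property that makes this route attractive in the competing risks setting.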
ERIC Educational Resources Information Center
Misailadou, Christina; Williams, Julian
2003-01-01
We report a study of 10-14 year old children's use of additive strategies while solving ratio and proportion tasks. Rasch methodology was used to develop a diagnostic instrument that reveals children's misconceptions. Two versions of this instrument, one with "models" thought to facilitate proportional reasoning and one without were…
ERIC Educational Resources Information Center
Fujimura, Nobuyuki
2001-01-01
One hundred forty fourth-graders were asked to solve proportion problems about juice-mixing situations both before and after an intervention that used a manipulative model or other materials in three experiments. Results indicate different approaches appear to be necessary to facilitate children's proportional reasoning, depending on the reasoning…
Yi, Grace Y; He, Wenqing
2012-05-01
It is well known that ignoring measurement error may result in substantially biased estimates in many contexts including linear and nonlinear regressions. For survival data with measurement error in covariates, there has been extensive discussion in the literature with the focus on proportional hazards (PH) models. Recently, research interest has extended to accelerated failure time (AFT) and additive hazards (AH) models. However, the impact of measurement error on other models, such as the proportional odds model, has received relatively little attention, although these models are important alternatives when PH, AFT, or AH models are not appropriate to fit data. In this paper, we investigate this important problem and study the bias induced by the naive approach of ignoring covariate measurement error. To adjust for the induced bias, we describe the simulation-extrapolation method. The proposed method enjoys a number of appealing features. Its implementation is straightforward and can be accomplished with minor modifications of existing software. More importantly, the proposed method does not require modeling the covariate process, which is quite attractive in practice. As the precise values of error-prone covariates are often not observable, any modeling assumption on such covariates carries the risk of model misspecification, yielding invalid inferences if this happens. The proposed method is carefully assessed both theoretically and empirically. Theoretically, we establish the asymptotic normality of the resulting estimators. Numerically, simulation studies are carried out to evaluate the performance of the estimators as well as the impact of ignoring measurement error, along with an application to a data set arising from the Busselton Health Study. Sensitivity of the proposed method to misspecification of the error model is studied as well.
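The simulation-extrapolation (SIMEX) idea described above can be illustrated on the simplest case, a regression slope attenuated by additive covariate error: deliberately add extra error at several multiples lambda of the error variance, track how the naive estimate degrades, and extrapolate back to lambda = -1, where no error remains. A toy sketch with illustrative settings (quadratic extrapolation through three points; not the survival-model version of the paper):

```python
import random

# Toy SIMEX sketch for a linear regression slope attenuated by additive
# measurement error. All settings (n, beta, error SDs) are illustrative.
random.seed(0)
n, beta, sigma_u = 5000, 1.0, 0.5
x = [random.gauss(0, 1) for _ in range(n)]
w = [xi + random.gauss(0, sigma_u) for xi in x]      # error-prone covariate
y = [beta * xi + random.gauss(0, 0.2) for xi in x]

def slope(u, v):
    """Ordinary least-squares slope of v on u."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / \
           sum((a - mu) ** 2 for a in u)

naive = slope(w, y)   # attenuated towards zero by the measurement error

# Simulation step: add extra error with variance lambda * sigma_u**2.
f1 = slope([wi + random.gauss(0, sigma_u) for wi in w], y)            # lam = 1
f2 = slope([wi + random.gauss(0, sigma_u * 2 ** 0.5) for wi in w], y) # lam = 2

# Extrapolation step: quadratic through (0, naive), (1, f1), (2, f2),
# evaluated at lambda = -1 (Lagrange form).
simex = 3 * naive - 3 * f1 + f2
```

The extrapolated `simex` estimate recovers most of the attenuation without ever modeling the unobserved covariate process, which is the feature the abstract highlights.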
Kovalchik, Stephanie A; Varadhan, Ravi; Weiss, Carlos O
2013-12-10
Understanding how individuals vary in their response to treatment is an important task of clinical research. For standard regression models, the proportional interactions model first described by Follmann and Proschan (1999) offers a powerful approach for identifying effect modification in a randomized clinical trial when multiple variables influence treatment response. In this paper, we present a framework for using the proportional interactions model in the context of a parallel-arm clinical trial with multiple prespecified candidate effect modifiers. To protect against model misspecification, we propose a selection strategy that considers all possible proportional interactions models. We develop a modified Bonferroni correction for multiple testing that accounts for the positive correlation among candidate models. We describe methods for constructing a confidence interval for the proportionality parameter. In simulation studies, we show that our modified Bonferroni adjustment controls familywise error and has greater power to detect proportional interactions compared with multiplicity-corrected subgroup analyses. We demonstrate our methodology using the Studies of Left Ventricular Dysfunction Treatment trial, a placebo-controlled randomized clinical trial of the efficacy of enalapril to reduce the risk of death or hospitalization in chronic heart failure patients. An R package called anoint is available for implementing the proportional interactions methodology.
The effect of different proportions of astragaloside and curcumin on DM model of mice.
Miao, Mingsan; Liu, Jing; Wang, Tan; Liang, Xue; Bai, Ming
2017-05-01
This paper aims to study the effects of different proportions of astragaloside and curcumin on the STZ-induced Diabetes Mellitus (DM) mouse model and to select a better proportion of the active components, ultimately laying a basis for follow-up research on an astragaloside-curcumin capsule. An increase-decrease baseline geometric proportion design method and comprehensive performance evaluation were used to study the effects of different proportions of astragaloside and curcumin on DM mouse models induced by intravenous tail-vein injection of STZ. The proportions of the two components were 10:0, 8:2, 7:3, 6:4, 5:5, 4:6, 3:7, 2:8 and 0:10, from which the optimal composition was screened. Glycated serum protein (GSP), hepatic glycogen and insulin were measured, and pathological changes in the pancreas were observed. The DM mouse model was established successfully. Compared with the model group, the groups treated with metformin or with different proportions of astragaloside and curcumin had lower blood glucose and GSP levels, increased glycogen stores by differing degrees, and reduced pathological changes of the pancreas. Based on the comprehensive performance evaluation, the ratio of 3:7 was selected as optimal, followed by 4:6. The combination of astragaloside and curcumin can lower blood glucose and GSP levels, promote glycogen formation, and improve pathological changes of the pancreas in the model mice.
ERIC Educational Resources Information Center
Liu, Xing
2008-01-01
The proportional odds (PO) model, which is also called cumulative odds model (Agresti, 1996, 2002 ; Armstrong & Sloan, 1989; Long, 1997, Long & Freese, 2006; McCullagh, 1980; McCullagh & Nelder, 1989; Powers & Xie, 2000; O'Connell, 2006), is one of the most commonly used models for the analysis of ordinal categorical data and comes from the class…
Sasidharan, Lekshmi; Menéndez, Mónica
2014-11-01
The conventional methods for crash injury severity analyses include either treating the severity data as ordered (e.g. ordered logit/probit models) or non-ordered (e.g. multinomial models). The ordered models require the data to meet proportional odds assumption, according to which the predictors can only have the same effect on different levels of the dependent variable, which is often not the case with crash injury severities. On the other hand, non-ordered analyses completely ignore the inherent hierarchical nature of crash injury severities. Therefore, treating the crash severity data as either ordered or non-ordered results in violating some of the key principles. To address these concerns, this paper explores the application of a partial proportional odds (PPO) model to bridge the gap between ordered and non-ordered severity modeling frameworks. The PPO model allows the covariates that meet the proportional odds assumption to affect different crash severity levels with the same magnitude; whereas the covariates that do not meet the proportional odds assumption can have different effects on different severity levels. This study is based on a five-year (2008-2012) national pedestrian safety dataset for Switzerland. A comparison between the application of PPO models, ordered logit models, and multinomial logit models for pedestrian injury severity evaluation is also included here. The study shows that PPO models outperform the other models considered based on different evaluation criteria. Hence, it is a viable method for analyzing pedestrian crash injury severities.
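The PPO structure can be written down compactly: cumulative logits share a set of thresholds, covariates meeting the proportional odds assumption get a single coefficient across severity levels, and the remaining covariates get level-specific coefficients. A sketch with made-up coefficients (note that badly chosen level-specific coefficients can produce negative cell probabilities, a known PPO caveat):

```python
import math

def ppo_probabilities(x_po, x_npo, alphas, beta_po, betas_npo):
    """Category probabilities under a partial proportional odds model.

    P(Y <= j) = logistic(alpha_j - beta_po * x_po - betas_npo[j] * x_npo):
    `x_po` keeps one coefficient across all thresholds (proportional odds
    part), `x_npo` gets a threshold-specific coefficient. All numbers used
    with this function here are illustrative, not estimates from the study.
    """
    logistic = lambda z: 1.0 / (1.0 + math.exp(-z))
    cum = [logistic(a - beta_po * x_po - b * x_npo)
           for a, b in zip(alphas, betas_npo)] + [1.0]
    # Cell probabilities are successive differences of cumulative ones.
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]
```

Setting all entries of `betas_npo` equal recovers the ordered logit model, which is the sense in which the PPO model bridges the ordered and non-ordered frameworks.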
Satellite image collection modeling for large area hazard emergency response
NASA Astrophysics Data System (ADS)
Liu, Shufan; Hodgson, Michael E.
2016-08-01
Timely collection of critical hazard information is key to intelligent and effective hazard emergency response decisions. Satellite remote sensing imagery provides an effective way to collect such information. Natural hazards, however, often have large impact areas - larger than a single satellite scene. Additionally, the hazard impact area may be discontinuous, particularly in flooding or tornado hazard events. In this paper, a spatial optimization model is proposed to solve the large-area satellite image acquisition planning problem in the context of hazard emergency response. In the model, a large hazard impact area is represented as multiple polygons, and image collection priorities for different portions of the impact area are addressed. The optimization problem is solved with an exact algorithm. Application results demonstrate that the proposed method can address the satellite image acquisition planning problem. A spatial decision support system supporting the optimization model was developed, and several examples of image acquisition problems are used to demonstrate the complexity of the problem and derive optimized solutions.
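At its core, the planning problem resembles a constrained selection: choose which impact-area polygons to task, subject to a scene budget, maximizing priority-weighted coverage. The brute-force toy below is only a stand-in to convey the structure; the paper's actual formulation and exact algorithm are richer:

```python
from itertools import combinations

def plan_acquisitions(polygons, max_scenes):
    """Toy stand-in for the acquisition-planning problem: pick polygons
    maximizing priority-weighted area under a scene budget.

    `polygons` is a list of (area_km2, priority, scenes_needed) tuples;
    all structure and field names here are our own illustration.
    """
    best, best_val = (), 0.0
    for r in range(len(polygons) + 1):
        for subset in combinations(range(len(polygons)), r):
            cost = sum(polygons[i][2] for i in subset)
            if cost <= max_scenes:
                val = sum(polygons[i][0] * polygons[i][1] for i in subset)
                if val > best_val:
                    best, best_val = subset, val
    return best, best_val
```

Even this toy shows why exact methods matter: with discontinuous impact areas, a high-priority small polygon plus a large low-priority one can beat the single biggest polygon, so greedy choices are not safe.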
Hazardous gas model evaluation with field observations
NASA Astrophysics Data System (ADS)
Hanna, S. R.; Chang, J. C.; Strimaitis, D. G.
Fifteen hazardous gas models were evaluated using data from eight field experiments. The models include seven publicly available models (AFTOX, DEGADIS, HEGADAS, HGSYSTEM, INPUFF, OB/DG and SLAB), six proprietary models (AIRTOX, CHARM, FOCUS, GASTAR, PHAST and TRACE), and two "benchmark" analytical models (the Gaussian Plume Model and the analytical approximations to the Britter and McQuaid Workbook nomograms). The field data were divided into three groups—continuous dense gas releases (Burro LNG, Coyote LNG, Desert Tortoise NH3-gas and aerosols, Goldfish HF-gas and aerosols, and Maplin Sands LNG), continuous passive gas releases (Prairie Grass and Hanford), and instantaneous dense gas releases (Thorney Island freon). The dense gas models that produced the most consistent predictions of plume centerline concentrations across the dense gas data sets are the Britter and McQuaid, CHARM, GASTAR, HEGADAS, HGSYSTEM, PHAST, SLAB and TRACE models, with relative mean biases of about ±30% or less and magnitudes of relative scatter that are about equal to the mean. The dense gas models tended to overpredict the plume widths and underpredict the plume depths by about a factor of two. All models except GASTAR, TRACE, and the area source version of DEGADIS perform fairly well with the continuous passive gas data sets. Some sensitivity studies were also carried out. It was found that three of the more widely used publicly-available dense gas models (DEGADIS, HGSYSTEM and SLAB) predicted increases in concentration of about 70% as roughness length decreased by an order of magnitude for the Desert Tortoise and Goldfish field studies. It was also found that none of the dense gas models that were considered came close to simulating the observed factor of two increase in peak concentrations as averaging time decreased from several minutes to 1 s. Because of their assumption that a concentrated dense gas core existed that was unaffected by variations in averaging time, the dense gas
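The "benchmark" Gaussian Plume Model mentioned above has a closed form for the ground-level centerline concentration from a continuous point source. The sketch below uses power-law dispersion coefficients as illustrative stand-ins for stability-class curves; the coefficients are assumptions of ours, not values from the study:

```python
import math

# Benchmark Gaussian plume: ground-level centerline concentration for a
# continuous point source, with ground reflection. The power-law dispersion
# coefficients sigma = a * x**b are illustrative placeholders for
# stability-class curves (e.g. Pasquill-Gifford), not fitted values.

def gaussian_plume(Q, u, x, H, ay=0.08, by=0.9, az=0.06, bz=0.85):
    """Concentration at downwind distance x (m) on the plume centerline.

    Q: emission rate (g/s), u: wind speed (m/s), H: effective release
    height (m). Returns g/m^3.
    """
    sigma_y = ay * x ** by            # lateral spread grows downwind
    sigma_z = az * x ** bz            # vertical spread grows downwind
    return (Q / (math.pi * u * sigma_y * sigma_z)) * \
        math.exp(-H ** 2 / (2 * sigma_z ** 2))
```

Simple forms like this are what the dense gas models were benchmarked against; they ignore gravity-driven slumping, which is why dense releases need the specialised models listed in the abstract.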
NASA Astrophysics Data System (ADS)
Wright, Vince
2014-03-01
Pirie and Kieren (1989, For the Learning of Mathematics, 9(3), 7-11; 1992, Journal of Mathematical Behavior, 11, 243-257; 1994a, Educational Studies in Mathematics, 26, 61-86; 1994b, For the Learning of Mathematics, 14(1), 39-43) created a model (P-K) that describes a dynamic and recursive process by which learners develop their mathematical understanding. The model was adapted to create the teaching model used in the New Zealand Numeracy Development Projects (Ministry of Education, 2007). A case study of a 3-week sequence of instruction with a group of eight 12- and 13-year-old students provided the data. The teacher/researcher used folding back to materials and images and progressing from materials to imaging to number properties to assist students to develop their understanding of frequencies as proportions. The data show that successful implementation of the model is dependent on the teacher noticing and responding to the layers of understanding demonstrated by the students and the careful selection of materials, problems and situations. It supports the use of the model as a useful part of teachers' instructional strategies and the importance of pedagogical content knowledge to the quality of the way the model is used.
Decision-Tree Models of Categorization Response Times, Choice Proportions, and Typicality Judgments
ERIC Educational Resources Information Center
Lafond, Daniel; Lacouture, Yves; Cohen, Andrew L.
2009-01-01
The authors present 3 decision-tree models of categorization adapted from T. Trabasso, H. Rollins, and E. Shaughnessy (1971) and use them to provide a quantitative account of categorization response times, choice proportions, and typicality judgments at the individual-participant level. In Experiment 1, the decision-tree models were fit to…
Natural hazard modeling and uncertainty analysis [Chapter 2
Matthew Thompson; Jord J. Warmink
2017-01-01
Modeling can play a critical role in assessing and mitigating risks posed by natural hazards. These modeling efforts generally aim to characterize the occurrence, intensity, and potential consequences of natural hazards. Uncertainties surrounding the modeling process can have important implications for the development, application, evaluation, and interpretation of...
Fuzzy portfolio model with fuzzy-input return rates and fuzzy-output proportions
NASA Astrophysics Data System (ADS)
Tsaur, Ruey-Chyn
2015-02-01
In the finance market, a short-term investment strategy is usually applied in portfolio selection in order to reduce investment risk; however, the economy is uncertain and the investment period is short. Further, an investor has incomplete information for selecting a portfolio with crisp proportions for each chosen security. In this paper we present a new method of constructing a fuzzy portfolio model with fuzzy-input return rates and fuzzy-output proportions, based on possibilistic mean-standard deviation models. Furthermore, we consider both excess and shortage of investment in different economic periods by using a fuzzy constraint for the sum of the fuzzy proportions, and we also account for the risks of securities investment and the vagueness of incomplete information during periods of economic depression. Finally, we present a numerical example of a portfolio selection problem to illustrate the proposed model, and a sensitivity analysis is performed based on the results.
Grievink, Liat Shavit; Penny, David; Hendy, Michael D.; Holland, Barbara R.
2010-01-01
Commonly used phylogenetic models assume a homogeneous process through time in all parts of the tree. However, it is known that these models can be too simplistic as they do not account for nonhomogeneous lineage-specific properties. In particular, it is now widely recognized that as constraints on sequences evolve, the proportion and positions of variable sites can vary between lineages causing heterotachy. The extent to which this model misspecification affects tree reconstruction is still unknown. Here, we evaluate the effect of changes in the proportions and positions of variable sites on model fit and tree estimation. We consider 5 current models of nucleotide sequence evolution in a Bayesian Markov chain Monte Carlo framework as well as maximum parsimony (MP). We show that for a tree with 4 lineages where 2 nonsister taxa undergo a change in the proportion of variable sites tree reconstruction under the best-fitting model, which is chosen using a relative test, often results in the wrong tree. In this case, we found that an absolute test of model fit is a better predictor of tree estimation accuracy. We also found further evidence that MP is not immune to heterotachy. In addition, we show that increased sampling of taxa that have undergone a change in proportion and positions of variable sites is critical for accurate tree reconstruction. PMID:20525636
Lahar Hazard Modeling at Tungurahua Volcano, Ecuador
NASA Astrophysics Data System (ADS)
Sorensen, O. E.; Rose, W. I.; Jaya, D.
2003-04-01
LAHARZ, a program that delineates lahar-hazard-zones using a digital elevation model (DEM), was used to construct a hazard map for the volcano. The 10 meter resolution DEM was constructed for Tungurahua Volcano using scanned topographic lines obtained from the GIS Department at the Escuela Politécnica Nacional, Quito, Ecuador. The steep topographic gradients and rapid downcutting of most rivers draining the edifice prevent the deposition of lahars on the lower flanks of Tungurahua. Modeling confirms the high degree of flow channelization in the deep Tungurahua canyons. Inundation zones observed and shown by LAHARZ at Baños yield identification of safe zones within the city which would provide safety from even the largest magnitude lahar expected.
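LAHARZ delineates inundation zones from semi-empirical scaling relations between lahar volume V and the inundated cross-sectional area A and planimetric area B (Iverson et al., 1998). A sketch using the commonly quoted lahar calibration coefficients, which should be treated here as assumptions rather than values taken from this study:

```python
def laharz_inundation(volume_m3, c_a=0.05, c_b=200.0):
    """Semi-empirical LAHARZ-style scaling of inundation with lahar volume.

    A = c_a * V**(2/3) is the maximum inundated cross-sectional area and
    B = c_b * V**(2/3) the total planimetric area; the default coefficients
    are the commonly quoted lahar calibration (Iverson et al., 1998) and
    are assumptions for this sketch.
    """
    a = c_a * volume_m3 ** (2.0 / 3.0)
    b = c_b * volume_m3 ** (2.0 / 3.0)
    return a, b
```

The two-thirds exponent means a tenfold increase in design volume enlarges the mapped areas by only about a factor of 4.6, which is why LAHARZ hazard maps are usually drawn for a nested series of volumes.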
Incident Duration Modeling Using Flexible Parametric Hazard-Based Models
2014-01-01
Assessing and prioritizing the duration and effects of traffic incidents on major roads present significant challenges for road network managers. This study examines the effect of numerous factors associated with various types of incidents on their duration and proposes an incident duration prediction model. Several parametric accelerated failure time hazard-based models were examined, including Weibull, log-logistic, log-normal, and generalized gamma, as well as all of these models with gamma heterogeneity, and flexible parametric hazard-based models with degrees of freedom ranging from one to ten, by analyzing a traffic incident dataset obtained from the Incident Reporting and Dispatching System in Beijing in 2008. Results show that different factors significantly affect different incident time phases, for which the best-fitting distributions differed. Given the best hazard-based models for each incident time phase, the prediction results are reasonable for most incidents. The results of this study can aid traffic incident management agencies not only in implementing strategies that would reduce incident duration, and thus reduce congestion, secondary incidents, and the associated human and economic losses, but also in effectively predicting incident duration time. PMID:25530753
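The limitation motivating the flexible parametric forms is visible in the Weibull hazard itself: h(t) = (k/lambda) * (t/lambda)**(k-1) is monotone in t, so no single Weibull can reproduce a non-monotone clearance rate. A short illustration:

```python
import math

# Weibull hazard and survival functions. The hazard is monotone increasing
# (shape > 1), monotone decreasing (shape < 1) or constant (shape = 1),
# so it cannot bend into a bathtub or hump shape on its own.

def weibull_hazard(t, shape, scale):
    # h(t) = (shape/scale) * (t/scale)**(shape - 1)
    return (shape / scale) * (t / scale) ** (shape - 1)

def weibull_survival(t, shape, scale):
    # S(t) = exp(-(t/scale)**shape)
    return math.exp(-(t / scale) ** shape)
```

Flexible parametric models with several degrees of freedom, as compared in the study, relax exactly this monotonicity constraint while keeping a smooth parametric form for prediction.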
Modeling seismic hazard in the Lower Rhine Graben using a fault-based source model
NASA Astrophysics Data System (ADS)
Vanneste, Kris; Vleminckx, Bart; Verbeeck, Koen; Camelbeeck, Thierry
2013-04-01
The Lower Rhine Graben (LRG) is an active tectonic structure in intraplate NW Europe. It is characterized by NW-SE oriented normal faults, and moderate but rather continuous seismic activity. Probabilistic seismic hazard assessments (PSHA) in this region have hitherto been based on area source models, in which the LRG is modeled as a single or a small number of seismotectonic zones, where the occurrence of earthquakes is assumed to be uniform. Hazard engines usually model earthquakes in area sources as point sources or finite ruptures in a horizontal plane at a fixed depth. In the past few years, efforts have increasingly been directed to using fault sources in PSHA, in order to obtain more realistic patterns of ground motion. This requires an inventory of all fault sources, and definition of their physical properties (at least length, width, strike, dip, rake, slip rate, and maximum magnitude). The LRG is one of the few regions in intraplate NW Europe where seismic activity can be linked to active faults. In the frame of the EC project SHARE ("Seismic Hazard Harmonization in Europe", http://www.share-eu.org/), we have compiled the first parameterized fault model for the LRG that can be used in PSHA studies. We construct the magnitude-frequency distribution (MFD) of each fault from two contributions: 1) up to the largest observed magnitude (M=5.7), we use the MFD determined from the historical and instrumental earthquake catalog, weighted in proportion to the total moment rate, and 2) the frequency of the maximum earthquake predicted by the fault model. We consider the ground-motion prediction equations (GMPE) that were selected in the SHARE project for active shallow crust. This selection includes GMPEs with different distance metrics, the main difference being whether depth of rupture is taken into account or not. Seismic hazard is computed with OpenQuake (http://openquake.org/), an open-source hazard and risk engine that is developed in the frame of the Global
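The catalog-based contribution to such a magnitude-frequency distribution is typically a truncated Gutenberg-Richter relation. A sketch of turning cumulative annual rates N(>=m) = 10**(a - b*m), truncated at the fault's maximum magnitude, into incremental bin rates (the a and b values used below are illustrative, not the LRG estimates):

```python
def truncated_gr_rates(a, b, m_min, m_max, dm=0.1):
    """Incremental annual rates from a truncated Gutenberg-Richter MFD.

    The rate of events with magnitude in [m, m + dm) is N(>=m) - N(>=m+dm),
    where N(>=m) = 10**(a - b*m) minus the tail beyond m_max, so that the
    distribution is truncated at the fault's maximum magnitude.
    """
    def cum(m):
        # Truncation: no events above m_max.
        return 10 ** (a - b * min(m, m_max)) - 10 ** (a - b * m_max)
    rates = []
    m = m_min
    while m < m_max - 1e-9:
        rates.append((round(m, 2), cum(m) - cum(m + dm)))
        m += dm
    return rates
```

Bin rates like these, together with the frequency of the fault's characteristic maximum event, are the two ingredients the abstract combines into each fault's MFD before handing the model to the hazard engine.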
Regression model estimation of early season crop proportions: North Dakota, some preliminary results
NASA Technical Reports Server (NTRS)
Lin, K. K. (Principal Investigator)
1982-01-01
To estimate crop proportions early in the season, an approach is proposed based on: use of a regression-based prediction equation to obtain an a priori estimate for specific major crop groups; modification of this estimate using current-year LANDSAT and weather data; and a breakdown of the major crop groups into specific crops by regression models. Results from the development and evaluation of appropriate regression models for the first portion of the proposed approach are presented. The results show that the model predicts 1980 crop proportions very well at both the county and crop reporting district levels. In terms of planted acreage, the model underpredicted the 1980 published county-level figures by 9.1 percent. It predicted the 1980 published crop reporting district figures almost exactly, overpredicting planted acreage by just 0.92 percent.
Coats, D.W.; Murray, R.C.
1985-08-01
Lawrence Livermore National Laboratory (LLNL) has developed seismic and wind hazard models for the Office of Nuclear Safety (ONS), Department of Energy (DOE). The work is part of a three-phase effort aimed at establishing uniform building design criteria for seismic and wind hazards at DOE sites throughout the United States. This report summarizes the final wind/tornado hazard models recommended for each site and the methodology used to develop these models. Final seismic hazard models have been published separately by TERA Corporation. In the final phase, it is anticipated that the DOE will use the hazard models to establish uniform criteria for the design and evaluation of critical facilities. 19 refs., 3 figs., 9 tabs.
Modelling direct tangible damages due to natural hazards
NASA Astrophysics Data System (ADS)
Kreibich, H.; Bubeck, P.
2012-04-01
Europe has witnessed a significant increase in direct damages from natural hazards. A further damage increase is expected due to the on-going accumulation of people and economic assets in risk-prone areas and the effects of climate change, for instance, on the severity and frequency of drought events in the Mediterranean basin. In order to mitigate the impact of natural hazards, improved risk management based on reliable risk analysis is needed. In particular, much research effort is still needed to improve the modelling of damage due to natural hazards. In comparison with hazard modelling, simple approaches still dominate damage assessments, mainly due to limitations in available data and in knowledge of damaging processes and influencing factors. Within the EU project ConHaz, methods as well as data sources and terminology for damage assessments were compiled, systemized and analysed. Similarities and differences between the approaches concerning floods, alpine hazards, coastal hazards and droughts were identified. Approaches for significant improvements of direct tangible damage modelling, with a particular focus on cross-hazard learning, will be presented. Examples from different hazards and countries will be given of how to improve damage databases, the understanding of damaging processes and damage models, and of how to conduct improvements via validation and uncertainty analyses.
Outcome-Dependent Sampling Design and Inference for Cox's Proportional Hazards Model.
Yu, Jichang; Liu, Yanyan; Cai, Jianwen; Sandler, Dale P; Zhou, Haibo
2016-11-01
We propose a cost-effective outcome-dependent sampling (ODS) design for failure time data and develop an efficient inference procedure for data collected with this design. To account for the biased sampling scheme, we derive estimators from a weighted partial likelihood estimating equation. The proposed estimators for regression parameters are shown to be consistent and asymptotically normally distributed. A criterion that can be used to optimally implement the ODS design in practice is proposed and studied. The small sample performance of the proposed method is evaluated by simulation studies. The proposed design and inference procedure is shown to be statistically more powerful than existing alternative designs with the same sample sizes. We illustrate the proposed method with existing real data from the Cancer Incidence and Mortality of Uranium Miners Study.
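The weighted partial likelihood idea can be illustrated with a minimal single-covariate sketch. This is a generic Breslow-type weighted Cox fit, not the authors' actual estimating equation; their ODS weights are specific to the sampling design, whereas the weights here are arbitrary inputs:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def weighted_cox_beta(time, event, x, w):
    """Single-covariate Cox model fitted by maximizing a
    weighted (Breslow-type) log partial likelihood."""
    time, event, x, w = map(np.asarray, (time, event, x, w))

    def neg_logpl(beta):
        ll = 0.0
        for i in np.where(event == 1)[0]:
            risk = time >= time[i]  # risk set at this failure time
            ll += w[i] * (x[i] * beta
                          - np.log(np.sum(w[risk] * np.exp(x[risk] * beta))))
        return -ll

    res = minimize_scalar(neg_logpl, bounds=(-5, 5), method="bounded")
    return res.x
```

With unit weights this reduces to the ordinary Cox partial likelihood estimator; design-based weights simply reweight each subject's contribution to the score.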
Physical vulnerability modelling in natural hazard risk assessment
NASA Astrophysics Data System (ADS)
Douglas, J.
2007-04-01
An evaluation of the risk to an exposed element from a hazardous event requires a consideration of the element's vulnerability, which expresses its propensity to suffer damage. This concept allows the assessed level of hazard to be translated to an estimated level of risk and is often used to evaluate the risk from earthquakes and cyclones. However, for other natural perils, such as mass movements, coastal erosion and volcanoes, the incorporation of vulnerability within risk assessment is not well established, and consequently quantitative risk estimations are not often made. This impedes the study of the relative contributions from different hazards to the overall risk at a site. Physical vulnerability is poorly modelled for many reasons: human casualties are often caused by the event itself rather than by building damage; observational data on the hazard, the elements at risk and the induced damage are lacking; the structural damage mechanisms are complex; the temporal and geographical scales vary; and the hazard level may itself be modified. Many of these causes are related to the nature of the peril; therefore, for some hazards, such as coastal erosion, the benefits of considering an element's physical vulnerability may be limited. However, for hazards such as volcanoes and mass movements, the modelling of vulnerability should be improved by, for example, following the efforts made in earthquake risk assessment: additional observational data on induced building damage and the hazardous event should be routinely collected and correlated, and numerical modelling of building behaviour during a damaging event should be attempted.
Cai, Qing; Abdel-Aty, Mohamed; Lee, Jaeyoung
2017-07-25
This study aims at contributing to the literature on pedestrian and bicyclist safety by building on conventional count regression models to explore exogenous factors affecting pedestrian and bicyclist crashes at the macroscopic level. In traditional count models, the effects of exogenous factors on non-motorist crashes are investigated directly. However, vulnerable road users' crashes are collisions between vehicles and non-motorists, so exogenous factors can affect non-motorist crashes through both the non-motorists and the vehicle drivers. To accommodate these potentially different impacts, we express non-motorist crash counts as the product of total crash counts and the proportion of non-motorist crashes, and formulate a joint model combining a negative binomial (NB) model and a logit model to handle the two parts, respectively. The formulated joint model is estimated using non-motorist crash data based on the Traffic Analysis Districts (TADs) in Florida. Meanwhile, the traditional NB model is also estimated and compared with the joint model. The result indicates that the joint model provides better data fit and can identify more significant variables. Subsequently, a novel joint screening method is suggested based on the proposed model to identify hot zones for non-motorist crashes. The hot zones of non-motorist crashes are identified and divided into three types: hot zones with a more dangerous driving environment only, hot zones with more hazardous walking and cycling conditions only, and hot zones with both. It is expected that the joint model and screening method can help decision makers, transportation officials, and community planners make more efficient treatments to proactively improve pedestrian and bicyclist safety. Published by Elsevier Ltd.
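The decomposition at the heart of the joint model, non-motorist crashes as total crashes times the non-motorist proportion, can be sketched as follows. The link functions are the standard ones (log for the NB mean, logit for the proportion), but the covariate matrices and coefficient vectors here are placeholders, not those estimated in the paper:

```python
import numpy as np

def predict_nonmotorist(X_count, beta_count, X_prop, beta_prop):
    """Joint-model prediction: expected non-motorist crashes in a zone
    equal the NB-predicted total crash count (log link) times the
    logit-predicted proportion of crashes involving non-motorists."""
    total = np.exp(X_count @ beta_count)                 # NB mean, log link
    prop = 1.0 / (1.0 + np.exp(-(X_prop @ beta_prop)))   # proportion, logit link
    return total * prop
```

A zone can then rank high because driving is dangerous (large `total`), because walking and cycling are hazardous (large `prop`), or both, which is exactly the three-way hot-zone typology the abstract describes.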
Pineda, M; Weijer, C J; Eftimie, R
2015-04-07
Understanding the mechanisms that control tissue morphogenesis and homeostasis is a central goal in developmental biology and also has great relevance for our understanding of various diseases, including cancer. A model organism that is widely used to study the control of tissue morphogenesis and proportioning is Dictyostelium discoideum. While there are mathematical models describing the role of chemotactic cell motility in Dictyostelium assembly and morphogenesis of multicellular tissues, as well as models addressing possible mechanisms of proportion regulation, there are no models incorporating both these key aspects of development. In this paper, we introduce a 1D hyperbolic model to investigate the role of two morphogens, DIF and cAMP, on cell movement, cell sorting, cell-type differentiation and proportioning in Dictyostelium discoideum. First, we use the non-spatial version of the model to study cell-type transdifferentiation. We perform a steady-state analysis and show that, depending on the shape of the differentiation rate functions, multiple steady-state solutions may occur. Then we incorporate spatial dynamics into the model and investigate the transdifferentiation and spatial positioning of cells inside the newly formed structures, following the removal of the prestalk or prespore region of a Dictyostelium slug. We show that in isolated prespore fragments, a tipped mound-like aggregate can be formed after transdifferentiation from prespore to prestalk cells and the sorting of prestalk cells to the centre of the aggregate. For isolated prestalk fragments, we show the formation of a slug-like structure containing the usual anterior-posterior pattern of prestalk and prespore cells.
A model based on crowdsourcing for detecting natural hazards
NASA Astrophysics Data System (ADS)
Duan, J.; Ma, C.; Zhang, J.; Liu, S.; Liu, J.
2015-12-01
Remote sensing technology provides a new method for the detection, early warning, mitigation and relief of natural hazards. Given the suddenness and the unpredictability of the location of natural hazards, as well as the practical demands of hazard work, this article proposes an evaluation model for remote sensing detection of natural hazards based on crowdsourcing. Firstly, using the crowdsourcing model and with the help of the Internet and the power of hundreds of millions of Internet users, the evaluation model provides visual interpretation of high-resolution remote sensing images of hazard areas and collects massive amounts of valuable disaster data; secondly, it adopts a dynamic-voting consistency strategy to evaluate the disaster data provided by the crowdsourcing workers; thirdly, it pre-estimates the disaster severity with a disaster pre-evaluation model based on regional buffers; lastly, it actuates the corresponding expert system according to the forecast results. The idea of this model breaks the boundaries between geographic information professionals and the public, enables public participation and citizen science to be realized, and improves the accuracy and timeliness of hazard assessment results.
2015 USGS Seismic Hazard Model for Induced Seismicity
NASA Astrophysics Data System (ADS)
Petersen, M. D.; Mueller, C. S.; Moschetti, M. P.; Hoover, S. M.; Ellsworth, W. L.; Llenos, A. L.; Michael, A. J.
2015-12-01
Over the past several years, the seismicity rate has increased markedly in multiple areas of the central U.S. Studies have tied the majority of this increased activity to wastewater injection in deep wells and hydrocarbon production. These earthquakes are induced by human activities that change rapidly based on economic and policy decisions, making them difficult to forecast. Our 2014 USGS National Seismic Hazard Model and previous models are intended to provide the long-term hazard (2% probability of exceedance in 50 years) and are based on seismicity rates and patterns observed mostly from tectonic earthquakes. However, potentially induced earthquakes were identified in 14 regions that were not included in the earthquake catalog used for constructing the 2014 model. We recognized the importance of considering these induced earthquakes in a separate hazard analysis, and as a result in April 2015 we released preliminary models that explored the impact of this induced seismicity on the hazard. Several factors are important in determining the hazard from induced seismicity: the period of the catalog that optimally forecasts the next year's activity, the earthquake magnitude-rate distribution, earthquake location statistics, maximum magnitude, ground motion models, and industrial drivers such as injection rates. The industrial drivers are not currently available in a form that we can implement in a 1-year model. Hazard model inputs have been evaluated by a broad group of scientists and engineers to assess the range of acceptable models. Results indicate that next year's hazard is significantly higher, by more than a factor of three, in Oklahoma, Texas, and Colorado compared to the long-term 2014 hazard model. These results have raised concern about the impacts of induced earthquakes on the built environment and have led to many engineering and policy discussions about how to mitigate these effects for the more than 7 million people who live near areas of induced seismicity.
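The relation between the long-term hazard metric (2% probability of exceedance in 50 years) and an annual exceedance rate follows from the usual Poisson assumption, P = 1 - exp(-rate * t). A small sketch of the conversion:

```python
import math

def rate_from_poe(poe, t_years):
    """Annual exceedance rate implied by a Poisson probability of
    exceedance over an exposure time (e.g. 2% in 50 years)."""
    return -math.log(1.0 - poe) / t_years

def poe_from_rate(rate, t_years):
    """Inverse: probability of at least one exceedance in t_years."""
    return 1.0 - math.exp(-rate * t_years)
```

For example, 2% in 50 years corresponds to a return period of roughly 2,475 years, while a 1-year induced-seismicity model simply sets t_years = 1.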
Hybrid internal model control and proportional control of chaotic dynamical systems.
Qi, Dong-lian; Yao, Liang-bin
2004-01-01
A new chaos control method is proposed to take advantage of chaos or avoid it. A hybrid learning scheme combining Internal Model Control and Proportional Control is introduced. In order to attain the desired robust performance and ensure the system's stability, adaptive momentum algorithms are also developed. Through properly designing the neural network plant model and neural network controller, the chaotic dynamical system is controlled while the parameters of the BP neural network are modified. Taking the Lorenz chaotic system as an example, the results show that chaotic dynamical systems can be stabilized at the desired orbits by this control strategy.
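The Lorenz system used as the control example is straightforward to simulate. Below is a minimal RK4 integrator for the uncontrolled plant with the standard chaotic parameters (sigma=10, rho=28, beta=8/3); the neural-network controller itself is not reproduced here:

```python
import numpy as np

def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One fourth-order Runge-Kutta step of the Lorenz system,
    the benchmark chaotic plant used in the paper's example."""
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x),
                         x * (rho - z) - y,
                         x * y - beta * z])
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
```

Iterating this map from almost any initial condition produces the bounded but non-periodic trajectory that a control scheme would aim to stabilize onto a desired orbit.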
Eolian Modeling System: Predicting Windblown Dust Hazards in Battlefield Environments
2011-05-03
Final Report for the Eolian Modeling System (EMS): Predicting Windblown Sand and Dust Hazards in Battlefield Environments. The objectives of the research were to 1) develop numerical models for eolian (windblown sand and dust) hazards in battlefield environments and to understand the implications of eolian transport for environmental processes such as soil and desert pavement formation.
Development of Additional Hazard Assessment Models
1977-03-01
Heat- and mass-transfer coefficients for liquid drops are obtained from correlations in the Hazard Assessment Handbook (Ref. 5), which express the heat-transfer coefficient h and the mass-transfer coefficient in terms of the drop Reynolds number (Re), Prandtl number (Pr) and Schmidt number (Sc); the correlation was applied for drops of all sizes. [The remainder of the extracted text consists of equation and nomenclature fragments that are not recoverable.]
Context-Specific Proportion Congruency Effects: An Episodic Learning Account and Computational Model
Schmidt, James R.
2016-01-01
In the Stroop task, participants identify the print color of color words. The congruency effect is the observation that response times and errors are increased when the word and color are incongruent (e.g., the word “red” in green ink) relative to when they are congruent (e.g., “red” in red). The proportion congruent (PC) effect is the finding that congruency effects are reduced when trials are mostly incongruent rather than mostly congruent. This PC effect can be context-specific. For instance, if trials are mostly incongruent when presented in one location and mostly congruent when presented in another location, the congruency effect is smaller for the former location. Typically, PC effects are interpreted in terms of strategic control of attention in response to conflict, termed conflict adaptation or conflict monitoring. In the present manuscript, however, an episodic learning account is presented for context-specific proportion congruent (CSPC) effects. In particular, it is argued that context-specific contingency learning can explain part of the effect, and context-specific rhythmic responding can explain the rest. Both contingency-based and temporal-based learning can parsimoniously be conceptualized within an episodic learning framework. An adaptation of the Parallel Episodic Processing model is presented. This model successfully simulates CSPC effects, both for contingency-biased and contingency-unbiased (transfer) items. The same fixed-parameter model can explain a range of other findings from the learning, timing, binding, practice, and attentional control domains. PMID:27899907
Network growth models: A behavioural basis for attachment proportional to fitness
NASA Astrophysics Data System (ADS)
Bell, Michael; Perera, Supun; Piraveenan, Mahendrarajah; Bliemer, Michiel; Latty, Tanya; Reid, Chris
2017-02-01
Several growth models have been proposed in the literature for scale-free complex networks, with a range of fitness-based attachment models gaining prominence recently. However, the processes by which such fitness-based attachment behaviour can arise are less well understood, making it difficult to compare the relative merits of such models. This paper analyses an evolutionary mechanism that would give rise to a fitness-based attachment process. In particular, it is proven by analytical and numerical methods that in homogeneous networks, the minimisation of maximum exposure to node unfitness leads to attachment probabilities that are proportional to node fitness. This result is then extended to heterogeneous networks, with supply chain networks being used as an example.
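Fitness-proportional attachment of the kind analysed here is easy to simulate. The sketch below is illustrative only: the seed network and one-edge-per-arrival growth schedule are simplifying choices, not the paper's exact model:

```python
import random

def grow_fitness_network(n, m, fitness):
    """Grow a network in which each arriving node attaches to m
    existing nodes chosen with probability proportional to fitness."""
    edges = []
    nodes = list(range(m))  # small seed network
    for new in range(m, n):
        weights = [fitness[v] for v in nodes]
        targets = set()
        while len(targets) < min(m, len(nodes)):
            # sample with probability proportional to node fitness
            targets.add(random.choices(nodes, weights=weights)[0])
        edges.extend((new, t) for t in targets)
        nodes.append(new)
    return edges
```

Running this with one high-fitness node among many unit-fitness nodes shows the expected behaviour: the fit node accumulates a degree far above the network average, which is the hallmark of fitness-based attachment.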
Bejan-Angoulvant, Theodora; Bouvier, Anne-Marie; Bossard, Nadine; Belot, Aurelien; Jooste, Valérie; Launoy, Guy; Remontet, Laurent
2008-01-01
Hazard regression models and cure rate models can be advantageously used in cancer relative survival analysis. We explored the advantages and limits of these two models in colon cancer and focused on the prognostic impact of the year of diagnosis on survival according to the TNM stage at diagnosis. The analysis concerned 9,998 patients from three French registries. In the hazard regression model, the baseline excess death hazard and the time-dependent effects of covariates were modelled using regression splines. The cure rate model estimated the proportion of 'cured' patients and the excess death hazard in 'non-cured' patients. The effects of year of diagnosis on these parameters were estimated for each TNM cancer stage. With the hazard regression model, the excess death hazard decreased significantly with more recent years of diagnosis (hazard ratio, HR 0.97 in stage III and 0.98 in stage IV, P < 0.001). In these advanced stages, this favourable effect was limited to the first years of follow-up. With the cure rate model, recent years of diagnosis were significantly associated with longer survival in 'non-cured' patients with advanced stages (HR 0.95 in stage III and 0.97 in stage IV, P < 0.001) but had no significant effect on cure (odds ratio, OR 0.99 in stages III and IV, P > 0.5). The two models were complementary and concordant in estimating colon cancer survival and the effects of covariates. They provided two different points of view of the same phenomenon: recent years of diagnosis had a favourable effect on survival, but not on cure.
Yu, Wenbao; He, Bing; Tan, Kai
2017-09-14
The spatial organization of the genome plays a critical role in regulating gene expression. Recent chromatin interaction mapping studies have revealed that topologically associating domains and subdomains are fundamental building blocks of the three-dimensional genome. Identifying such hierarchical structures is a critical step toward understanding the three-dimensional structure-function relationship of the genome. Existing computational algorithms lack statistical assessment of domain predictions and are computationally inefficient for high-resolution Hi-C data. We introduce the Gaussian Mixture model And Proportion test (GMAP) algorithm to address the above-mentioned challenges. Using simulated and experimental Hi-C data, we show that domains identified by GMAP are more consistent with multiple lines of supporting evidence than three state-of-the-art methods. Application of GMAP to normal and cancer cells reveals several unique features of subdomain boundary as compared to domain boundary, including its higher dynamics across cell types and enrichment for somatic mutations in cancer. Spatial organization of the genome plays a crucial role in regulating gene expression. Here the authors introduce GMAP, the Gaussian Mixture model And Proportion test, to identify topologically associating domains and subdomains in Hi-C data.
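The "proportion test" component can be illustrated with a generic two-sample proportion z-test comparing, say, contact counts within versus across a candidate domain boundary. This is a standard statistic chosen for illustration; the actual GMAP test may differ in detail:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sample proportion z-test: the kind of statistical
    assessment applied to within- vs cross-boundary contact counts."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)  # pooled proportion under H0: p1 == p2
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    pval = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value
    return z, pval
```

A large z (small p) for within- versus cross-boundary contact proportions is the sort of evidence that lets a boundary call come with a statistical assessment rather than a bare heuristic score.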
Cai, Gaigai; Chen, Xuefeng; Li, Bing; Chen, Baojia; He, Zhengjia
2012-01-01
The reliability of cutting tools is critical to machining precision and production efficiency. The conventional statistic-based reliability assessment method aims at providing a general and overall estimation of reliability for a large population of identical units under given and fixed conditions. However, it has limited effectiveness in depicting the operational characteristics of a cutting tool. To overcome this limitation, this paper proposes an approach to assess the operation reliability of cutting tools. A proportional covariate model is introduced to construct the relationship between operation reliability and condition monitoring information. The wavelet packet transform and an improved distance evaluation technique are used to extract sensitive features from vibration signals, and a covariate function is constructed based on the proportional covariate model. Ultimately, the failure rate function of the cutting tool being assessed is calculated using the baseline covariate function obtained from a small sample of historical data. Experimental results and a comparative study show that the proposed method is effective for assessing the operation reliability of cutting tools. PMID:23201980
Li, Haocheng; Kozey-Keadle, Sarah; Kipnis, Victor; Carroll, Raymond J
2016-01-01
Motivated by physical activity data obtained from the BodyMedia FIT device (www.bodymedia.com), we take a functional data approach for longitudinal studies with continuous proportional outcomes. The functional structure depends on three factors. In our three-factor model, the regression structures are specified as curves measured at various factor-points with random effects that have a correlation structure. The random curve for the continuous factor is summarized using a few important principal components. The difficulties in handling the continuous proportion variables are solved by using a quasilikelihood type approximation. We develop an efficient algorithm to fit the model, which involves the selection of the number of principal components. The method is evaluated empirically by a simulation study. This approach is applied to the BodyMedia data with 935 males and 84 consecutive days of observation, for a total of 78,540 observations. We show that sleep efficiency increases with increasing physical activity, while its variance decreases at the same time.
A high-resolution global flood hazard model
NASA Astrophysics Data System (ADS)
Sampson, Christopher C.; Smith, Andrew M.; Bates, Paul B.; Neal, Jeffrey C.; Alfieri, Lorenzo; Freer, Jim E.
2015-09-01
Floods are a natural hazard that affect communities worldwide, but to date the vast majority of flood hazard research and mapping has been undertaken by wealthy developed nations. As populations and economies have grown across the developing world, so too has demand from governments, businesses, and NGOs for modeled flood hazard data in these data-scarce regions. We identify six key challenges faced when developing a flood hazard model that can be applied globally and present a framework methodology that leverages recent cross-disciplinary advances to tackle each challenge. The model produces return period flood hazard maps at ˜90 m resolution for the whole terrestrial land surface between 56°S and 60°N, and results are validated against high-resolution government flood hazard data sets from the UK and Canada. The global model is shown to capture between two thirds and three quarters of the area determined to be at risk in the benchmark data without generating excessive false positive predictions. When aggregated to ˜1 km, mean absolute error in flooded fraction falls to ˜5%. The full complexity global model contains an automatically parameterized subgrid channel network, and comparison to both a simplified 2-D only variant and an independently developed pan-European model shows the explicit inclusion of channels to be a critical contributor to improved model performance. While careful processing of existing global terrain data sets enables reasonable model performance in urban areas, adoption of forthcoming next-generation global terrain data sets will offer the best prospect for a step-change improvement in model performance.
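Validation against benchmark flood maps of this kind is typically scored with binary-pattern measures such as the hit rate (fraction of benchmark flooded area captured) and the false-alarm ratio (fraction of modelled flood not in the benchmark). A minimal sketch of these scores (the paper's exact fit statistic is not reproduced here):

```python
import numpy as np

def flood_fit_scores(model, benchmark):
    """Binary-pattern comparison of flood extent maps: hit rate and
    false-alarm ratio computed cell-by-cell against a benchmark."""
    model = np.asarray(model, dtype=bool)
    bench = np.asarray(benchmark, dtype=bool)
    hits = np.sum(model & bench)            # flooded in both
    misses = np.sum(~model & bench)         # benchmark flood missed
    false_alarms = np.sum(model & ~bench)   # modelled flood not observed
    hit_rate = hits / (hits + misses)
    false_alarm_ratio = false_alarms / (hits + false_alarms)
    return hit_rate, false_alarm_ratio
```

Capturing "two thirds to three quarters" of the benchmark flooded area corresponds to a hit rate of roughly 0.67 to 0.75, with the false-alarm ratio guarding against trivially over-predicting flood everywhere.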
NASA Astrophysics Data System (ADS)
Mirzazadeh, Abolfazl
2009-08-01
In most previous research the inflation rate has been considered constant and well known over the time horizon, although the future rate of inflation is inherently uncertain and unstable and is difficult to predict accurately. Therefore, a time-varying inventory model for deteriorating items with allowable shortages is developed in this paper. The inflation rates (internal and external) are time-dependent and the demand rate is inflation-proportional. The inventory level is described by differential equations over the time horizon, and the present value method is used. A numerical example is given to explain the results. Some particular cases that follow from the main problem are discussed and their results are compared with the main model using numerical examples. It is found that shortages increase considerably in comparison with the case without variable inflationary conditions.
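The present value method with a time-varying rate discounts a cash flow by the exponential of the integrated rate, PV = C * exp(-∫₀ᵗ r(s) ds). A small numerical sketch; the trapezoid rule and the constant-rate check are illustrative, and the paper's actual inflation functions are not reproduced:

```python
import math

def present_value(cashflow, r_of_t, t, n=1000):
    """Present value of a cash flow received at time t when the
    discount rate r(t) varies with time: the rate is integrated
    numerically (trapezoid rule) and exponentiated."""
    h = t / n
    integral = 0.5 * h * (r_of_t(0.0) + r_of_t(t))
    for k in range(1, n):
        integral += h * r_of_t(k * h)
    return cashflow * math.exp(-integral)
```

With a constant rate this reduces to the familiar continuous discounting formula C * exp(-r t); a time-dependent r(t) is what distinguishes the variable-inflation setting from the classical constant-inflation one.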
Probabilistic modelling of rainfall induced landslide hazard assessment
NASA Astrophysics Data System (ADS)
Kawagoe, S.; Kazama, S.; Sarukkalige, P. R.
2010-06-01
To evaluate the frequency and distribution of landslide hazards over Japan, this study uses a probabilistic model based on multiple logistic regression analysis. The study particularly concerns several important physical parameters, namely hydraulic, geographical and geological parameters, which are considered to be influential in the occurrence of landslides. Sensitivity analysis confirmed that the hydrological parameter (hydraulic gradient) is the most influential factor in the occurrence of landslides. Therefore, the hydraulic gradient is used as the main hydraulic parameter: a dynamic factor which includes the effect of heavy rainfall and its return period. Using the constructed spatial data sets, a multiple logistic regression model is applied and landslide hazard probability maps are produced showing the spatial-temporal distribution of landslide hazard probability over Japan. To represent the landslide hazard at different temporal scales, extreme precipitation with 5-year, 30-year, and 100-year return periods is used for the evaluation. The results show that the highest landslide hazard probability exists in the mountain ranges on the western side of Japan (Japan Sea side), including the Hida, Kiso, Iide and Asahi mountain ranges, the south sides of the Chugoku and Kyushu mountain ranges, the Dewa mountain range and the Hokuriku region. The landslide hazard probability maps developed in this study will assist authorities, policy makers and decision makers responsible for infrastructural planning and development, as they can identify landslide-susceptible areas and thus decrease landslide damage through proper preparation.
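Multiple logistic regression for hazard probability mapping can be sketched as follows. The gradient-ascent fit and single covariate are a simplified stand-in for the study's actual covariate set (hydraulic gradient, geography, geology); none of the coefficients below come from the paper:

```python
import numpy as np

def fit_logistic(X, y, iters=2000, lr=0.5):
    """Multiple logistic regression by gradient ascent on the
    log-likelihood: models occurrence probability from covariates."""
    X = np.column_stack([np.ones(len(X)), X])  # prepend intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))    # current fitted probabilities
        beta += lr * X.T @ (y - p) / len(y)    # ascend the score function
    return beta

def hazard_probability(beta, features):
    """Landslide hazard probability for one grid cell's covariates."""
    eta = beta[0] + np.dot(beta[1:], features)
    return 1.0 / (1.0 + np.exp(-eta))
```

Applying `hazard_probability` cell-by-cell over a gridded covariate data set is what turns the fitted regression into a hazard probability map.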
A conflict model for the international hazardous waste disposal dispute.
Hu, Kaixian; Hipel, Keith W; Fang, Liping
2009-12-15
A multi-stage conflict model is developed to analyze international hazardous waste disposal disputes. More specifically, the ongoing toxic waste conflicts are divided into two stages consisting of the dumping prevention and dispute resolution stages. The modeling and analyses, based on the methodology of graph model for conflict resolution (GMCR), are used in both stages in order to grasp the structure and implications of a given conflict from a strategic viewpoint. Furthermore, a specific case study is investigated for the Ivory Coast hazardous waste conflict. In addition to the stability analysis, sensitivity and attitude analyses are conducted to capture various strategic features of this type of complicated dispute.
Model Uncertainty, Earthquake Hazard, and the WGCEP-2002 Forecast
NASA Astrophysics Data System (ADS)
Page, M. T.; Carlson, J. M.
2005-12-01
Model uncertainty is prevalent in Probabilistic Seismic Hazard Analysis (PSHA) because the true mechanism generating risk is unknown. While it is well-understood how to incorporate parameter uncertainty in PSHA, model uncertainty is more difficult to incorporate due to the high degree of dependence between different earthquake-recurrence models. We find that the method used by the 2002 Working Group on California Earthquake Probabilities (WG02) to combine the probability distributions given by multiple models has several adverse effects on their result. In particular, taking a linear combination of the various models ignores issues of model dependence and leads to large uncertainties in the final hazard estimate. Furthermore, choosing model weights based on data can systematically bias the final probability distribution. The weighting scheme of the WG02 report also depends upon an arbitrary ordering of models. In addition to analyzing current statistical problems, we present alternative methods for rigorously incorporating model uncertainty into PSHA.
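The linear (logic-tree style) combination that the abstract critiques can be illustrated as follows; the rupture probabilities and weights are hypothetical numbers, not WG02 values.

```python
def combine_models(probs, weights):
    """Logic-tree style linear combination of rupture probabilities
    from alternative earthquake-recurrence models.

    Returns the weighted mean and the between-model spread
    (ensemble variance about the mean). Inputs are illustrative.
    """
    assert abs(sum(weights) - 1.0) < 1e-9
    mean = sum(w * p for w, p in zip(weights, probs))
    var = sum(w * (p - mean) ** 2 for w, p in zip(weights, probs))
    return mean, var

# Three alternative 30-year rupture probabilities (hypothetical):
mean, var = combine_models([0.10, 0.35, 0.60], [0.3, 0.4, 0.3])
```

Treating the weights as probabilities of independent models is exactly the step the authors question: it ignores dependence between the recurrence models and can leave a large spread in the final hazard estimate.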
Prediction accuracy and variable selection for penalized cause-specific hazards models.
Saadati, Maral; Beyersmann, Jan; Kopp-Schneider, Annette; Benner, Axel
2017-08-01
We consider modeling competing risks data in high dimensions using a penalized cause-specific hazards (CSHs) approach. CSHs have conceptual advantages that are useful for analyzing molecular data. First, working on hazards level can further understanding of the underlying biological mechanisms that drive transition hazards. Second, CSH models can be used to extend the multistate framework for high-dimensional data. The CSH approach is implemented by fitting separate proportional hazards models for each event type (iCS). In the high-dimensional setting, this might seem too complex and possibly prone to overfitting. Therefore, we consider an extension, namely "linking" the separate models by choosing penalty tuning parameters that in combination yield best prediction of the incidence of the event of interest (penCR). We investigate whether this extension is useful with respect to prediction accuracy and variable selection. The two approaches are compared to the subdistribution hazards (SDH) model, which is an established method that naturally achieves "linking" by working on incidence level, but loses interpretability of the covariate effects. Our simulation studies indicate that in many aspects, iCS is competitive to penCR and the SDH approach. There are some instances that speak in favor of linking the CSH models, for example, in the presence of opposing effects on the CSHs. We conclude that penalized CSH models are a viable solution for competing risks models in high dimensions. Linking the CSHs can be useful in some particular cases; however, simple models using separately penalized CSH are often justified. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
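The hazards-level view underlying the CSH approach can be sketched nonparametrically: each event type gets its own Nelson-Aalen cumulative hazard, with the other cause treated like censoring in the risk set. This toy estimator (with hypothetical data, no tie handling) is only an illustration of the concept, not the penalized regression the paper develops.

```python
def cause_specific_nelson_aalen(times, causes, cause):
    """Nelson-Aalen estimate of the cumulative cause-specific hazard.

    causes[i] is 0 for censoring, 1 or 2 for the competing event
    types. A toy sketch of the hazards-level view; assumes no ties.
    """
    data = sorted(zip(times, causes))
    n_at_risk = len(data)
    H, cum = [], 0.0
    for t, c in data:
        if c == cause:
            cum += 1.0 / n_at_risk  # increment d_k(t) / Y(t)
        n_at_risk -= 1              # subject leaves the risk set
        H.append((t, cum))
    return H

# Event times and causes (0 = censored) for 6 hypothetical subjects:
H1 = cause_specific_nelson_aalen([2, 3, 5, 7, 8, 9], [1, 2, 1, 0, 2, 1], cause=1)
```

Fitting one (penalized) Cox model per cause, as in iCS, is the regression analogue of estimating each of these cause-specific hazards separately.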
NASA Astrophysics Data System (ADS)
Verma, Rahul K.; Ogihara, Yuki; Kuwabara, Toshihiko; Chung, Kwansoo
2011-08-01
In this work, as non-proportional/non-monotonous deformation experiments, two-stage and tension-compression-tension uniaxial tests were performed, respectively, for a cold rolled ultra high strength dual phase steel sheet: DP780. Deformation behaviors under such deformation paths were found different than those of the ultra low carbon single phase steels observed by Verma et al. (Int. J. Plast. 2011, 82-101). To model the newly observed deformation behaviors, the combined type constitutive law previously proposed by Verma et al. (Int. J. Plast. 2011, 82-101) was successfully applied here. Permanent softening observed during reverse loading was properly characterized into the isotropic and kinematic hardening parts of the hardening law using tension-compression-tension test data. The cross effect observed in two-stage tests was also effectively incorporated into the constitutive law.
NASA Astrophysics Data System (ADS)
Manière, Charles; Lee, Geuntak; Olevsky, Eugene A.
The stability of the proportional-integral-derivative (PID) control of temperature in the spark plasma sintering (SPS) process is investigated. The PID regulations of this process are tested for different SPS tooling dimensions, physical parameters conditions, and areas of temperature control. It is shown that the PID regulation quality strongly depends on the heating time lag between the area of heat generation and the area of the temperature control. Tooling temperature rate maps are studied to reveal potential areas for highly efficient PID control. The convergence of the model and experiment indicates that even with non-optimal initial PID coefficients, it is possible to reduce the temperature regulation inaccuracy to less than 4 K by positioning the temperature control location in highly responsive areas revealed by the finite-element calculations of the temperature spatial distribution.
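A minimal sketch of PID temperature regulation on a first-order thermal plant illustrates the control loop discussed above; the plant model, time constant, and gains are illustrative assumptions and do not represent the SPS tooling studied.

```python
def simulate_pid(setpoint=100.0, kp=2.0, ki=0.5, kd=0.0,
                 tau=5.0, dt=0.1, steps=2000):
    """PI(D) control of a first-order plant dT/dt = (u - T) / tau.

    A crude stand-in for the tooling thermal response; all
    parameters are assumed for illustration.
    """
    T, integral, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - T
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv  # controller output
        T += (u - T) / tau * dt                    # plant update
        prev_err = err
    return T

final_T = simulate_pid()
```

A heating time lag between the heat source and the control point would appear here as a delay on the measured `T`, which is what degrades the regulation quality in the study.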
Hazard Response Modeling Uncertainty (A Quantitative Method)
1988-10-01
Accidental Release Model (HARM) for application to accidental spills at Titan II sites. A rocket exhaust diffusion model developed by the H.E. Cramer… Luna and Church (Reference 73), among others, show that the scatter in observed values of σ associated with each… [73. Luna, R.E., and H.W. Church, "A Comparison of Turbulence Intensity and Stability…", Report No. 79, Atmospheric Turbulence and Diffusion Laboratory, 1973.]
Agent-based Modeling with MATSim for Hazards Evacuation Planning
NASA Astrophysics Data System (ADS)
Jones, J. M.; Ng, P.; Henry, K.; Peters, J.; Wood, N. J.
2015-12-01
Hazard evacuation planning requires robust modeling tools and techniques, such as least cost distance or agent-based modeling, to gain an understanding of a community's potential to reach safety before event (e.g. tsunami) arrival. Least cost distance modeling provides a static view of the evacuation landscape with an estimate of travel times to safety from each location in the hazard space. With this information, practitioners can assess a community's overall ability for timely evacuation. More information may be needed if evacuee congestion creates bottlenecks in the flow patterns. Dynamic movement patterns are best explored with agent-based models that simulate movement of and interaction between individual agents as evacuees through the hazard space, reacting to potential congestion areas along the evacuation route. The multi-agent transport simulation model MATSim is an agent-based modeling framework that can be applied to hazard evacuation planning. Developed jointly by universities in Switzerland and Germany, MATSim is open-source software written in Java and freely available for modification or enhancement. We successfully used MATSim to illustrate tsunami evacuation challenges in two island communities in California, USA, that are impacted by limited escape routes. However, working with MATSim's data preparation, simulation, and visualization modules in an integrated development environment requires a significant investment of time to develop the software expertise to link the modules and run a simulation. To facilitate our evacuation research, we packaged the MATSim modules into a single application tailored to the needs of the hazards community. By exposing the modeling parameters of interest to researchers in an intuitive user interface and hiding the software complexities, we bring agent-based modeling closer to practitioners and provide access to the powerful visual and analytic information that this modeling can provide.
TsuPy: Computational robustness in Tsunami hazard modelling
NASA Astrophysics Data System (ADS)
Schäfer, Andreas M.; Wenzel, Friedemann
2017-05-01
Modelling wave propagation is the most essential part of assessing the risk and hazard of tsunami and storm surge events. Computational assessment of the variability of such events requires many simulations. Even today, most of these simulations are run on supercomputers due to the large amount of computation required. In this study, a simulation framework named TsuPy is introduced to quickly compute tsunami events on a personal computer. It uses the parallelized power of GPUs to accelerate computation. The system is tailored to the application of robust tsunami hazard and risk modelling. It links to geophysical models to simulate event sources. The system is tested and validated using various benchmarks and real-world case studies. In addition, the robustness criterion is assessed based on a sensitivity study comparing the error impact of various model elements, e.g. topo-bathymetric resolution, knowledge of Manning friction parameters and knowledge of the tsunami source itself. This sensitivity study is applied to inundation modelling of the 2011 Tohoku tsunami, showing that the major contributor to model uncertainty is in fact the representation of earthquake slip as part of the tsunami source profile. TsuPy provides a fast and reliable tool to quickly assess ocean hazards from tsunamis and thus builds the foundation for a globally uniform hazard and risk assessment for tsunamis.
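The wave-propagation core of such a solver can be sketched in one dimension with the linear shallow-water equations on a staggered grid; depth, domain size, and the Gaussian initial hump are illustrative assumptions, not TsuPy's actual (nonlinear, GPU-parallel) scheme.

```python
import math

def shallow_water_1d(depth=2000.0, length=400e3, nx=200, t_end=600.0):
    """Linear 1-D shallow-water propagation of a Gaussian sea-surface
    hump (a crude stand-in for a tsunami source). All parameters are
    illustrative."""
    g = 9.81
    dx = length / nx
    c = math.sqrt(g * depth)       # long-wave speed
    dt = 0.5 * dx / c              # CFL-stable time step
    eta = [math.exp(-((i * dx - length / 2) / 20e3) ** 2) for i in range(nx)]
    u = [0.0] * (nx + 1)           # velocities on cell faces
    t = 0.0
    while t < t_end:
        for i in range(1, nx):     # momentum: du/dt = -g d(eta)/dx
            u[i] -= g * dt * (eta[i] - eta[i - 1]) / dx
        for i in range(nx):        # continuity: d(eta)/dt = -h du/dx
            eta[i] -= depth * dt * (u[i + 1] - u[i]) / dx
        t += dt
    return eta

eta = shallow_water_1d()
```

The initial hump splits into two outgoing waves of roughly half amplitude; a GPU implementation parallelizes the two inner loops over grid cells.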
Shadman, Ramin; Poole, Jeanne E; Dardas, Todd F; Mozaffarian, Dariush; Cleland, John G F; Swedberg, Karl; Maggioni, Aldo P; Anand, Inder S; Carson, Peter E; Miller, Alan B; Levy, Wayne C
2015-10-01
Patients with heart failure are at increased risk of both sudden death and pump failure death. Strategies to better identify those who have the greatest net benefit from implantable cardioverter-defibrillator (ICD) implantation could reduce morbidity and maximize the cost-effectiveness of ICDs. We aimed to identify baseline variables in patients with cardiomyopathy that are independently associated with a disproportionate fraction of mortality risk attributable to sudden death vs nonsudden death. We used data from 9885 patients with heart failure without ICDs, of whom 2552 died during an average follow-up of 2.3 years. Using commonly available baseline clinical and demographic variables, we developed a multivariate regression model to identify variables associated with a disproportionate risk of sudden death. We confirmed that lower ejection fraction and better functional class were associated with a greater proportion of mortality due to sudden death. Younger age, male sex, and higher body mass index were independently associated with a greater proportional risk of sudden death, while diabetes mellitus, hyper/hypotension, higher creatinine level, and hyponatremia were associated with a disproportionately lower risk of sudden death. The use of several heart failure medications, left ventricular end-diastolic dimension, and NT-pro brain natriuretic peptide concentrations were not associated with a disproportionate risk of sudden death. Several easily obtained baseline demographic and clinical variables, beyond ejection fraction and New York Heart Association functional class, are independently associated with a disproportionately increased risk of sudden death. Further investigation is needed to assess whether this novel predictive method can be used to target the use of lifesaving therapies to populations who will derive the greatest mortality benefit. Copyright © 2015 Heart Rhythm Society. All rights reserved.
ERIC Educational Resources Information Center
Wright, Vince
2014-01-01
Pirie and Kieren (1989 "For the learning of mathematics", 9(3)7-11, 1992 "Journal of Mathematical Behavior", 11, 243-257, 1994a "Educational Studies in Mathematics", 26, 61-86, 1994b "For the Learning of Mathematics":, 14(1)39-43) created a model (P-K) that describes a dynamic and recursive process by which…
Wang, Jun-Wei; Wu, Huai-Ning; Li, Han-Xiong
2012-06-01
In this paper, a distributed fuzzy control design based on Proportional-spatial Derivative (P-sD) is proposed for the exponential stabilization of a class of nonlinear spatially distributed systems described by parabolic partial differential equations (PDEs). Initially, a Takagi-Sugeno (T-S) fuzzy parabolic PDE model is proposed to accurately represent the nonlinear parabolic PDE system. Then, based on the T-S fuzzy PDE model, a novel distributed fuzzy P-sD state feedback controller is developed by combining the PDE theory and the Lyapunov technique, such that the closed-loop PDE system is exponentially stable with a given decay rate. The sufficient condition on the existence of an exponentially stabilizing fuzzy controller is given in terms of a set of spatial differential linear matrix inequalities (SDLMIs). A recursive algorithm based on the finite-difference approximation and the linear matrix inequality (LMI) techniques is also provided to solve these SDLMIs. Finally, the developed design methodology is successfully applied to the feedback control of the Fitz-Hugh-Nagumo equation.
Toward Building a New Seismic Hazard Model for Mainland China
NASA Astrophysics Data System (ADS)
Rong, Y.; Xu, X.; Chen, G.; Cheng, J.; Magistrale, H.; Shen, Z.
2015-12-01
At present, the only publicly available seismic hazard model for mainland China was generated by Global Seismic Hazard Assessment Program in 1999. We are building a new seismic hazard model by integrating historical earthquake catalogs, geological faults, geodetic GPS data, and geology maps. To build the model, we construct an Mw-based homogeneous historical earthquake catalog spanning from 780 B.C. to present, create fault models from active fault data using the methodology recommended by Global Earthquake Model (GEM), and derive a strain rate map based on the most complete GPS measurements and a new strain derivation algorithm. We divide China and the surrounding regions into about 20 large seismic source zones based on seismotectonics. For each zone, we use the tapered Gutenberg-Richter (TGR) relationship to model the seismicity rates. We estimate the TGR a- and b-values from the historical earthquake data, and constrain corner magnitude using the seismic moment rate derived from the strain rate. From the TGR distributions, 10,000 to 100,000 years of synthetic earthquakes are simulated. Then, we distribute small and medium earthquakes according to locations and magnitudes of historical earthquakes. Some large earthquakes are distributed on active faults based on characteristics of the faults, including slip rate, fault length and width, and paleoseismic data, and the rest to the background based on the distributions of historical earthquakes and strain rate. We evaluate available ground motion prediction equations (GMPE) by comparison to observed ground motions. To apply appropriate GMPEs, we divide the region into active and stable tectonics. The seismic hazard will be calculated using the OpenQuake software developed by GEM. To account for site amplifications, we construct a site condition map based on geology maps. The resulting new seismic hazard map can be used for seismic risk analysis and management, and business and land-use planning.
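The synthetic-catalog step above relies on drawing magnitudes from a tapered Gutenberg-Richter law. Because the TGR survivor function is the product of a Pareto survivor and an exponential-taper survivor, a draw can be taken as the minimum of two independent variates. The threshold and corner moments below are illustrative assumptions, not the study's fitted zone parameters.

```python
import math
import random

def sample_tgr_moment(m_t, beta, m_c, rng):
    """One seismic moment from a tapered Gutenberg-Richter law with
    threshold moment m_t, index beta, and corner moment m_c."""
    pareto = m_t * rng.random() ** (-1.0 / beta)   # Pareto tail
    taper = m_t - m_c * math.log(rng.random())     # exponential taper
    return min(pareto, taper)

def moment_to_mw(m0):
    """Convert seismic moment (N*m) to moment magnitude."""
    return (2.0 / 3.0) * (math.log10(m0) - 9.05)

# Illustrative zone: threshold near Mw 5.3, corner near Mw 8.3.
rng = random.Random(42)
mags = [moment_to_mw(sample_tgr_moment(1.1e17, 0.65, 3.5e21, rng))
        for _ in range(10000)]
```

The taper suppresses magnitudes above the corner, which is how the seismic moment rate from the strain map constrains the largest simulated events.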
Self-organization, the cascade model, and natural hazards
Turcotte, Donald L.; Malamud, Bruce D.; Guzzetti, Fausto; Reichenbach, Paola
2002-01-01
We consider the frequency-size statistics of two natural hazards, forest fires and landslides. Both appear to satisfy power-law (fractal) distributions to a good approximation under a wide variety of conditions. Two simple cellular-automata models have been proposed as analogs for this observed behavior, the forest fire model for forest fires and the sand pile model for landslides. The behavior of these models can be understood in terms of a self-similar inverse cascade. For the forest fire model the cascade consists of the coalescence of clusters of trees; for the sand pile model the cascade consists of the coalescence of metastable regions. PMID:11875206
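A minimal version of the forest fire cellular automaton mentioned above can be simulated directly: trees are planted at random, and an occasional spark burns the entire connected cluster it lands on, whose size is recorded. Grid size and rates are small illustrative choices, not the parameters of the cited models.

```python
import random
from collections import deque

def forest_fire_model(n=30, grow_per_spark=200, sparks=300, seed=1):
    """Simplified Drossel-Schwabl forest-fire cellular automaton.

    Between sparks, trees are planted on random cells; a spark on a
    tree burns its whole 4-connected cluster. Returns burned sizes.
    """
    rng = random.Random(seed)
    grid = [[False] * n for _ in range(n)]
    sizes = []
    for _ in range(sparks):
        for _ in range(grow_per_spark):
            i, j = rng.randrange(n), rng.randrange(n)
            grid[i][j] = True
        i, j = rng.randrange(n), rng.randrange(n)
        if grid[i][j]:
            q, burned = deque([(i, j)]), 0  # flood-fill the cluster
            grid[i][j] = False
            while q:
                x, y = q.popleft()
                burned += 1
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    u, v = x + dx, y + dy
                    if 0 <= u < n and 0 <= v < n and grid[u][v]:
                        grid[u][v] = False
                        q.append((u, v))
            sizes.append(burned)
    return sizes

sizes = forest_fire_model()
```

In larger simulations the recorded cluster sizes approximate the power-law frequency-size statistics the abstract describes.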
The 2014 United States National Seismic Hazard Model
Petersen, Mark D.; Moschetti, Morgan P.; Powers, Peter; Mueller, Charles; Haller, Kathleen; Frankel, Arthur; Zeng, Yuehua; Rezaeian, Sanaz; Harmsen, Stephen; Boyd, Oliver; Field, Ned; Chen, Rui; Rukstales, Kenneth S.; Luco, Nicolas; Wheeler, Russell; Williams, Robert; Olsen, Anna H.
2015-01-01
New seismic hazard maps have been developed for the conterminous United States using the latest data, models, and methods available for assessing earthquake hazard. The hazard models incorporate new information on earthquake rupture behavior observed in recent earthquakes; fault studies that use both geologic and geodetic strain rate data; earthquake catalogs through 2012 that include new assessments of locations and magnitudes; earthquake adaptive smoothing models that more fully account for the spatial clustering of earthquakes; and 22 ground motion models, some of which consider more than double the shaking data applied previously. Alternative input models account for larger earthquakes, more complicated ruptures, and more varied ground shaking estimates than assumed in earlier models. The ground motions, for levels applied in building codes, differ from the previous version by less than ±10% over 60% of the country, but can differ by ±50% in localized areas. The models are incorporated in insurance rates, risk assessments, and as input into the U.S. building code provisions for earthquake ground shaking.
Sun, Mei zhen; van Rijn, Clementina M; Liu, Yu xi; Wang, Ming zheng
2002-09-01
Rational polypharmacy with antiepileptic drugs is one treatment strategy for refractory epilepsy. To investigate whether it may be rational to combine carbamazepine (CBZ) and valproate (VPA), we tested both the anticonvulsant effect and the toxicity of combinations of CBZ and VPA in different dose proportions. The CBZ/VPA dose ratios were 1:6.66, 1:10, 1:13.3 and 1:20, respectively. The median effective doses of monotherapy and polytherapy in the maximal electroshock seizure test and the median lethal doses (death within 3 days after administration) were determined. These parameters were analyzed with the isobologram method. We found that the anticonvulsant effect of all combinations was additive. The toxicity of combinations 1, 2 and 3 (CBZ/VPA 1:6.66, 1:10 and 1:13.3) was additive, but the toxicity of combination 4 (CBZ/VPA 1:20) was infra-additive. Thus, in mice, using this model, a combination of CBZ/VPA 1:20 has an advantage over each of the drugs alone.
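The isobologram analysis reduces to an interaction index: the sum of each combination dose expressed as a fraction of that drug's single-agent median dose. The doses below are hypothetical illustrations, not the study's measured ED50/LD50 values.

```python
def interaction_index(d1, d2, ed50_1, ed50_2):
    """Isobologram interaction index for a two-drug combination.

    d1, d2: doses of each drug in the combination producing the
    endpoint; ed50_1, ed50_2: median doses of each drug alone.
    index == 1 additive, < 1 supra-additive, > 1 infra-additive
    (for toxicity endpoints, > 1 means less toxic than additive).
    All doses here are hypothetical.
    """
    return d1 / ed50_1 + d2 / ed50_2

# A hypothetical CBZ/VPA 1:20 combination:
idx = interaction_index(d1=4.0, d2=80.0, ed50_1=10.0, ed50_2=150.0)
```

Plotting the combination point against the straight line joining the two single-drug median doses gives the graphical isobologram equivalent of this index.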
Modeling of Marine Natural Hazards in the Lesser Antilles
NASA Astrophysics Data System (ADS)
Zahibo, Narcisse; Nikolkina, Irina; Pelinovsky, Efim
2010-05-01
The Caribbean Sea countries are often affected by various marine natural hazards: hurricanes and cyclones, tsunamis and flooding. The historical data on marine natural hazards for the Lesser Antilles and, especially, for Guadeloupe are presented briefly. Numerical simulations of several historical tsunamis in the Caribbean Sea (the 1755 Lisbon trans-Atlantic tsunami, the 1867 Virgin Island earthquake tsunami, and the 2003 Montserrat volcano tsunami) are performed within the framework of the nonlinear shallow-water theory. Numerical results demonstrate the importance of the real bathymetry variability with respect to the direction of propagation of the tsunami wave and its characteristics. The prognostic tsunami wave height distribution along the Caribbean coast is computed using various forms of seismic and hydrodynamic sources. These results are used to estimate the far-field potential for tsunami hazards at coastal locations in the Caribbean Sea. The nonlinear shallow-water theory is also applied to model storm surges induced by tropical cyclones, in particular, cyclones "Lilli" in 2002 and "Dean" in 2007. Obtained results are compared with observed data. The numerical models have been tested against known analytical solutions of the nonlinear shallow-water wave equations. Obtained results are described in detail in [1-7]. References [1] N. Zahibo and E. Pelinovsky, Natural Hazards and Earth System Sciences, 1, 221 (2001). [2] N. Zahibo, E. Pelinovsky, A. Yalciner, A. Kurkin, A. Koselkov and A. Zaitsev, Oceanologica Acta, 26, 609 (2003). [3] N. Zahibo, E. Pelinovsky, A. Kurkin and A. Kozelkov, Science of Tsunami Hazards, 21, 202 (2003). [4] E. Pelinovsky, N. Zahibo, P. Dunkley, M. Edmonds, R. Herd, T. Talipova, A. Kozelkov and I. Nikolkina, Science of Tsunami Hazards, 22, 44 (2004). [5] N. Zahibo, E. Pelinovsky, E. Okal, A. Yalciner, C. Kharif, T. Talipova and A. Kozelkov, Science of Tsunami Hazards, 23, 25 (2005). [6] N. Zahibo, E. Pelinovsky, T. Talipova, A. Rabinovich, A. Kurkin and I
Three multimedia models used at hazardous and radioactive waste sites
1996-01-01
The report provides an approach for evaluating and critically reviewing the capabilities of multimedia models. The study focused on three specific models: MEPAS version 3.0, MMSOILS Version 2.2, and PRESTO-EPA-CPG Version 2.0. The approach to model review advocated in the study is directed to technical staff responsible for identifying, selecting and applying multimedia models for use at sites containing radioactive and hazardous materials. In the report, restrictions associated with the selection and application of multimedia models for sites contaminated with radioactive and mixed wastes are highlighted.
Current Methods of Natural Hazards Communication used within Catastrophe Modelling
NASA Astrophysics Data System (ADS)
Dawber, C.; Latchman, S.
2012-04-01
In the field of catastrophe modelling, natural hazards need to be explained every day to (re)insurance professionals so that they may understand estimates of the loss potential of their portfolio. The effective communication of natural hazards to city professionals requires different strategies to be taken depending on the audience, their prior knowledge and respective backgrounds. It is best to have at least three sets of tools in your arsenal for a specific topic, 1) an illustration/animation, 2) a mathematical formula and 3) a real world case study example. This multi-faceted approach will be effective for those that learn best by pictorial means, mathematical means or anecdotal means. To show this we will use a set of real examples employed in the insurance industry of how different aspects of natural hazards and the uncertainty around them are explained to city professionals. For example, explaining the different modules within a catastrophe model such as the hazard, vulnerability and loss modules. We highlight how recent technology such as 3d plots, video recording and Google Earth maps, when used properly can help explain concepts quickly and easily. Finally we also examine the pitfalls of using overly-complicated visualisations and in general how counter-intuitive deductions may be made.
Probabilistic modelling of rainfall induced landslide hazard assessment
NASA Astrophysics Data System (ADS)
Kawagoe, S.; Kazama, S.; Sarukkalige, P. R.
2010-01-01
To evaluate the frequency and distribution of landslide hazards over Japan, this study uses a probabilistic model based on multiple logistic regression analysis. The study considers several important physical parameters, such as hydraulic, geographical and geological parameters, which are thought to influence the occurrence of landslides. Sensitivity analysis confirmed that a hydrological parameter, the hydraulic gradient, is the most influential factor in the occurrence of landslides. Therefore, the hydraulic gradient is used as the main hydraulic parameter; it is a dynamic factor that captures the effect of heavy rainfall events and their return periods. Using the constructed spatial data sets, a multiple logistic regression model is applied and landslide susceptibility maps are produced showing the spatial-temporal distribution of landslide hazard susceptibility over Japan. To represent the susceptibility on different temporal scales, extreme precipitation with 5-year, 30-year, and 100-year return periods is used for the evaluation. The results show that the highest landslide hazard susceptibility exists in the mountain ranges on the western side of Japan (Japan Sea side), including the Hida, Kiso, Iide and Asahi mountain ranges, the south side of the Chugoku mountains, the south side of the Kyushu mountains, the Dewa mountains and the Hokuriku region. The landslide hazard susceptibility maps developed in this study will assist authorities, policy makers and decision makers responsible for infrastructural planning and development, as they can identify landslide-susceptible areas and thus decrease landslide damage through proper preparation.
Seismic hazard assessment over time: Modelling earthquakes in Taiwan
NASA Astrophysics Data System (ADS)
Chan, Chung-Han; Wang, Yu; Wang, Yu-Ju; Lee, Ya-Ting
2017-04-01
To assess the seismic hazard with temporal change in Taiwan, we develop a new approach, combining both the Brownian Passage Time (BPT) model and the Coulomb stress change, and implement the seismogenic source parameters by the Taiwan Earthquake Model (TEM). The BPT model was adopted to describe the rupture recurrence intervals of the specific fault sources, together with the time elapsed since the last fault-rupture to derive their long-term rupture probability. We also evaluate the short-term seismicity rate change based on the static Coulomb stress interaction between seismogenic sources. By considering above time-dependent factors, our new combined model suggests an increased long-term seismic hazard in the vicinity of active faults along the western Coastal Plain and the Longitudinal Valley, where active faults have short recurrence intervals and long elapsed time since their last ruptures, and/or short-term elevated hazard levels right after the occurrence of large earthquakes due to the stress triggering effect. The stress enhanced by the February 6th, 2016, Meinong ML 6.6 earthquake also significantly increased rupture probabilities of several neighbouring seismogenic sources in Southwestern Taiwan and raised hazard level in the near future. Our approach draws on the advantage of incorporating long- and short-term models, to provide time-dependent earthquake probability constraints. Our time-dependent model considers more detailed information than any other published models. It thus offers decision-makers and public officials an adequate basis for rapid evaluations of and response to future emergency scenarios such as victim relocation and sheltering.
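The long-term, time-dependent part of the approach above can be sketched with the BPT (inverse Gaussian) renewal model: the conditional probability of rupture in a forecast window given the time elapsed since the last rupture. The fault parameters below are hypothetical, not TEM source values.

```python
import math

def bpt_cdf(t, mu, alpha):
    """CDF of the Brownian Passage Time (inverse Gaussian) distribution
    with mean recurrence mu and aperiodicity alpha."""
    if t <= 0:
        return 0.0
    u1 = (math.sqrt(t / mu) - math.sqrt(mu / t)) / alpha
    u2 = (math.sqrt(t / mu) + math.sqrt(mu / t)) / alpha
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return phi(u1) + math.exp(2.0 / alpha ** 2) * phi(-u2)

def conditional_rupture_prob(elapsed, window, mu, alpha):
    """P(rupture in [elapsed, elapsed + window] | no rupture so far)."""
    f0 = bpt_cdf(elapsed, mu, alpha)
    return (bpt_cdf(elapsed + window, mu, alpha) - f0) / (1.0 - f0)

# Hypothetical fault: 250-year mean recurrence, aperiodicity 0.5,
# 180 years elapsed, 50-year forecast window.
p = conditional_rupture_prob(180.0, 50.0, 250.0, 0.5)
```

The Coulomb stress interaction then perturbs this long-term probability upward or downward after nearby large earthquakes.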
Simulation meets reality: Chemical hazard models in real world use
Newsom, D.E.
1992-01-01
In 1989 the US Department of Transportation (DOT), Federal Emergency Management Agency (FEMA), and US Environmental Protection Agency (EPA) released a set of models for the analysis of chemical hazards on personal computers. The models, known collectively as ARCHIE (Automated Resource for Chemical Hazard Incident Evaluation), have been distributed free of charge to thousands of emergency planners and analysts in state governments, Local Emergency Planning Committees (LEPCs), and industry. Under DOT and FEMA sponsorship, Argonne National Laboratory (ANL) conducted workshops in 1990 and 1991 to train federal, state, and local government and industry personnel, both end users and other trainers, in the use of the models. As a result of these distribution and training efforts, ARCHIE has received substantial use by state, local, and industrial emergency management personnel.
Rockfall hazard analysis using LiDAR and spatial modeling
NASA Astrophysics Data System (ADS)
Lan, Hengxing; Martin, C. Derek; Zhou, Chenghu; Lim, Chang Ho
2010-05-01
Rockfalls have been significant geohazards along the Canadian Class 1 Railways (CN Rail and CP Rail) since their construction in the late 1800s. These rockfalls cause damage to infrastructure, interruption of business, and environmental impacts, and their occurrence varies both spatially and temporally. The proactive management of these rockfall hazards requires enabling technologies. This paper discusses a hazard assessment strategy for rockfalls along a section of a Canadian railway using LiDAR and spatial modeling. LiDAR provides accurate topographical information of the source area of rockfalls and along their paths. Spatial modeling was conducted using Rockfall Analyst, a three dimensional extension to GIS, to determine the characteristics of the rockfalls in terms of travel distance, velocity and energy. Historical rockfall records were used to calibrate the physical characteristics of the rockfall processes. The results based on a high-resolution digital elevation model from a LiDAR dataset were compared with those based on a coarse digital elevation model. A comprehensive methodology for rockfall hazard assessment is proposed which takes into account the characteristics of source areas, the physical processes of rockfalls and the spatial attribution of their frequency and energy.
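As a lumped-mass illustration of the energy calculation such spatial rockfall models perform along each path, the kinetic energy at a point is the potential energy released minus frictional losses over the travel distance. This is a simplification of what a 3-D tool like Rockfall Analyst computes, and the friction coefficient is an assumed, illustrative value.

```python
def rockfall_energy(mass_kg, drop_m, path_m, friction_coeff=0.4, g=9.81):
    """Kinetic energy (J) of a falling rock after descending drop_m
    along a path of length path_m, with a lumped friction loss.

    A sliding-block simplification; friction_coeff is an assumption.
    """
    energy = mass_kg * g * (drop_m - friction_coeff * path_m)
    return max(energy, 0.0)  # the rock stops where losses exceed the drop

# A hypothetical 500 kg block descending 60 m over a 100 m path:
e = rockfall_energy(500.0, drop_m=60.0, path_m=100.0)
```

Evaluating this along LiDAR-derived terrain profiles gives the spatial distribution of travel distance, velocity and energy described above.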
Disproportionate Proximity to Environmental Health Hazards: Methods, Models, and Measurement
Maantay, Juliana A.; Brender, Jean D.
2011-01-01
We sought to provide a historical overview of methods, models, and data used in the environmental justice (EJ) research literature to measure proximity to environmental hazards and potential exposure to their adverse health effects. We explored how the assessment of disproportionate proximity and exposure has evolved from comparing the prevalence of minority or low-income residents in geographic entities hosting pollution sources and discrete buffer zones to more refined techniques that use continuous distances, pollutant fate-and-transport models, and estimates of health risk from toxic exposure. We also reviewed analytical techniques used to determine the characteristics of people residing in areas potentially exposed to environmental hazards and emerging geostatistical techniques that are more appropriate for EJ analysis than conventional statistical methods. We concluded by providing several recommendations regarding future research and data needs for EJ assessment that would lead to more reliable results and policy solutions. PMID:21836113
Bowie, Paul; Price, Julie; Hepworth, Neil; Dinwoodie, Mark; McKay, John
2015-01-01
Objectives To analyse a medical protection organisation's database to identify hazards related to general practice systems for ordering laboratory tests, managing test results and communicating test result outcomes to patients. To integrate these data with other published evidence sources to inform design of a systems-based conceptual model of related hazards. Design A retrospective database analysis. Setting General practices in the UK and Ireland. Participants 778 UK and Ireland general practices participating in a medical protection organisation's clinical risk self-assessment (CRSA) programme from January 2008 to December 2014. Main outcome measures Proportion of practices with system risks; categorisation of identified hazards; most frequently occurring hazards; development of a conceptual model of hazards; and potential impacts on health, well-being and organisational performance. Results CRSA visits were undertaken to 778 UK and Ireland general practices of which a range of systems hazards were recorded across the laboratory test ordering and results management systems in 647 practices (83.2%). A total of 45 discrete hazard categories were identified with a mean of 3.6 per practice (SD=1.94). The most frequently occurring hazard was the inadequate process for matching test requests and results received (n=350, 54.1%). Of the 1604 instances where hazards were recorded, the most frequent was at the ‘postanalytical test stage’ (n=702, 43.8%), followed closely by ‘communication outcomes issues’ (n=628, 39.1%). Conclusions Based on arguably the largest data set currently available on the subject matter, our study findings shed new light on the scale and nature of hazards related to test results handling systems, which can inform future efforts to research and improve the design and reliability of these systems. PMID:26614621
Recent Experiences in Aftershock Hazard Modelling in New Zealand
NASA Astrophysics Data System (ADS)
Gerstenberger, M.; Rhoades, D. A.; McVerry, G.; Christophersen, A.; Bannister, S. C.; Fry, B.; Potter, S.
2014-12-01
The occurrence of several sequences of earthquakes in New Zealand in the last few years has meant that GNS Science has gained significant recent experience in aftershock hazard modelling and forecasting. First was the Canterbury sequence of events, which began in 2010 and included the destructive Christchurch earthquake of February 2011. This sequence is occurring in what was a moderate-to-low hazard region of the National Seismic Hazard Model (NSHM): the model on which the building design standards are based. With the expectation that the sequence would produce a 50-year hazard estimate exceeding the existing building standard, we developed a time-dependent model that combined short-term (STEP and ETAS) and longer-term (EEPAS) clustering with time-independent models. This forecast was combined with the NSHM to produce a forecast of the hazard for the next 50 years. It has been used to revise building design standards for the region and has contributed to the planning of the rebuilding of Christchurch in multiple respects. An important contribution to this model comes from the inclusion of EEPAS, which allows for clustering on the scale of decades. EEPAS is based on three empirical regressions that relate the magnitudes, times of occurrence, and locations of major earthquakes to regional precursory scale increases in the magnitude and rate of occurrence of minor earthquakes. A second important contribution comes from the long-term rate to which seismicity is expected to return within 50 years. With little seismicity in the region in historical times, a controlling factor in this rate is whether or not it is based on a declustered catalog. This epistemic uncertainty in the model was allowed for by using forecasts from both declustered and non-declustered catalogs. With two additional moderate sequences in the capital region of New Zealand in the last year, we have continued to refine our forecasting techniques, including the use of potential scenarios based on the aftershock
Helping Children to Model Proportionally in Group Argumentation: Overcoming the "Constant Sum" Error
ERIC Educational Resources Information Center
Misailidou, Christina; Williams, Julian
2004-01-01
We examine eight cases of argumentation in relation to a proportional reasoning task--the "Paint" task--in which the "constant sum" strategy was a significant factor. Our analysis of argument follows Toulmin's (1958) approach and in the discourse we trace factors which seem to facilitate changes in argument. We find that the arguments of "constant…
Proportionality: Modeling the Future. NASA Connect: Program 6 in the 1999-2000 Series.
ERIC Educational Resources Information Center
National Aeronautics and Space Administration, Hampton, VA. Langley Research Center.
This teaching unit is designed to help students in grades 4-8 explore the concepts of scaling and proportion in the context of spacecraft design. The units in this series have been developed to enhance and enrich mathematics, science, and technology education and to accommodate different teaching and learning styles. Each unit consists of a…
Multivariate models for prediction of human skin sensitization hazard.
Strickland, Judy; Zang, Qingda; Paris, Michael; Lehmann, David M; Allen, David; Choksi, Neepa; Matheson, Joanna; Jacobs, Abigail; Casey, Warren; Kleinstreuer, Nicole
2017-03-01
One of the Interagency Coordinating Committee on the Validation of Alternative Methods' (ICCVAM) top priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary to produce skin sensitization suggests that no single alternative method will replace the currently accepted animal tests. ICCVAM is evaluating an integrated approach to testing and assessment based on the adverse outcome pathway for skin sensitization that uses machine learning approaches to predict human skin sensitization hazard. We combined data from three in chemico or in vitro assays - the direct peptide reactivity assay (DPRA), human cell line activation test (h-CLAT) and KeratinoSens™ assay - six physicochemical properties and an in silico read-across prediction of skin sensitization hazard into 12 variable groups. The variable groups were evaluated using two machine learning approaches, logistic regression and support vector machine, to predict human skin sensitization hazard. Models were trained on 72 substances and tested on an external set of 24 substances. The six models (three logistic regression and three support vector machine) with the highest accuracy (92%) used: (1) DPRA, h-CLAT and read-across; (2) DPRA, h-CLAT, read-across and KeratinoSens; or (3) DPRA, h-CLAT, read-across, KeratinoSens and log P. The models performed better at predicting human skin sensitization hazard than the murine local lymph node assay (accuracy 88%), any of the alternative methods alone (accuracy 63-79%) or test batteries combining data from the individual methods (accuracy 75%). These results suggest that computational methods are promising tools to effectively identify potential human skin sensitizers without animal testing. Published 2016. This article has been contributed to by US Government employees and their work is in the public domain in the USA.
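The combination step can be illustrated with a minimal logistic-regression sketch over synthetic binary assay outcomes; the data, weights, and learning rate below are invented stand-ins, not the ICCVAM training set or toolchain:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=5000):
    """Fit a logistic regression by batch gradient ascent on the
    log-likelihood. Purely illustrative of combining binary assay
    outcomes into one hazard probability."""
    X = np.column_stack([np.ones(len(X)), X])  # add intercept column
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)
    return w

def predict_proba(w, X):
    X = np.column_stack([np.ones(len(X)), X])
    return 1.0 / (1.0 + np.exp(-X @ w))

# Synthetic stand-ins for three binary assay calls (e.g. DPRA, h-CLAT,
# read-across); the "true" weights generating the labels are made up.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 3)).astype(float)
logits = -1.5 + 2.0 * X[:, 0] + 1.5 * X[:, 1] + 0.3 * X[:, 2]
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-logits))).astype(float)

w = fit_logistic(X, y)
acc = np.mean((predict_proba(w, X) > 0.5) == y)
```

The fitted weights then rank new substances by predicted sensitization probability, which is the role the logistic-regression arm of the evaluated models plays.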
NASA Astrophysics Data System (ADS)
Bajo Sanchez, Jorge V.
This dissertation is composed of an introductory chapter and three papers about vulnerability and volcanic hazard maps, with emphasis on lahars. The introductory chapter reviews definitions of the term vulnerability by the social and natural hazard communities and provides a new definition of hazard vulnerability that includes social and natural hazard factors. The first paper explains how the Community Volcanic Hazard Map (CVHM) is used for vulnerability analysis and explains in detail a new methodology to obtain valuable information about ethnophysiographic differences, hazards, and landscape knowledge of communities in the area of interest: the Canton Buenos Aires, situated on the northern flank of the Santa Ana (Ilamatepec) Volcano, El Salvador. The second paper is about creating a lahar hazard map in data-poor environments by generating a landslide inventory and obtaining volumes of dry material that could potentially be carried by lahars. The third paper introduces an innovative lahar hazard map integrating information generated by the previous two papers. It shows the differences between hazard maps created by the communities and by experts, both visually and quantitatively. This new, integrated hazard map was presented to the community with positive feedback and acceptance. The dissertation concludes with a summary chapter on the results and recommendations.
Mathematical-statistical models of generated hazardous hospital solid waste.
Awad, A R; Obeidat, M; Al-Shareef, M
2004-01-01
This research work was carried out under the assumption that wastes generated from hospitals in Irbid, Jordan were hazardous. The hazardous and non-hazardous wastes generated from the different divisions in the three hospitals under consideration were not separated during the collection process. Three hospitals, Princess Basma hospital (public), Princess Bade'ah hospital (teaching), and Ibn Al-Nafis hospital (private) in Irbid were selected for this study. The research work took into account the amounts of solid waste accumulated from each division and also determined the total amount generated from each hospital. The generation rates were determined (kilogram per patient, per day; kilogram per bed, per day) for the three hospitals. These generation rates were compared with similar hospitals in Europe. The evaluation suggested that the current management of these wastes in the three studied hospitals needs revision, as these hospitals do not follow the waste disposal methods practiced in developed countries that would reduce risk to human health and the environment. Statistical analysis was carried out to develop models for the prediction of the quantity of waste generated at each hospital (public, teaching, private). In these models, the number of patients, number of beds, and type of hospital were revealed to be significant factors in the quantity of waste generated. Multiple regressions were also used to estimate the quantities of wastes generated from similar divisions in the three hospitals (surgery, internal diseases, and maternity).
Babapour, R; Naghdi, R; Ghajar, I; Ghodsi, R
2015-07-01
The rock proportion of subsoil directly influences the cost of embankment in forest road construction. Therefore, developing a reliable framework for rock ratio estimation prior to road planning could lead to lighter excavation and lower-cost operations. Rock proportion was predicted by statistical analyses applying an Artificial Neural Network (ANN) in MATLAB and five link functions of ordinal logistic regression (OLR), according to rock type and terrain slope properties. In addition to bed rock and slope maps, more than 100 rock proportion samples, observed by geologists, were collected from every available bed rock in every slope class. Four predictive models were developed for rock proportion, employing the independent variables and applying both the selected probit link function of OLR and Layer Recurrent and feed-forward back-propagation neural networks. In the ANN, different numbers of neurons were considered for the hidden layer(s). Goodness-of-fit measures showed that the ANN models produced better results than OLR, with R² = 0.72 and Root Mean Square Error = 0.42. Furthermore, in order to show the applicability of the proposed approach, and to illustrate the variability of rock proportion resulting from the model application, the optimum models were applied to a mountainous forest where a forest road network had been constructed in the past.
Development of hazard-compatible building fragility and vulnerability models
Karaca, E.; Luco, N.
2008-01-01
We present a methodology for transforming the structural and non-structural fragility functions in HAZUS into a format that is compatible with conventional seismic hazard analysis information. The methodology makes use of the building capacity (or pushover) curves and related building parameters provided in HAZUS. Instead of the capacity spectrum method applied in HAZUS, building response is estimated by inelastic response history analysis of corresponding single-degree-of-freedom systems under a large number of earthquake records. Statistics of the building response are used with the damage state definitions from HAZUS to derive fragility models conditioned on spectral acceleration values. Using the developed fragility models for structural and nonstructural building components, with corresponding damage state loss ratios from HAZUS, we also derive building vulnerability models relating spectral acceleration to repair costs. Whereas in HAZUS the structural and nonstructural damage states are treated as if they are independent, our vulnerability models are derived assuming "complete" nonstructural damage whenever the structural damage state is complete. We show the effects of considering this dependence on the final vulnerability models. The use of spectral acceleration (at selected vibration periods) as the ground motion intensity parameter, coupled with the careful treatment of uncertainty, makes the new fragility and vulnerability models compatible with conventional seismic hazard curves and hence useful for extensions to probabilistic damage and loss assessment.
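The fragility models described above give the probability of reaching a damage state conditioned on spectral acceleration; the standard lognormal form, P(DS ≥ ds | Sa) = Φ(ln(Sa/θ)/β), can be sketched with the standard library alone (the median capacity θ and dispersion β below are hypothetical illustration values, not HAZUS parameters):

```python
from math import erf, log, sqrt

def fragility(sa, theta, beta):
    """Lognormal fragility curve: probability of reaching or exceeding a
    damage state given spectral acceleration `sa` (in g).

    theta: median capacity (g); beta: lognormal dispersion.
    Phi is evaluated via the error function: Phi(z) = (1 + erf(z/sqrt(2)))/2.
    """
    z = log(sa / theta) / beta
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# At the median capacity the exceedance probability is 0.5 by construction;
# well below it, the probability should be small.
p_median = fragility(0.35, theta=0.35, beta=0.6)
p_low = fragility(0.10, theta=0.35, beta=0.6)
```

Evaluating such a curve on a conventional hazard curve's spectral-acceleration axis is what makes the fragility "hazard-compatible" in the sense used above.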
Techniques for modeling hazardous air pollutant emissions from landfills
Lang, R.J.; Vigil, S.A.; Melcer, H.
1998-12-31
The Environmental Protection Agency's Landfill Air Estimation Model (LAEEM), combined with either the AP-42 or CAA landfill emission factors, provides a basis to predict air emissions, including hazardous air pollutants (HAPs), from municipal solid waste landfills. This paper presents alternative approaches for estimating HAP emissions from landfills. These approaches include analytical solutions and estimation techniques that account for convection, diffusion, and biodegradation of HAPs. Results from the modeling of a prototypical landfill are used as the basis for discussion with respect to LAEEM results.
Richardson, David B; Laurier, Dominique; Schubauer-Berigan, Mary K; Tchetgen Tchetgen, Eric; Cole, Stephen R
2014-11-01
Workers' smoking histories are not measured in many occupational cohort studies. Here we discuss the use of negative control outcomes to detect and adjust for confounding in analyses that lack information on smoking. We clarify the assumptions necessary to detect confounding by smoking and the additional assumptions necessary to indirectly adjust for such bias. We illustrate these methods using data from 2 studies of radiation and lung cancer: the Colorado Plateau cohort study (1950-2005) of underground uranium miners (in which smoking was measured) and a French cohort study (1950-2004) of nuclear industry workers (in which smoking was unmeasured). A cause-specific relative hazards model is proposed for estimation of indirectly adjusted associations. Among the miners, the proposed method suggests no confounding by smoking of the association between radon and lung cancer--a conclusion supported by adjustment for measured smoking. Among the nuclear workers, the proposed method suggests substantial confounding by smoking of the association between radiation and lung cancer. Indirect adjustment for confounding by smoking resulted in an 18% decrease in the adjusted estimated hazard ratio, yet this cannot be verified because smoking was unmeasured. Assumptions underlying this method are described, and a cause-specific proportional hazards model that allows easy implementation using standard software is presented.
A Computerized Prediction Model of Hazardous Inflammatory Platelet Transfusion Outcomes
Nguyen, Kim Anh; Hamzeh-Cognasse, Hind; Sebban, Marc; Fromont, Elisa; Chavarin, Patricia; Absi, Lena; Pozzetto, Bruno; Cognasse, Fabrice; Garraud, Olivier
2014-01-01
Background Platelet component (PC) transfusion occasionally leads to inflammatory hazards. Certain biological response modifiers (BRMs) that are secreted by the platelets themselves during storage may bear some responsibility. Methodology/Principal Findings First, we identified non-stochastic arrangements of platelet-secreted BRMs in platelet components that led to acute transfusion reactions (ATRs). These data provide formal clinical evidence that platelets generate secretion profiles under both sterile activation and pathological conditions. We next aimed to predict the risk of hazardous outcomes by establishing statistical models based on the associations of BRMs within the incriminated platelet components and using decision trees. We investigated a large (n = 65) series of ATRs after platelet component transfusions reported through a very homogeneous system at one university hospital. Herein, we used a combination of clinical observations, ex vivo and in vitro investigations, and mathematical modeling systems. We calculated the statistical association of a large variety (n = 17) of cytokines, chemokines, and physiologically likely factors with acute inflammatory potential in patients presenting with severe hazards. We then generated an accident prediction model that proved to be dependent on the level (amount) of a given cytokine-like platelet product within the implicated component, e.g., soluble CD40-ligand (>289.5 pg/10⁹ platelets), or the presence of another secreted factor (IL-13, >0). We further modeled the risk of the patient presenting either a febrile non-hemolytic transfusion reaction or an atypical allergic transfusion reaction, depending on the amount of the chemokine MIP-1α (<20.4 or >20.4 pg/10⁹ platelets, respectively). Conclusions/Significance This allows the modeling of a policy of risk prevention for severe inflammatory outcomes in PC transfusion. PMID:24830754
Variable selection in subdistribution hazard frailty models with competing risks data
Do Ha, Il; Lee, Minjung; Oh, Seungyoung; Jeong, Jong-Hyeon; Sylvester, Richard; Lee, Youngjo
2014-01-01
The proportional subdistribution hazards model (i.e. Fine-Gray model) has been widely used for analyzing univariate competing risks data. Recently, this model has been extended to clustered competing risks data via frailty. To the best of our knowledge, however, there has been no literature on variable selection methods for such competing risks frailty models. In this paper, we propose a simple but unified procedure via a penalized h-likelihood (HL) for variable selection of fixed effects in a general class of subdistribution hazard frailty models, in which random effects may be shared or correlated. We consider three penalty functions (LASSO, SCAD and HL) in our variable selection procedure. We show that the proposed method can be easily implemented using a slight modification to existing h-likelihood estimation approaches. Numerical studies demonstrate that the proposed procedure using the HL penalty performs well, providing a higher probability of choosing the true model than the LASSO and SCAD methods without losing prediction accuracy. The usefulness of the new method is illustrated using two actual data sets from multi-center clinical trials. PMID:25042872
High Risk versus Proportional Benefit: Modelling Equitable Strategies in Cardiovascular Prevention.
Marchant, Ivanny; Boissel, Jean-Pierre; Nony, Patrice; Gueyffier, François
2015-01-01
We examined the performance of an alternative strategy for deciding when to initiate BP-lowering drugs, called Proportional Benefit (PB). It selects treatment candidates while addressing the inequity induced by the high-risk approach, since it distributes the gains proportionally to the burden of disease across genders and ages. Mild hypertensives from a Realistic Virtual Population, by gender and 10-year age class (range 35-64 years), received simulated treatment over 10 years according to the PB strategy or the 2007 ESH/ESC guidelines (ESH/ESC). Primary outcomes were the relative life-year gain (ratio of life-years gained to years of potential life lost) and the number needed to treat to gain a life-year. A sensitivity analysis was performed to assess the impact on these outcomes of the changes introduced in the 2013 ESH/ESC guidelines. The 2007 ESH/ESC relative life-year gains by age were 2%, 10%, and 14% in men and 0%, 2%, and 11% in women; this gradient was abolished by the PB strategy (relative gain in all categories = 10%) while preserving the same overall gain in life-years. The redistribution of benefits improved the profile of residual events in younger individuals compared with the 2007 ESH/ESC guidelines. The PB strategy was more efficient (NNT = 131) than the 2013 ESH/ESC guidelines, whatever the level of evidence of the scenario adopted (NNT = 139 with the evidence-based scenario and NNT = 179 with the opinion-based scenario), although the 2007 ESH/ESC guidelines remained the most efficient strategy (NNT = 114). The Proportional Benefit strategy provides the first response yet proposed to the inequity of resource use when treating the highest-risk people. It occupies an intermediate position with regard to the efficiency expected from the application of historical and current ESH/ESC hypertension guidelines. Our approach allows recommendations to be adapted to the risk and resources of a particular country.
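A number needed to treat is the reciprocal of the absolute difference in outcome rates between strategies; a minimal sketch with made-up event rates (the figures below are not taken from the study's simulation):

```python
def nnt(event_rate_control, event_rate_treated):
    """Number needed to treat: reciprocal of the absolute risk reduction.

    Rates are event proportions over a fixed horizon. The example rates
    below are invented for illustration only.
    """
    arr = event_rate_control - event_rate_treated
    if arr <= 0:
        raise ValueError("treatment must reduce the event rate")
    return 1.0 / arr

# e.g. 12% events untreated vs 4.4% treated over the horizon
example = nnt(0.12, 0.044)
```

In the study the denominator is a gain in life-years rather than a simple event-rate difference, but the reciprocal structure is the same.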
Sinkhole hazard assessment in Minnesota using a decision tree model
NASA Astrophysics Data System (ADS)
Gao, Yongli; Alexander, E. Calvin
2008-05-01
An understanding of what influences sinkhole formation and the ability to accurately predict sinkhole hazards are critical to environmental management efforts in the karst lands of southeastern Minnesota. Based on the distribution of distances to the nearest sinkhole, sinkhole density, bedrock geology, and depth to bedrock in southeastern Minnesota and northwestern Iowa, a decision tree model has been developed to construct maps of sinkhole probability in Minnesota. The decision tree model was converted into cartographic models and implemented in ArcGIS to create a preliminary sinkhole probability map in Goodhue, Wabasha, Olmsted, Fillmore, and Mower Counties. This model quantifies bedrock geology, depth to bedrock, sinkhole density, and neighborhood effects in southeastern Minnesota but excludes potential controlling factors such as structural control, topographic setting, human activities, and land use. The sinkhole probability map needs to be verified and updated as more sinkholes are mapped and more information about sinkhole formation is obtained.
Mark-specific hazard ratio model with missing multivariate marks.
Juraska, Michal; Gilbert, Peter B
2016-10-01
An objective of randomized placebo-controlled preventive HIV vaccine efficacy (VE) trials is to assess the relationship between vaccine effects to prevent HIV acquisition and continuous genetic distances of the exposing HIVs to multiple HIV strains represented in the vaccine. The set of genetic distances, only observed in failures, is collectively termed the 'mark.' The objective has motivated a recent study of a multivariate mark-specific hazard ratio model in the competing risks failure time analysis framework. Marks of interest, however, are commonly subject to substantial missingness, largely due to rapid post-acquisition viral evolution. In this article, we investigate the mark-specific hazard ratio model with missing multivariate marks and develop two inferential procedures based on (i) inverse probability weighting (IPW) of the complete cases, and (ii) augmentation of the IPW estimating functions by leveraging auxiliary data predictive of the mark. Asymptotic properties and finite-sample performance of the inferential procedures are presented. This research also provides general inferential methods for semiparametric density ratio/biased sampling models with missing data. We apply the developed procedures to data from the HVTN 502 'Step' HIV VE trial.
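The idea behind inverse probability weighting of the complete cases can be sketched with a toy missing-at-random example: the mark is generated for everyone, then hidden with known probabilities that depend on an auxiliary covariate. The data and missingness probabilities are invented, and this is a Horvitz-Thompson-style mean, not the authors' semiparametric estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
x = rng.integers(0, 2, size=n)                 # auxiliary covariate
mark = 1.0 + 2.0 * x + rng.normal(0, 0.5, n)   # the "mark", fully generated

# Missingness depends on x: marks are observed with known probabilities,
# so complete cases over-represent the x == 1 group.
p_obs = np.where(x == 1, 0.9, 0.3)
observed = rng.random(n) < p_obs

naive = mark[observed].mean()   # biased complete-case mean
# IPW (normalized): each complete case is up-weighted by 1 / P(observed)
ipw = np.sum(mark[observed] / p_obs[observed]) / np.sum(1.0 / p_obs[observed])
truth = mark.mean()
```

The weighted mean recovers the full-population target, while the unweighted complete-case mean does not; the augmented estimator in the article additionally leverages auxiliary data predictive of the mark to reduce variance.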
Flood hazard maps from SAR data and global hydrodynamic models
NASA Astrophysics Data System (ADS)
Giustarini, Laura; Chini, Marci; Hostache, Renaud; Matgen, Patrick; Pappenberger, Florian; Bally, Phillippe
2015-04-01
With flood consequences likely to amplify because of growing population and ongoing accumulation of assets in flood-prone areas, global flood hazard and risk maps are greatly needed for improving flood preparedness at large scale. At the same time, with the rapidly growing archives of SAR images of floods, there is a high potential of making use of these images for global and regional flood management. In this framework, an original method is presented to integrate global flood inundation modeling and microwave remote sensing. It takes advantage of the combination of the time and space continuity of a global inundation model with the high spatial resolution of satellite observations. The availability of model simulations over a long time period offers the opportunity to estimate flood non-exceedance probabilities in a robust way. The probabilities can later be attributed to historical satellite observations. SAR-derived flood extent maps with their associated non-exceedance probabilities are then combined to generate flood hazard maps with a spatial resolution equal to that of the satellite images, which is most of the time higher than that of a global inundation model. The method can be applied to any area of interest in the world, provided that a sufficient number of relevant remote sensing images are available. We applied the method on the Severn River (UK) and on the Zambezi River (Mozambique), where large archives of Envisat flood images can be exploited. The global ECMWF flood inundation model is considered for computing the statistics of extreme events. A comparison with flood hazard maps estimated with in situ measured discharge is carried out. An additional analysis has been performed on the Severn River, using high resolution SAR data from the COSMO-SkyMed SAR constellation, acquired for a single flood event (one flood map per day between 27/11/2012 and 4/12/2012). The results showed that it is vital to observe the peak of the flood. However, a single
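Estimating a flood non-exceedance probability from a long model-simulated series can be sketched with a simple plotting-position rule; this is a generic illustration on invented data, not the exact procedure applied to the ECMWF model output:

```python
import numpy as np

def non_exceedance(simulated_maxima, observed_level):
    """Empirical non-exceedance probability of `observed_level` within a
    long simulated series, using the Weibull plotting position
    rank / (N + 1). A generic sketch of attributing a probability to a
    satellite-observed flood level from model simulations.
    """
    s = np.sort(np.asarray(simulated_maxima))
    rank = np.searchsorted(s, observed_level, side="right")
    return rank / (len(s) + 1)

rng = np.random.default_rng(2)
# Synthetic annual flood maxima (Gumbel is a common choice for maxima)
series = rng.gumbel(loc=100.0, scale=20.0, size=999)
p = non_exceedance(series, np.median(series))
```

Each SAR-derived flood extent map can then carry the non-exceedance probability of its event, which is how the maps are combined into a hazard map at the satellite's resolution.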
Statistical modeling of ground motion relations for seismic hazard analysis
NASA Astrophysics Data System (ADS)
Raschke, Mathias
2013-10-01
We introduce a new approach to ground motion relations (GMR) in probabilistic seismic hazard analysis (PSHA), influenced by the extreme value theory of mathematical statistics. Therein, we understand a GMR as a random function. We derive mathematically the principle of area equivalence, wherein two alternative GMRs have an equivalent influence on the hazard if these GMRs have equivalent area functions. This includes local biases. An interpretation of the difference between these GMRs (an actual and a modeled one) as a random component leads to a general overestimation of residual variance and hazard. Beside this, we discuss important aspects of classical approaches and discover discrepancies with the state of the art of stochastics and statistics (model selection and significance, tests of distribution assumptions, extreme value statistics). We criticize especially the assumption of log-normally distributed residuals of maxima like the peak ground acceleration (PGA). The natural distribution of its individual random component (equivalent to exp(ε0) of Joyner and Boore, Bull Seism Soc Am 83(2):469-487, 1993) is the generalized extreme value. We show by numerical investigations that the actual distribution can be hidden and that a wrong distribution assumption can influence the PSHA negatively, as does neglect of area equivalence. Finally, we suggest an estimation concept for GMRs of PSHA with a regression-free variance estimation of the individual random component. We demonstrate the advantages of event-specific GMRs by analyzing data sets from the PEER strong motion database and estimating event-specific GMRs. Therein, the majority of the best models are based on an anisotropic point source approach. The residual variance of logarithmized PGA is significantly smaller than in previous models. We validate the estimations for the event with the largest sample by empirical area functions, which indicate the appropriate modeling of the GMR by an anisotropic
High Risk versus Proportional Benefit: Modelling Equitable Strategies in Cardiovascular Prevention
Boissel, Jean-Pierre; Nony, Patrice
2015-01-01
Objective To examine the performance of an alternative strategy for deciding when to initiate BP-lowering drugs, called Proportional Benefit (PB). It selects treatment candidates while addressing the inequity induced by the high-risk approach, since it distributes the gains proportionally to the burden of disease across genders and ages. Study Design and Setting Mild hypertensives from a Realistic Virtual Population, by gender and 10-year age class (range 35–64 years), received simulated treatment over 10 years according to the PB strategy or the 2007 ESH/ESC guidelines (ESH/ESC). Primary outcomes were the relative life-year gain (ratio of life-years gained to years of potential life lost) and the number needed to treat to gain a life-year. A sensitivity analysis was performed to assess the impact on these outcomes of the changes introduced in the 2013 ESH/ESC guidelines. Results The 2007 ESH/ESC relative life-year gains by age were 2%, 10%, and 14% in men and 0%, 2%, and 11% in women; this gradient was abolished by the PB strategy (relative gain in all categories = 10%) while preserving the same overall gain in life-years. The redistribution of benefits improved the profile of residual events in younger individuals compared with the 2007 ESH/ESC guidelines. The PB strategy was more efficient (NNT = 131) than the 2013 ESH/ESC guidelines, whatever the level of evidence of the scenario adopted (NNT = 139 with the evidence-based scenario and NNT = 179 with the opinion-based scenario), although the 2007 ESH/ESC guidelines remained the most efficient strategy (NNT = 114). Conclusion The Proportional Benefit strategy provides the first response yet proposed to the inequity of resource use when treating the highest-risk people. It occupies an intermediate position with regard to the efficiency expected from the application of historical and current ESH/ESC hypertension guidelines. Our approach allows recommendations to be adapted to the risk and resources of a particular country. PMID:26529507
Karin Riley; Matthew Thompson; Peter Webley; Kevin D. Hyde
2017-01-01
Modeling has been used to characterize and map natural hazards and hazard susceptibility for decades. Uncertainties are pervasive in natural hazards analysis, including a limited ability to predict where and when extreme events will occur, with what consequences, and driven by what contributing factors. Modeling efforts are challenged by the intrinsic...
Opinion: The use of natural hazard modeling for decision making under uncertainty
David E. Calkin; Mike Mentis
2015-01-01
Decision making to mitigate the effects of natural hazards is a complex undertaking fraught with uncertainty. Models to describe risks associated with natural hazards have proliferated in recent years. Concurrently, there is a growing body of work focused on developing best practices for natural hazard modeling and to create structured evaluation criteria for complex...
Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling
Li, Yupeng; Deutsch, Clayton V.
2012-06-15
In geostatistics, most stochastic algorithms for the simulation of categorical variables such as facies or rock types require a conditional probability distribution. The multivariate probability distribution of all the grouped locations, including the unsampled location, permits calculation of the conditional probability directly from its definition. In this article, the iterative proportion fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iterative modification of an initial estimated multivariate probability using lower-order bivariate probabilities as constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. This algorithm can be extended to higher-order marginal probability constraints, as used in multiple-point statistics. The theoretical framework is developed and illustrated with an estimation and simulation example.
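The iterative fitting loop is easy to show in two dimensions: alternately rescale rows and columns of an initial table until its marginals match the imposed constraints. This is a two-dimensional sketch with invented marginal proportions, not the article's sparse-matrix multivariate implementation:

```python
import numpy as np

def ipf(initial, row_marg, col_marg, n_iter=200):
    """Iterative proportional fitting of a bivariate probability table.

    Alternately rescales the rows and columns of `initial` so that its
    marginals match `row_marg` and `col_marg`.
    """
    p = np.asarray(initial, dtype=float)
    p /= p.sum()
    for _ in range(n_iter):
        p *= (row_marg / p.sum(axis=1))[:, None]  # fit row marginals
        p *= col_marg / p.sum(axis=0)             # fit column marginals
    return p

initial = np.ones((3, 3)) / 9.0           # uninformative starting table
row_marg = np.array([0.5, 0.3, 0.2])      # e.g. facies proportions along wells
col_marg = np.array([0.6, 0.3, 0.1])
p = ipf(initial, row_marg, col_marg)
```

With an uninformative (independent) starting table the fit converges to the outer product of the marginals; an informative start preserves its interaction structure while honoring the constraints, which is the point of the algorithm.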
A parametric model to estimate the proportion from true null using a distribution for p-values.
Yu, Chang; Zelterman, Daniel
2017-10-01
Microarray studies generate a large number of p-values from many gene expression comparisons. The estimate of the proportion of the p-values sampled from the null hypothesis draws broad interest. The two-component mixture model is often used to estimate this proportion. If the data are generated under the null hypothesis, the p-values follow the uniform distribution. What is the distribution of p-values when data are sampled from the alternative hypothesis? The distribution is derived for the chi-squared test. Then this distribution is used to estimate the proportion of p-values sampled from the null hypothesis in a parametric framework. Simulation studies are conducted to evaluate its performance in comparison with five recent methods. Even in scenarios with clusters of correlated p-values and a multicomponent mixture or a continuous mixture in the alternative, the new method performs robustly. The methods are demonstrated through an analysis of a real microarray dataset.
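For contrast with the parametric mixture approach, a Storey-type nonparametric estimate of the null proportion can be sketched in a few lines. The p-values below are synthetic, and the tuning parameter λ = 0.5 is a common default rather than anything from the paper:

```python
import numpy as np

def pi0_estimate(pvals, lam=0.5):
    """Storey-type estimate of the proportion of true nulls.

    Under the null, p-values are uniform on [0, 1], so the region above
    `lam` is dominated by nulls: pi0 ~ #{p > lam} / ((1 - lam) * m).
    """
    pvals = np.asarray(pvals)
    return np.mean(pvals > lam) / (1.0 - lam)

rng = np.random.default_rng(3)
m, pi0_true = 20_000, 0.8
null_p = rng.random(int(m * pi0_true))               # uniform under the null
alt_p = rng.beta(0.2, 5.0, int(m * (1 - pi0_true)))  # alternatives near 0
est = pi0_estimate(np.concatenate([null_p, alt_p]))
```

The parametric method in the article instead models the alternative p-value density explicitly (derived for the chi-squared test), which is what buys its robustness under correlated and multicomponent alternatives.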
Application of hazard models for patients with breast cancer in Cuba
Alfonso, Anet Garcia; de Oca, Néstor Arcia Montes
2011-01-01
There has been rapid development in hazard models and survival analysis over the last decade. This article aims to assess the overall survival time of breast cancer in Cuba and to determine plausible factors that may have a significant impact on survival time. The data are obtained from the National Cancer Register of Cuba. The data set used in this study relates to 6381 patients diagnosed with breast cancer between January 2000 and December 2002. Follow-up data are available until the end of December 2007, by which time 2167 (33.9%) had died and 4214 (66.1%) were still alive. The adequacy of six parametric models is assessed by using their Akaike information criterion values. Five of the six parametric models (Exponential, Weibull, Log-logistic, Lognormal, and Generalized Gamma) are parameterized by using the accelerated failure-time metric, and the Gompertz model is parameterized by using the proportional hazards metric. The main result in terms of survival is found for the different categories of the clinical stage covariate. The survival time among patients diagnosed at an early stage of breast cancer is about 60% longer than that among patients diagnosed at a more advanced stage of the disease. No differences among provinces have been found. Age is another significant factor, although no important differences are found among patient age groups. PMID:21686138
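The model-selection step above can be illustrated for the simplest of the six candidates: the exponential model has a closed-form maximum likelihood estimate under right censoring, from which the AIC follows directly. The toy survival times are invented; the real comparison would repeat this for each parametric family.

```python
import math

def exponential_aic(times, events):
    """AIC of an exponential survival model fit by MLE with right
    censoring (events[i] = 1 if death observed, 0 if censored)."""
    d = sum(events)            # number of observed deaths
    total = sum(times)         # total follow-up time, deaths + censored
    lam = d / total            # closed-form MLE of the hazard rate
    loglik = d * math.log(lam) - lam * total
    return 2 * 1 - 2 * loglik  # k = 1 free parameter

times = [2.0, 5.0, 1.0, 8.0, 3.0, 7.0]
events = [1, 1, 0, 1, 0, 1]
aic = exponential_aic(times, events)
```

In a full analysis the same AIC formula, with the appropriate log-likelihood and parameter count, would be computed for the Weibull, log-logistic, lognormal, generalized gamma, and Gompertz fits, and the smallest value preferred.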
Household hazardous waste disposal to landfill: using LandSim to model leachate migration.
Slack, Rebecca J; Gronow, Jan R; Hall, David H; Voulvoulis, Nikolaos
2007-03-01
Municipal solid waste (MSW) landfill leachate contains a number of aquatic pollutants. A specific MSW stream often referred to as household hazardous waste (HHW) can be considered to contribute a large proportion of these pollutants. This paper describes the use of the LandSim (Landfill Performance Simulation) modelling program to assess the environmental consequences of leachate release from a generic MSW landfill in receipt of co-disposed HHW. Heavy metals and organic pollutants were found to migrate into the zones beneath a model landfill site over a 20,000-year period. Arsenic and chromium were found to exceed European Union and US-EPA drinking water standards at the unsaturated zone/aquifer interface, with levels of mercury and cadmium exceeding minimum reporting values (MRVs). The findings demonstrate the pollution potential arising from HHW disposal with MSW.
Conveying Lava Flow Hazards Through Interactive Computer Models
NASA Astrophysics Data System (ADS)
Thomas, D.; Edwards, H. K.; Harnish, E. P.
2007-12-01
As part of an Information Sciences senior class project, a software package of an interactive version of the FLOWGO model was developed for the Island of Hawaii. The software is intended for use in an ongoing public outreach and hazards awareness program that educates the public about lava flow hazards on the island. The design parameters for the model allow an unsophisticated user to initiate a lava flow anywhere on the island and allow it to flow down-slope to the shoreline while displaying a timer to show the rate of advance of the flow. The user is also able to modify a range of input parameters including eruption rate, the temperature of the lava at the vent, and crystal fraction present in the lava at the source. The flow trajectories are computed using a 30 m digital elevation model for the island, and the rate of advance of the flow is estimated using the average slope angle and the computed viscosity of the lava as it cools in either a channel (high heat loss) or lava tube (low heat loss). Even though the FLOWGO model is not intended to, and cannot, accurately predict the rate of advance of a tube-fed or channel-fed flow, the relative rates of flow advance for steep or flat-lying terrain convey critically important hazard information to the public: communities located on the steeply sloping western flanks of Mauna Loa may have no more than a few hours to evacuate in the face of a threatened flow from Mauna Loa's southwest rift, whereas communities on the more gently sloping eastern flanks of Mauna Loa and Kilauea may have weeks to months to prepare for evacuation. Further, the model can also show the effects of loss of critical infrastructure as a result of lava flow emplacement, with consequent impacts on access into and out of communities, electrical supply, and communications. The interactive model has been well received in an outreach setting and typically generates greater involvement by the participants than has been the case with static maps.
Garcia-Lodeiro, Inés; Donatello, Shane; Fernández-Jiménez, Ana; Palomo, Ángel
2016-01-01
In hybrid alkaline fly ash cements, a new generation of binders, hydration is characterized by features found in both ordinary portland cement (OPC) hydration and the alkali activation of fly ash (AAFA). Hybrid alkaline fly ash cements typically have a high fly ash (70 wt % to 80 wt %) and low clinker (20 wt % to 30 wt %) content. The clinker component favors curing at ambient temperature. A hydration mechanism is proposed based on the authors’ research on these hybrid binders over the last five years. The mechanisms for OPC hydration and FA alkaline activation are summarized by way of reference. In hybrid systems, fly ash activity is visible at very early ages, when two types of gel are formed: C–S–H from the OPC and N–A–S–H from the fly ash. In their mutual presence, these gels tend to evolve, respectively, into C–A–S–H and (N,C)–A–S–H. The use of activators with different degrees of alkalinity has a direct impact on reaction kinetics but does not modify the main final products, a mixture of C–A–S–H and (N,C)–A–S–H gels. The proportion of each gel in the mix does, however, depend on the alkalinity generated in the medium. PMID:28773728
Hazard based models for freeway traffic incident duration.
Tavassoli Hojati, Ahmad; Ferreira, Luis; Washington, Simon; Charles, Phil
2013-03-01
Assessing and prioritising cost-effective strategies to mitigate the impacts of traffic incidents and accidents on non-recurrent congestion on major roads represents a significant challenge for road network managers. This research examines the influence of numerous factors associated with incidents of various types on their duration. It presents a comprehensive traffic incident data mining and analysis by developing an incident duration model based on twelve months of incident data obtained from the Australian freeway network. Parametric accelerated failure time (AFT) survival models of incident duration were developed, including log-logistic, lognormal, and Weibull, considering both fixed and random parameters, as well as a Weibull model with gamma heterogeneity. The Weibull AFT models with random parameters were appropriate for modelling incident duration arising from crashes and hazards. A Weibull model with gamma heterogeneity was most suitable for modelling incident duration of stationary vehicles. Significant variables affecting incident duration include characteristics of the incidents (severity, type, towing requirements, etc.) and the location, time of day, and traffic characteristics of the incident. Moreover, the findings reveal no significant effects of infrastructure and weather on incident duration. A significant and unique contribution of this paper is that the durations of each type of incident are uniquely different and respond to different factors. The results of this study are useful for traffic incident management agencies to implement strategies to reduce incident duration, leading to reduced congestion, secondary incidents, and the associated human and economic losses.
Natural hazard resilient cities: the case of a SSMS model
NASA Astrophysics Data System (ADS)
Santos-Reyes, Jaime
2010-05-01
Modern society is characterised by complexity; i.e. technical systems are highly complex and highly interdependent. The nature of the interdependence amongst these systems has become an issue of increasing importance in recent years. Moreover, these systems face a number of threats: technical, human, and natural. For example, natural hazards (earthquakes, floods, heavy snow, etc.) can cause significant problems and disruption to normal life. On the other hand, modern society depends on highly interdependent infrastructures such as transport (rail, road, air, etc.), telecommunications, power, and water supply. Furthermore, in many cases there is no single owner, operator, or regulator of such systems. Any disruption in any of the interconnected systems may cause a domino effect, which may occur at the local, regional, or national level or, in some cases, extend across international borders. Given the above, it may be argued that society is less resilient to such events, and therefore there is a need for a system able to maintain risk within an acceptable range, whatever that might be. This paper presents the modelling of the interdependences amongst "critical infrastructures" (i.e. transport, telecommunications, power and water supply, etc.) for a typical city. The approach has been the application of the developed Systemic Safety Management System (SSMS) model. The main conclusion is that the SSMS model has the potential to model interdependencies amongst the so-called "critical infrastructures". It is hoped that the approach presented in this paper may help to gain a better understanding of the interdependence amongst these systems and may contribute to a society that is more resilient to disruption by natural hazards.
Modeling and mitigating natural hazards: Stationarity is immortal!
NASA Astrophysics Data System (ADS)
Montanari, Alberto; Koutsoyiannis, Demetris
2014-12-01
Environmental change is a cause of serious concern, as it is occurring at an unprecedented pace and might increase natural hazards. It is also deemed to reduce the representativeness of past experience and data on extreme hydroclimatic events, a concern epitomized by the statement that "stationarity is dead." Setting up policies for mitigating natural hazards, including those triggered by floods and droughts, is an urgent priority in many countries, and it implies practical activities of management, engineering design, and construction. These activities need to be properly informed, and therefore the research question on the value of past data is extremely important. We herein argue that there are mechanisms in hydrological systems that are time invariant, which may need to be interpreted through data inference. In particular, hydrological predictions are based on assumptions which should include stationarity. In fact, any hydrological model, including deterministic and nonstationary approaches, is affected by uncertainty and therefore should include a random component that is stationary. Given that an unnecessary resort to nonstationarity may imply a reduction of predictive capabilities, a pragmatic approach based on the exploitation of past experience and data is a necessary prerequisite for setting up mitigation policies for environmental risk.
Hidden Markov models for estimating animal mortality from anthropogenic hazards.
Etterson, Matthew A
2013-12-01
Carcass searches are a common method for studying the risk of anthropogenic hazards to wildlife, including nontarget poisoning and collisions with anthropogenic structures. Typically, numbers of carcasses found must be corrected for scavenging rates and imperfect detection. Parameters for these processes (scavenging and detection) are often estimated using carcass distribution trials in which researchers place carcasses in the field at known times and locations. In this manuscript I develop a variety of estimators based on multi-event or hidden Markov models for use under different experimental conditions. I apply the estimators to two case studies of avian mortality, one from pesticide exposure and another at wind turbines. The proposed framework for mortality estimation points to a unified framework for estimation of scavenging rates and searcher efficiency in a single trial and also allows estimation based only on accidental kills, obviating the need for carcass distribution trials. Results of the case studies show wide variation in the performance of different estimators, but even wider confidence intervals around estimates of the numbers of animals killed, which are the direct result of small sample size in the carcass distribution trials employed. These results also highlight the importance of a well-formed hypothesis about the temporal nature of mortality at the focal hazard under study.
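The basic correction the estimators generalize can be shown in its simplest (non-HMM) form: divide the carcass count by the joint probability that a carcass persists until a search and is then detected. The counts and probabilities below are invented; the manuscript's hidden Markov estimators extend this over repeated searches and time-varying mortality.

```python
def corrected_mortality(carcasses_found, p_persist, p_detect):
    """Horvitz-Thompson style correction for scavenging and imperfect
    detection; p_persist and p_detect would come from field trials."""
    return carcasses_found / (p_persist * p_detect)

# 12 carcasses found, 60% persist to the search, 80% of those are seen
estimate = corrected_mortality(12, p_persist=0.6, p_detect=0.8)  # about 25
```

Note how strongly the estimate depends on the trial-derived probabilities: this is why the small-sample uncertainty in carcass distribution trials dominates the confidence intervals reported above.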
VHub - Cyberinfrastructure for volcano eruption and hazards modeling and simulation
NASA Astrophysics Data System (ADS)
Valentine, G. A.; Jones, M. D.; Bursik, M. I.; Calder, E. S.; Gallo, S. M.; Connor, C.; Carn, S. A.; Rose, W. I.; Moore-Russo, D. A.; Renschler, C. S.; Pitman, B.; Sheridan, M. F.
2009-12-01
Volcanic risk is increasing as populations grow in active volcanic regions, and as national economies become increasingly intertwined. In addition to their significance to risk, volcanic eruption processes form a class of multiphase fluid dynamics with rich physics on many length and time scales. Risk significance, physics complexity, and the coupling of models to complex dynamic spatial datasets all demand the development of advanced computational techniques and interdisciplinary approaches to understand and forecast eruption dynamics. Innovative cyberinfrastructure is needed to enable global collaboration and novel scientific creativity, while simultaneously enabling computational thinking in real-world risk mitigation decisions - an environment where quality control, documentation, and traceability are key factors. Supported by NSF, we are developing a virtual organization, referred to as VHub, to address this need. Overarching goals of the VHub project are: Dissemination. Make advanced modeling and simulation capabilities and key data sets readily available to researchers, students, and practitioners around the world. Collaboration. Provide a mechanism for participants not only to be users but also co-developers of modeling capabilities, and contributors of experimental and observational data sets for use in modeling and simulation, in a collaborative environment that reaches far beyond local work groups. Comparison. Facilitate comparison between different models in order to provide the practitioners with guidance for choosing the "right" model, depending upon the intended use, and provide a platform for multi-model analysis of specific problems and incorporation into probabilistic assessments. Application. Greatly accelerate access and application of a wide range of modeling tools and related data sets to agencies around the world that are charged with hazard planning, mitigation, and response. Education. Provide resources that will promote the training of the
Advancements in the global modelling of coastal flood hazard
NASA Astrophysics Data System (ADS)
Muis, Sanne; Verlaan, Martin; Nicholls, Robert J.; Brown, Sally; Hinkel, Jochen; Lincke, Daniel; Vafeidis, Athanasios T.; Scussolini, Paolo; Winsemius, Hessel C.; Ward, Philip J.
2017-04-01
Storm surges and high tides can cause catastrophic floods. Due to climate change and socio-economic development, the potential impacts of coastal floods are increasing globally. Global modelling of coastal flood hazard provides an important perspective to quantify and effectively manage this challenge. In this contribution we show two recent advancements in global modelling of coastal flood hazard: 1) a new improved global dataset of extreme sea levels, and 2) an improved vertical datum for extreme sea levels. Both developments have important implications for estimates of exposure and inundation modelling. For over a decade, the only global dataset of extreme sea levels was the DINAS-COAST Extreme Sea Levels (DCESL), which uses a static approximation to estimate total water levels for different return periods. Recent advances have enabled the development of a new dynamically derived dataset: the Global Tide and Surge Reanalysis (GTSR) dataset. Here we present a comparison of the DCESL and GTSR extreme sea levels and the resulting global flood exposure for present-day conditions. While DCESL generally overestimates extremes, GTSR underestimates extremes, particularly in the tropics. This results in differences in estimates of flood exposure. When using the 1 in 100-year GTSR extremes, the exposed global population is 28% lower than when using the 1 in 100-year DCESL extremes. Previous studies at continental to global scales have not accounted for the fact that GTSR and DCESL are referenced to mean sea level, whereas global elevation datasets, such as SRTM, are referenced to the EGM96 geoid. We propose a methodology to correct for the difference in vertical datum and demonstrate that this also has a large effect on exposure. For GTSR, the vertical datum correction results in a 60% increase in global exposure.
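The notion of a "1 in 100-year" extreme used throughout this comparison can be sketched with a textbook return-level calculation: fit a Gumbel distribution to annual maxima by the method of moments and invert it at the desired return period. The Gaussian toy maxima are invented; the datasets above derive extremes from far more sophisticated static or dynamical modelling.

```python
import math
import random

def gumbel_return_level(annual_maxima, return_period=100):
    """Return level with the given return period from a Gumbel fit
    (method of moments) to a sample of annual maxima."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    beta = math.sqrt(6 * var) / math.pi   # scale parameter
    mu = mean - 0.5772 * beta             # location (Euler-Mascheroni const.)
    p = 1 - 1 / return_period             # annual non-exceedance probability
    return mu - beta * math.log(-math.log(p))

# toy annual surge maxima in metres
random.seed(1)
maxima = [random.gauss(2.0, 0.3) for _ in range(50)]
level100 = gumbel_return_level(maxima)
```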
Lava flow hazard at Nyiragongo volcano, D.R.C. 1. Model calibration and hazard mapping
NASA Astrophysics Data System (ADS)
Favalli, Massimiliano; Chirico, Giuseppe D.; Papale, Paolo; Pareschi, Maria Teresa; Boschi, Enzo
2009-05-01
The 2002 eruption of Nyiragongo volcano constitutes the most outstanding case ever of lava flow in a big town. It also represents one of the very rare cases of direct casualties from lava flows, which had high velocities of up to tens of kilometers per hour. As in the 1977 eruption, which is the only other eccentric eruption of the volcano in more than 100 years, lava flows were emitted from several vents along a N-S system of fractures extending for more than 10 km, from which they propagated mostly towards Lake Kivu and Goma, a town of about 500,000 inhabitants. We assessed the lava flow hazard on the entire volcano and in the towns of Goma (D.R.C.) and Gisenyi (Rwanda) through numerical simulations of probable lava flow paths. Lava flow paths are computed based on the steepest descent principle, modified by stochastically perturbing the topography to take into account the capability of lava flows to override topographic obstacles, fill topographic depressions, and spread over the topography. Code calibration and the definition of the expected lava flow length and vent opening probability distributions were done based on the 1977 and 2002 eruptions. The final lava flow hazard map shows that the eastern sector of Goma devastated in 2002 represents the area of highest hazard on the flanks of the volcano. The second highest hazard sector in Goma is the area of propagation of the western lava flow in 2002. The town of Gisenyi is subject to moderate to high hazard due to its proximity to the alignment of fractures active in 1977 and 2002. In a companion paper (Chirico et al., Bull Volcanol, in this issue, 2008) we use numerical simulations to investigate the possibility of reducing lava flow hazard through the construction of protective barriers, and formulate a proposal for the future development of the town of Goma.
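The steepest-descent-with-perturbation principle can be sketched on a toy grid: repeatedly step to the lowest neighbouring cell after adding random noise to neighbour elevations, so repeated runs trace a distribution of plausible paths rather than one deterministic line. The 4x4 DEM, noise amplitude, and seed are illustrative assumptions, not the calibrated model.

```python
import random

def descend(dem, start, noise=0.5, rng=random.Random(42)):
    """One stochastic steepest-descent path over a DEM (list of rows).
    Noise lets the flow occasionally prefer a slightly higher cell,
    mimicking its ability to override small obstacles."""
    rows, cols = len(dem), len(dem[0])
    r, c = start
    path = [(r, c)]
    while True:
        best, best_h = None, dem[r][c]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                h = dem[nr][nc] + rng.uniform(-noise, noise)
                if h < best_h:
                    best, best_h = (nr, nc), h
        if best is None:
            return path          # local minimum: the flow stops
        r, c = best
        path.append((r, c))

dem = [[9, 8, 7, 6],
       [8, 7, 6, 5],
       [7, 6, 5, 4],
       [6, 5, 4, 3]]
path = descend(dem, (0, 0))
```

A hazard map would come from running many such noisy descents from each candidate vent and counting how often each cell is inundated.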
Mei, J.; Dong, P.; Kalnaus, S.; ...
2017-07-21
It has been well established that the fatigue damage process is load-path dependent under non-proportional multi-axial loading conditions. Most studies to date have focused on interpreting S-N based test data by constructing a path-dependent fatigue damage model. Our paper presents a two-parameter mixed-mode fatigue crack growth model which takes into account crack growth dependency on both the load path traversed and a maximum effective stress intensity attained in a stress intensity factor plane (e.g., the KI-KIII plane). Furthermore, by taking advantage of a path-dependent maximum range (PDMR) cycle definition (Dong et al., 2010; Wei and Dong, 2010), the two parameters are formulated by introducing a moment of load path (MLP) based equivalent stress intensity factor range (ΔKNP) and a maximum effective stress intensity parameter KMax incorporating an interaction term KI·KIII. To examine the effectiveness of the proposed model, two sets of crack growth rate test data are considered. The first set is obtained as a part of this study using 304 stainless steel disk specimens subjected to three combined non-proportional mode I and III loading conditions (i.e., with a phase angle of 0°, 90°, and 180°). The second set was obtained by Feng et al. (2007) using 1070 steel disk specimens subjected to similar types of non-proportional mixed-mode conditions. Once the proposed two-parameter non-proportional mixed-mode crack growth model is used, it is shown that a good correlation can be achieved for both sets of the crack growth rate test data.
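The role of a two-parameter growth law can be sketched generically: once an equivalent range and a maximum-intensity parameter are defined for each cycle, the growth rate follows a power law in both, and life is obtained by integrating crack length cycle by cycle. The constants C, m, n, the Kmax = 1.2·ΔK assumption, and the geometry factor below are all illustrative, not the paper's fitted ΔKNP/KMax formulation.

```python
import math

def growth_rate(delta_k, k_max, C=1e-11, m=3.0, n=0.5):
    """da/dN (m/cycle) from a generic two-parameter power law."""
    return C * delta_k ** m * k_max ** n

def cycles_to_grow(a0, af, stress_range, da=1e-5):
    """Integrate crack length a (m) from a0 to af in steps of da,
    with a simple center-crack estimate dK = S * sqrt(pi * a)."""
    a, cycles = a0, 0.0
    while a < af:
        dk = stress_range * math.sqrt(math.pi * a)
        cycles += da / growth_rate(dk, 1.2 * dk)  # assumed Kmax/dK ratio
        a += da
    return cycles

# grow a 1 mm crack to 10 mm under a 100 MPa stress range
n_cycles = cycles_to_grow(0.001, 0.01, stress_range=100.0)
```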
Preliminary deformation model for National Seismic Hazard map of Indonesia
Meilano, Irwan; Gunawan, Endra; Sarsito, Dina; Prijatna, Kosasih; Abidin, Hasanuddin Z.; Susilo,; Efendi, Joni
2015-04-24
A preliminary deformation model for Indonesia's National Seismic Hazard (NSH) map is constructed as block rotation and strain accumulation functions in an elastic half-space. Deformation due to rigid body motion is estimated by rotating six tectonic blocks in Indonesia. The interseismic deformation due to subduction is estimated by assuming coupling on the subduction interface, while deformation at active faults is calculated by assuming that each fault segment slips beneath a locking depth or in combination with creep in a shallower part. This research shows that rigid body motion dominates the deformation pattern, with magnitudes of more than 15 mm/year, except in narrow areas near subduction zones and active faults where significant deformation reaches 25 mm/year.
Simple model relating recombination rates and non-proportional light yield in scintillators
Moses, William W.; Bizarri, Gregory; Singh, Jai; Vasil'ev, Andrey N.; Williams, Richard T.
2008-09-24
We present a phenomenological approach to derive an approximate expression for the local light yield along a track as a function of the rate constants of different kinetic orders of radiative and quenching processes for excitons and electron-hole pairs excited by an incident γ-ray in a scintillating crystal. For excitons, the radiative and quenching processes considered are linear and binary, and for electron-hole pairs a ternary (Auger type) quenching process is also taken into account. The local light yield (Y_L) in photons per MeV is plotted as a function of the deposited energy, -dE/dx (keV/cm), at any point x along the track length. This model formulation achieves a certain simplicity by using two coupled rate equations. We discuss the approximations that are involved. There are a sufficient number of parameters in this model to fit local light yield profiles needed for qualitative comparison with experiment.
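The competition the rate equations describe can be illustrated with a single-population toy version: a linear radiative channel against a binary (second-order) quenching channel, integrated with a simple Euler scheme. The rate constants and densities are illustrative, not the paper's fitted parameters; the point is only that higher excitation density means stronger binary quenching, hence a lower yield, the hallmark of non-proportional response.

```python
def light_yield(n0, k_rad=1.0, k2=0.1, dt=1e-3, steps=20000):
    """Fraction of initial excitations that decay radiatively, for
    dn/dt = -k_rad * n - k2 * n**2 (linear radiative + binary quench)."""
    n, emitted = n0, 0.0
    for _ in range(steps):
        radiative = k_rad * n
        quench = k2 * n * n
        emitted += radiative * dt
        n -= (radiative + quench) * dt
    return emitted / n0

low = light_yield(1.0)    # low excitation density
high = light_yield(10.0)  # high density: binary quenching wins more often
```

This toy case has the closed form (k_rad/(k2·n0))·ln(1 + k2·n0/k_rad), against which the Euler result can be checked.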
Hydraulic modeling for lahar hazards at cascades volcanoes
Costa, J.E.
1997-01-01
The National Weather Service flood routing model DAMBRK is able to closely replicate field-documented stages of historic and prehistoric lahars from Mt. Rainier, Washington, and Mt. Hood, Oregon. Modeled times-of-travel of flow waves are generally consistent with documented lahar travel times from other volcanoes around the world. The model adequately replicates a range of lahars and debris flows, including the 230 million m3 Electron lahar from Mt. Rainier, as well as a 10 m3 debris flow generated in a large outdoor experimental flume. The model is used to simulate a hypothetical lahar with a volume of 50 million m3 down the East Fork Hood River from Mt. Hood, Oregon. Although a flow such as this is thought to be possible in the Hood River valley, no field evidence exists on which to base a hazards assessment. DAMBRK seems likely to be usable in many volcanic settings to estimate discharge, velocity, and inundation areas of lahars when input hydrographs and energy-loss coefficients can be reasonably estimated.
Brown, Nathanael J. K.; Gearhart, Jared Lee; Jones, Dean A.; Nozick, Linda Karen; Prince, Michael
2013-09-01
Currently, much of protection planning is conducted separately for each infrastructure and hazard. Limited funding requires a balance of expenditures between terrorism and natural hazards based on potential impacts. This report documents the results of a Laboratory Directed Research & Development (LDRD) project that created a modeling framework for investment planning in interdependent infrastructures focused on multiple hazards, including terrorism. To develop this framework, three modeling elements were integrated: natural hazards, terrorism, and interdependent infrastructures. For natural hazards, a methodology was created for specifying events consistent with regional hazards. For terrorism, we modeled the terrorists' actions based on assumptions regarding their knowledge, goals, and target identification strategy. For infrastructures, we focused on predicting post-event performance due to specific terrorist attacks and natural hazard events, tempered by appropriate infrastructure investments. We demonstrate the utility of this framework with various examples, including protection of electric power, roadway, and hospital networks.
Log-Logistic Proportional Odds Model for Analyzing Infant Mortality in Bangladesh.
Fatima-Tuz-Zahura, Most; Mohammad, Khandoker Akib; Bari, Wasimul
2017-01-01
A log-logistic parametric survival regression model has been used to find the potential determinants of infant mortality in Bangladesh, using data extracted from the Bangladesh Demographic and Health Survey, 2011. First, the nonparametric product-limit approach has been used to examine the unadjusted association between infant mortality and each covariate of interest. It is found that maternal education, membership of nongovernmental organizations, age of mother at birth, sex of child, size of child at birth, and place of delivery play an important role in infant mortality, after adjusting for relevant covariates.
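The product-limit (Kaplan-Meier) step used for the unadjusted associations can be sketched directly: at each observed event time t_i, the survival curve is multiplied by (1 - d_i/n_i), where d_i is the number of deaths among the n_i still at risk. The six toy observations below are invented.

```python
def kaplan_meier(times, events):
    """Product-limit estimator; returns [(t, S(t))] at each event time.
    events[i] = 1 for an observed death, 0 for a censored observation."""
    data = sorted(zip(times, events))
    n = len(data)
    s, out, at_risk = 1.0, [], n
    i = 0
    while i < n:
        t = data[i][0]
        d = c = 0
        while i < n and data[i][0] == t:   # group ties at the same time
            if data[i][1]:
                d += 1
            else:
                c += 1
            i += 1
        if d:
            s *= 1 - d / at_risk           # survival drops only at deaths
            out.append((t, s))
        at_risk -= d + c                   # censored subjects leave the risk set
    return out

km = kaplan_meier([1, 2, 2, 3, 4, 5], [1, 1, 0, 1, 0, 1])
```

Comparing such curves between covariate levels (e.g. maternal education groups) gives the unadjusted associations before the parametric log-logistic regression is fitted.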
Ghavipanjeh, F; Taylor, C J; Young, P C; Chotai, A
2001-01-01
This paper presents the results of an investigation into Proportional Integral Plus (PIP) control of nitrate in the second zone of an activated sludge benchmark. A data-based reduced-order model is used as the control model, identified using the Simplified Refined Instrumental Variable (SRIV) identification and estimation algorithm. The PIP control design is based on the Non-Minimum State Space (NMSS) form and State Variable Feedback (SVF) methodology. The PIP controller is tested against dynamic load disturbances and compared with the response of a well-tuned PI controller.
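The benchmark comparison against a PI controller can be illustrated with a minimal discrete-time PI loop on a first-order plant; the PIP design itself adds state variable feedback on the NMSS form, which is not reproduced here. The plant coefficients, gains, and setpoint are illustrative assumptions.

```python
def simulate_pi(setpoint=5.0, kp=0.8, ki=0.3, a=0.9, b=0.5, steps=200):
    """Discrete PI control of the toy plant y[k+1] = a*y[k] + b*u[k];
    the integral term drives steady-state error to zero."""
    y, integral = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - y
        integral += error
        u = kp * error + ki * integral   # PI control law
        y = a * y + b * u                # plant update
    return y

final = simulate_pi()  # converges to the setpoint for these stable gains
```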
Kendall, W.L.; Hines, J.E.; Nichols, J.D.
2003-01-01
Matrix population models are important tools for research and management of populations. Estimating the parameters of these models is an important step in applying them to real populations. Multistate capture-recapture methods have provided a useful means for estimating survival and parameters of transition between locations or life history states but have mostly relied on the assumption that the state occupied by each detected animal is known with certainty. Nevertheless, in some cases animals can be misclassified. Using multiple capture sessions within each period of interest, we developed a method that adjusts estimates of transition probabilities for bias due to misclassification. We applied this method to 10 years of sighting data for a population of Florida manatees (Trichechus manatus latirostris) in order to estimate the annual probability of transition from nonbreeding to breeding status. Some sighted females were unequivocally classified as breeders because they were clearly accompanied by a first-year calf. The remainder were classified, sometimes erroneously, as nonbreeders because an attendant first-year calf was not observed or was classified as more than one year old. We estimated a conditional breeding probability of 0.31 ± 0.04 (estimate ± 1 SE) when we ignored misclassification bias, and 0.61 ± 0.09 when we accounted for misclassification.
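The direction of the misclassification bias can be shown with a deliberately simplified toy correction: if a true breeder is correctly classified only with probability p (her calf must be seen and aged correctly), the naive breeding proportion underestimates truth by roughly that factor. The counts and p below are invented round numbers chosen only to echo the scale of the reported estimates; the paper's actual adjustment is embedded in a multistate capture-recapture likelihood, not this ratio.

```python
def adjusted_breeding_probability(n_classified_breeders, n_females, p_correct):
    """Toy correction: naive proportion divided by the probability that
    a true breeder is classified as one (calf detected and aged right)."""
    naive = n_classified_breeders / n_females
    return naive / p_correct

naive = 31 / 100
adjusted = adjusted_breeding_probability(31, 100, p_correct=0.51)
```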
FInal Report: First Principles Modeling of Mechanisms Underlying Scintillator Non-Proportionality
Aberg, Daniel; Sadigh, Babak; Zhou, Fei
2015-01-01
This final report presents work carried out on the project “First Principles Modeling of Mechanisms Underlying Scintillator Non-Proportionality” at Lawrence Livermore National Laboratory during 2013-2015. The scope of the work was to further the physical understanding of the microscopic mechanisms behind scintillator nonproportionality that effectively limits the achievable detector resolution. Thereby, crucial quantitative data for these processes as input to large-scale simulation codes has been provided. In particular, this project was divided into three tasks: (i) Quantum mechanical rates of non-radiative quenching, (ii) The thermodynamics of point defects and dopants, and (iii) Formation and migration of self-trapped polarons. The progress and results of each of these subtasks are detailed.
Numerical Modelling of Extreme Natural Hazards in the Russian Seas
NASA Astrophysics Data System (ADS)
Arkhipkin, Victor; Dobrolyubov, Sergey; Korablina, Anastasia; Myslenkov, Stanislav; Surkova, Galina
2017-04-01
Storm surges and extreme waves are severe natural sea hazards. Due to the almost complete lack of natural observations of these phenomena in the Russian seas (Caspian, Black, Azov, Baltic, White, Barents, Okhotsk, Kara), especially of their formation, development, and destruction, they have been studied using numerical simulation. To calculate the parameters of wind waves for the seas listed above, except the Barents Sea, the spectral model SWAN was applied. For the Barents and Kara seas we used the WAVEWATCH III model. Formation and development of storm surges were studied using the ADCIRC model. The input data for the models were bottom topography, wind, atmospheric pressure, and ice cover. In modeling surges in the White and Barents seas, tidal level fluctuations were included, calculated from 16 harmonic constants obtained from the global tidal atlas FES2004. Wind, atmospheric pressure, and ice cover were taken from the NCEP/NCAR reanalysis for the period from 1948 to 2010, and the NCEP/CFSR reanalysis for the period from 1979 to 2015. In modeling we used both regular and unstructured grids. The wave climate of the Caspian, Black, Azov, Baltic, and White seas was obtained. The extreme wave height possible once in 100 years has also been calculated. The statistics of storm surges for the White, Barents, and Azov seas were evaluated. The contribution of wind and atmospheric pressure to the formation of surges was estimated. A technique for climatic forecasting of the frequency of storm synoptic situations was developed and applied for every sea. The research was carried out with financial support of the RFBR (grant 16-08-00829).
Nikjoo, H; Uehara, S; Pinsky, L; Cucinotta, Francis A
2007-01-01
Space activities in Earth orbit or in deep space pose challenges to the estimation of risk factors for both astronauts and instrumentation. In space, risk from exposure to ionising radiation is one of the main factors limiting manned space exploration. Therefore, characterising the radiation environment in terms of the types and quantities of radiation to which astronauts are exposed is of critical importance in planning space missions. In this paper, calculations of the response of a tissue-equivalent proportional counter (TEPC) to protons and carbon ions are reported. The calculations have been carried out using Monte Carlo track-structure simulation codes for walled and wall-less TEPC counters. The model simulates nonhomogeneous tracks in the sensitive volume of the counter and accounts for direct and indirect events. Calculated frequency- and dose-averaged lineal energies for 0.3 MeV-1 GeV protons are presented and compared with experimental data. Quality factors (QF) were calculated using individual track histories. Additionally, absolute frequencies of energy depositions in cylindrical targets, 100 nm in height by 100 nm in diameter, randomly positioned and oriented in water irradiated with 1 Gy of protons of energy 0.3-100 MeV, are presented. The distributions show the clustering properties of protons of different energies in a 100 nm by 100 nm cylinder.
Nordahl, Helene; Rod, Naja Hulvej; Frederiksen, Birgitte Lidegaard; Andersen, Ingelise; Lange, Theis; Diderichsen, Finn; Prescott, Eva; Overvad, Kim; Osler, Merete
2013-02-01
Education-related gradients in coronary heart disease (CHD) and mediation by behavioral risk factors are plausible given previous research; however, this has not been comprehensively addressed in absolute measures. Questionnaire data on the health behavior of 69,513 participants (52% women) from seven Danish cohort studies were linked to registry data on education and incidence of CHD. Mediation by smoking, low physical activity, and body mass index (BMI) of the association between education and CHD was estimated by applying newly proposed methods for mediation based on the additive hazards model, and compared with results from the Cox proportional hazards model. Short (vs. long) education was associated with 277 (95% CI: 219, 336) additional cases of CHD per 100,000 person-years at risk among women, and 461 (95% CI: 368, 555) additional cases among men. Of these additional cases, 17 (95% CI: 12, 22) for women and 37 (95% CI: 28, 46) for men could be ascribed to the pathway through smoking, and a further 39 (95% CI: 30, 49) cases for women and 94 (95% CI: 79, 110) cases for men to the pathway through BMI. The effects of low physical activity were negligible. Using these mediation methods based on the additive hazards model, we quantified the absolute numbers of CHD cases that could be prevented by modifying smoking and BMI. This study confirms previous claims based on the Cox proportional hazards model that behavioral risk factors partially mediate the effect of education on CHD, and the results seem not to be particularly model dependent.
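The additive-versus-multiplicative distinction in this abstract can be illustrated with a toy calculation: for two groups with constant hazards, the additive hazards scale reports absolute cases per person-time (as above, "per 100,000 person-years"), while the Cox scale reports a dimensionless ratio. A minimal synthetic sketch (the rates, cohort size, and follow-up are assumed values, not the study's data):

```python
import random
# Toy contrast of multiplicative (hazard ratio) vs. additive (hazard
# difference) effect measures for two groups with constant hazards.
random.seed(7)

def events_and_person_time(rate, n, follow_up=10.0):
    """Simulate n exponential event times, censored at follow_up years."""
    events, pt = 0, 0.0
    for _ in range(n):
        t = random.expovariate(rate)
        if t <= follow_up:
            events += 1
            pt += t
        else:
            pt += follow_up
    return events, pt

ev_s, pt_s = events_and_person_time(0.005, 50_000)   # short education
ev_l, pt_l = events_and_person_time(0.002, 50_000)   # long education
h_s, h_l = ev_s / pt_s, ev_l / pt_l                  # hazard = events / person-time
print(f"hazard ratio:      {h_s / h_l:.2f}")
print(f"hazard difference: {(h_s - h_l) * 100_000:.0f} per 100,000 person-years")
```

The ratio is scale-free, whereas the difference directly counts excess cases, which is why the additive form lends itself to statements about preventable cases.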
Zhou, Yu-fei; He, Hong-shi; Bu, Ren-cang; Jin, Long-ru; Li, Xiu-zhen
2008-08-01
Using the spatially explicit landscape model LANDIS, the dynamics of the forest landscape in Youhao Forest Bureau in the Xiaoxinganling Mountains from 2001 to 2201 were studied under five planting proportions of coniferous and broadleaved species (100% broadleaved; 70% broadleaved and 30% coniferous; 50% broadleaved and 50% coniferous; 30% broadleaved and 70% coniferous; and 100% coniferous), taking the forest under natural regeneration after harvesting as the control. The results showed that afforestation effectively promoted the recovery of forest resources, but planting coniferous species alone would leave the area percentage of broadleaved species below that of the control; likewise, when only broadleaved species were planted, the area percentage of coniferous species was lower than in the control. The area percentage and aggregation index of Pinus koraiensis and Larix gmelinii increased with increasing planting proportion of coniferous species, and those of Quercus mongolica increased with increasing planting proportion of broadleaved species. Afforestation decreased the area percentage of Betula platyphylla, but had no significant effect on its aggregation index. Different afforestation strategies not only altered species area percentages but also affected species spatial patterns.
Landslide-Generated Tsunami Model for Quick Hazard Assessment
NASA Astrophysics Data System (ADS)
Franz, M.; Rudaz, B.; Locat, J.; Jaboyedoff, M.; Podladchikov, Y.
2015-12-01
Alpine regions are likely to be at risk from landslide-induced tsunamis, because of the proximity between lakes and potential slope instabilities and because of the concentration of population in valleys and on lake shores. In particular, dam lakes are often surrounded by steep slopes and frequently affect the stability of their banks. In order to assess this phenomenon comprehensively, together with the induced risks, we have developed a 2.5D numerical model that simulates the propagation of the landslide, the generation and propagation of the wave, and finally its spread on the shores or the associated downstream flow. The process is carried out in three steps. First, the geometry of the sliding mass is constructed using the Sloping Local Base Level (SLBL) concept. Second, the propagation of this volume is computed using a model based on viscous flow equations. Finally, the wave generation and propagation are simulated using the shallow water equations stabilized by the Lax-Friedrichs scheme; the transition between wet and dry bed is handled by combining the two latter sets of equations. The proper behavior of the model is demonstrated by (1) numerical tests from Toro (2001), and (2) comparison with a real event for which the horizontal run-up distance is known (the Nicolet landslide, Quebec, Canada). The model is of particular interest for its ability to quickly produce the 2.5D geometric model of the landslide, the tsunami simulation and, consequently, the hazard assessment.
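The wave-propagation step described above can be sketched with a minimal 1-D shallow-water solver using the Lax-Friedrichs scheme. The grid, time step, and dam-break initial condition below are illustrative assumptions, not the model's actual setup:

```python
# Minimal 1-D shallow-water equations with the Lax-Friedrichs scheme.
# State per cell: water depth h (m) and momentum hu (m^2/s).
g = 9.81

def lax_friedrichs_step(h, hu, dx, dt):
    """One Lax-Friedrichs update; boundary cells are held fixed."""
    def flux(hi, hui):
        u = hui / hi if hi > 1e-12 else 0.0
        return hui, hui * u + 0.5 * g * hi * hi   # mass and momentum fluxes
    new_h, new_hu = h[:], hu[:]
    for i in range(1, len(h) - 1):
        fL = flux(h[i - 1], hu[i - 1])
        fR = flux(h[i + 1], hu[i + 1])
        new_h[i]  = 0.5 * (h[i - 1] + h[i + 1])   - dt / (2 * dx) * (fR[0] - fL[0])
        new_hu[i] = 0.5 * (hu[i - 1] + hu[i + 1]) - dt / (2 * dx) * (fR[1] - fL[1])
    return new_h, new_hu

# Dam break: deep water on the left, shallow on the right.
n, dx, dt = 200, 1.0, 0.02            # CFL number ~ 0.09, comfortably stable
h  = [2.0 if i < n // 2 else 1.0 for i in range(n)]
hu = [0.0] * n
for _ in range(100):
    h, hu = lax_friedrichs_step(h, hu, dx, dt)
print(f"depth range after 2 s: {min(h):.2f}-{max(h):.2f} m")
```

Lax-Friedrichs is diffusive but monotone, which is the "stabilized" behavior the abstract refers to: no spurious new extrema appear at the dam-break front.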
Research collaboration, hazard modeling and dissemination in volcanology with Vhub
NASA Astrophysics Data System (ADS)
Palma Lizana, J. L.; Valentine, G. A.
2011-12-01
Vhub (online at vhub.org) is a cyberinfrastructure for collaboration in volcanology research, education, and outreach. One of the core objectives of this project is to accelerate the transfer of research tools to organizations and stakeholders charged with volcano hazard and risk mitigation (such as observatories). Vhub offers a clearinghouse for computational models of volcanic processes and data analysis, documentation of those models, and capabilities for online collaborative groups focused on issues such as code development, configuration management, benchmarking, and validation. A subset of simulations is already available for online execution, eliminating the need to download and compile locally. In addition, Vhub is a platform for sharing presentations and other educational material in a variety of media formats, which are useful in teaching university-level volcanology. VHub also has wikis, blogs and group functions around specific topics to encourage collaboration and discussion. In this presentation we provide examples of the vhub capabilities, including: (1) tephra dispersion and block-and-ash flow models; (2) shared educational materials; (3) online collaborative environment for different types of research, including field-based studies and plume dispersal modeling; (4) workshops. Future goals include implementation of middleware to allow access to data and databases that are stored and maintained at various institutions around the world. All of these capabilities can be exercised with a user-defined level of privacy, ranging from completely private (only shared and visible to specified people) to completely public. The volcanological community is encouraged to use the resources of vhub and also to contribute models, datasets, and other items that authors would like to disseminate. The project is funded by the US National Science Foundation and includes a core development team at University at Buffalo, Michigan Technological University, and University
Methodology Using MELCOR Code to Model Proposed Hazard Scenario
Gavin Hawkley
2010-07-01
This study demonstrates a methodology for using the MELCOR code to model a proposed hazard scenario within a building containing radioactive powder, and the subsequent evaluation of the leak path factor (LPF), i.e., the fraction of respirable material that escapes the facility into the outside environment, implicit in the scenario. The LPF evaluation analyzes the basis and applicability of an assumed standard multiplication of 0.5 × 0.5 (in which 0.5 represents the fraction of material assumed to leave one area and enter another) for calculating an LPF value. The outside release depends upon the ventilation/filtration system, both filtered and unfiltered, and upon other pathways from the building, such as doorways (both open and closed). This study shows how the multiple LPFs from the building interior can be evaluated in a combinatory process in which a total LPF is calculated, thus addressing the assumed multiplication and allowing for the designation and assessment of a respirable source term (ST) for later consequence analysis, in which the propagation of material released into the atmosphere can be modeled, the dose received by a downwind receptor can be estimated, and the distance can be adjusted to maintain such exposures as low as reasonably achievable (ALARA). This study also briefly addresses particle characteristics that affect atmospheric dispersion, and compares this dispersion with the LPF methodology.
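As a minimal illustration of the combinatory idea behind the 0.5 × 0.5 assumption, a path's LPF can be taken as the product of the pass fractions of the barriers along it, summed over parallel pathways. The pathway structure and every numerical value below are hypothetical, chosen only to show the arithmetic, and are not values from the study:

```python
# Hypothetical pathways from the room to the environment. The first number on
# each path is the fraction of airborne material assumed to take that path;
# later numbers are pass fractions of barriers in series (e.g. a filter).
pathways = {
    "ventilation, filtered":   [0.80, 0.001],  # duct fraction x filter penetration
    "ventilation, unfiltered": [0.05, 1.0],    # bypass path, no filter
    "doorway leakage":         [0.05],
}

def path_lpf(fractions):
    """LPF of one path: the product of its pass fractions."""
    lpf = 1.0
    for f in fractions:
        lpf *= f
    return lpf

total_lpf = sum(path_lpf(f) for f in pathways.values())
source_term_g = 100.0 * total_lpf   # respirable release for a 100 g source
print(f"total LPF = {total_lpf:.4f}; release = {source_term_g:.2f} g")
```

The point of the combinatory evaluation is visible here: a single blanket 0.5 × 0.5 factor can be much more or less conservative than the sum over the actual filtered and unfiltered paths.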
Subscale Fast Cookoff Testing and Modeling for the Hazard Assessment of Large Rocket Motors
2001-03-01
(Report documentation page fragments; only partial text is recoverable.) Subject terms: hazards, fire hazards, environmental tests, scale models, model tests, solid propellant rocket engines, test and evaluation, test methods, safety. Abstract fragments: "...and other miscellaneous full-scale test results with solid rocket motors containing high-energy propellants..."; "...similar failure modes. The possible correlation between propellant properties and motor system hazards, especially for high-energy (small critical
NASA Astrophysics Data System (ADS)
Zhu, Wenlong; Ma, Shoufeng; Tian, Junfang; Li, Geng
2016-11-01
Travelers' route adjustment behaviors in a congested road traffic network can be viewed as a dynamic game among travelers. Proportional-Switch Adjustment Process (PSAP) models have been extensively investigated to characterize travelers' route choice behaviors, since PSAP has a concise structure and an intuitive behavior rule. Unfortunately, most such models have limitations, e.g., the flow over-adjustment problem of the discrete PSAP model and the absolute-cost-difference route adjustment problem. This paper proposes a relative-Proportion-based Route Adjustment Process (rePRAP) that maintains the advantages of PSAP while overcoming these limitations. The rePRAP describes the situation in which travelers on a higher-cost route switch to alternatives with lower cost at a rate that depends solely on the relative cost differences between the higher-cost route and its alternatives. It is verified to be consistent with the principle of the rational behavior adjustment process. The equivalence among user equilibrium (UE), the stationary path flow pattern and the stationary link flow pattern is established, which can be applied to judge whether a given network traffic flow has reached UE by detecting whether the link flow pattern is stationary. The stability theorem is proved by the Lyapunov function approach. A simple example is tested to demonstrate the effectiveness of the rePRAP model.
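The proportional-switching idea can be sketched as a day-to-day dynamic on a toy two-route network. The linear cost functions and this particular switching rule are illustrative assumptions, not the paper's exact rePRAP specification, though the rule below does use relative rather than absolute cost differences:

```python
# Toy day-to-day route-switching dynamic on a two-route network with fixed
# total demand. Travelers leave the costlier route in proportion to the
# relative cost gap; the fixed point is the user equilibrium (equal costs).
def cost_a(f):            # travel cost on route A at flow f
    return 10.0 + 2.0 * f

def cost_b(f):            # travel cost on route B at flow f
    return 20.0 + 1.0 * f

demand, step = 30.0, 0.3
fa = demand               # day 0: all travelers on route A
for _ in range(200):
    ca, cb = cost_a(fa), cost_b(demand - fa)
    if ca > cb:           # a proportion of A-travelers switch, scaled by
        fa -= step * fa * (ca - cb) / ca          # the relative cost gap
    else:
        fa += step * (demand - fa) * (cb - ca) / cb

fb = demand - fa
print(f"flows A/B: {fa:.2f}/{fb:.2f}; costs: {cost_a(fa):.2f} vs {cost_b(fb):.2f}")
```

At the fixed point the two route costs are equal, which is exactly the user-equilibrium condition the paper characterizes via stationary flow patterns.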
Schiller, Steven R.; Warren, Mashuri L.; Auslander, David M.
1980-11-01
In this paper, common control strategies used to regulate the flow of liquid through flat-plate solar collectors are discussed and evaluated using a dynamic collector model. Performance of all strategies is compared using different set points, flow rates, insolation levels and patterns, and ambient temperature conditions. The unique characteristic of the dynamic collector model is that it includes the effect of collector capacitance. Short term temperature response and the energy-storage capability of collector capacitance are shown to play significant roles in comparing on/off and proportional controllers. Inclusion of these effects has produced considerably more realistic simulations than any generated by steady-state models. Finally, simulations indicate relative advantages and disadvantages of both types of controllers, conditions under which each performs better, and the importance of pump cycling and controller set points on total energy collection.
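The on/off-versus-proportional comparison can be sketched with a lumped-capacitance (single-node) collector model, which is the simplest way to include the collector-capacitance effect the abstract emphasizes. All parameter values below are illustrative assumptions, not those of the paper:

```python
# Toy lumped-capacitance collector comparing an on/off hysteresis controller
# with a proportional flow controller.
C, UA = 8000.0, 15.0        # collector capacitance (J/K), loss coefficient (W/K)
gain = 0.75 * 2.0 * 600.0   # absorbed solar power (W): efficiency x area x G
T_amb, T_in = 20.0, 40.0    # ambient and collector-inlet temperatures (C)
mcp = 100.0                 # heat-capacity rate of the flow at full pump speed (W/K)
dt = 1.0                    # time step (s)

def simulate(controller, hours=2.0):
    T, energy, starts, was_on = T_amb, 0.0, 0, False
    for _ in range(int(hours * 3600 / dt)):
        frac = controller(T)                         # pump flow fraction, 0..1
        starts += frac > 0.0 and not was_on
        was_on = frac > 0.0
        extract = frac * mcp * max(T - T_in, 0.0)    # useful power to storage (W)
        T += dt * (gain - UA * (T - T_amb) - extract) / C
        energy += extract * dt
    return energy / 3.6e6, starts                    # kWh collected, pump starts

def on_off(T, state={"on": False}):    # hysteresis band: on >= 50 C, off <= 46 C
    if T >= 50.0:
        state["on"] = True
    elif T <= 46.0:
        state["on"] = False
    return 1.0 if state["on"] else 0.0

def proportional(T):                   # flow ramps linearly over 44-54 C
    return min(1.0, max(0.0, (T - 44.0) / 10.0))

e_onoff, starts_onoff = simulate(on_off)
e_prop, starts_prop = simulate(proportional)
print(f"on/off:       {e_onoff:.2f} kWh, {starts_onoff} pump starts")
print(f"proportional: {e_prop:.2f} kWh, {starts_prop} pump starts")
```

Because the capacitance stores heat, the on/off controller cycles repeatedly around its deadband while the proportional controller settles at a steady partial flow, which is the pump-cycling effect the paper examines.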
A mental models approach to exploring perceptions of hazardous processes
Bostrom, A.H.H.
1990-01-01
Based on mental models theory, a decision-analytic methodology is developed to elicit and represent perceptions of hazardous processes. An application to indoor radon illustrates the methodology. Open-ended interviews were used to elicit non-experts' perceptions of indoor radon, with explicit prompts for knowledge about health effects, exposure processes, and mitigation. Subjects then sorted photographs into radon-related and unrelated piles, explaining their rationale aloud as they sorted. Subjects demonstrated a small body of correct but often unspecific knowledge about exposure and effects processes. Most did not mention radon-decay processes, and seemed to rely on general knowledge about gases, radioactivity, or pollution to make inferences about radon. Some held misconceptions about contamination and health effects resulting from exposure to radon. In two experiments, subjects reading brochures designed according to the author's guidelines outperformed subjects reading a brochure distributed by the EPA on a diagnostic test, and did at least as well on an independently designed quiz. In both experiments, subjects who read any one of the brochures had more complete and correct knowledge about indoor radon than subjects who did not, whose knowledge resembled the radon-interview subjects'.
Evaluating the hazard from Siding Spring dust: Models and predictions
NASA Astrophysics Data System (ADS)
Christou, A.
2014-12-01
Long-period comet C/2013 A1 (Siding Spring) will pass at a distance of ~140 thousand km (9e-4 AU), about a third of a lunar distance, from the centre of Mars, closer to this planet than any known comet has come to the Earth since records began. Closest approach is expected to occur at 18:30 UT on the 19th October. This provides an opportunity for a "free" flyby of a different type of comet than those investigated by spacecraft so far, including comet 67P/Churyumov-Gerasimenko currently under scrutiny by the Rosetta spacecraft. At the same time, the passage of the comet through Martian space will create the opportunity to study the reaction of the planet's upper atmosphere to a known natural perturbation. The flip side of the coin is the risk to Mars-orbiting assets, both existing (NASA's Mars Odyssey & Mars Reconnaissance Orbiter and ESA's Mars Express) and in transit (NASA's MAVEN and ISRO's Mangalyaan), from high-speed cometary dust potentially impacting spacecraft surfaces. Much work has already gone into assessing this hazard and devising mitigating measures in the precious little warning time available to characterise this object before the Mars encounter. In this presentation, we will provide an overview of how the meteoroid stream and comet coma dust impact models have evolved since the comet's discovery and discuss lessons learned should similar circumstances arise in the future.
Hidden Markov models for estimating animal mortality from anthropogenic hazards
Carcass searches are a common method for studying the risk of anthropogenic hazards to wildlife, including non-target poisoning and collisions with anthropogenic structures. Typically, numbers of carcasses found must be corrected for scavenging rates and imperfect detection. ...
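The correction described above can be sketched as a simple persistence/detection chain, a two-state hidden Markov view in which a carcass remains "present" until scavenged and each search detects it imperfectly. The persistence and detection probabilities and the search schedule below are illustrative assumptions, not the paper's model:

```python
# Probability that a deposited carcass is ever found, given daily persistence
# (survival against scavenging) and per-search detection probability, then a
# Horvitz-Thompson-style correction of the raw count.
def prob_found(persist=0.8, detect=0.6, search_days=(1, 4, 7, 10)):
    p_present = 1.0      # carcass deposited just after day 0
    p_found = 0.0
    day = 0
    for s_day in search_days:
        p_present *= persist ** (s_day - day)   # survives scavenging until search
        day = s_day
        p_found += p_present * detect           # found on this search
        p_present *= 1.0 - detect               # still present but missed
    return p_found

p = prob_found()
carcasses_counted = 12
estimated_mortality = carcasses_counted / p     # corrected mortality estimate
print(f"P(found) = {p:.3f}; estimated deaths = {estimated_mortality:.1f}")
```

Here roughly 40% of carcasses are never found, so the raw count of 12 corresponds to an estimated mortality of about 20.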
Modelling Inland Flood Events for Hazard Maps in Taiwan
NASA Astrophysics Data System (ADS)
Ghosh, S.; Nzerem, K.; Sassi, M.; Hilberts, A.; Assteerawatt, A.; Tillmanns, S.; Mathur, P.; Mitas, C.; Rafique, F.
2015-12-01
Taiwan experiences significant inland flooding, driven by torrential rainfall from plum-rain storms and typhoons during summer and fall. Over the last 13 to 16 years of data, about 3,000 buildings were damaged by such floods annually, with losses of US$0.41 billion (Water Resources Agency). This long, narrow island nation with mostly hilly/mountainous topography is located in the tropical-subtropical zone, with an annual average typhoon-hit frequency of 3-4 (Central Weather Bureau) and annual average precipitation of 2502 mm (WRA), 2.5 times the world average. Spatial and temporal distributions of countrywide precipitation are uneven, with very high local extreme rainfall intensities. Annual average precipitation is 3000-5000 mm in the mountainous regions, 78% of which falls in May-October, and the 1-hour to 3-day maximum rainfalls are about 85 to 93% of the world records (WRA). Rivers in Taiwan are short, with small upstream areas and high watershed runoff coefficients. These rivers have among the steepest slopes, the shortest response times with rapid flows, and the largest peak flows and specific flood peak discharges (WRA) in the world. RMS has recently developed a countrywide inland flood model for Taiwan, producing hazard return-period maps at 1 arcsec grid resolution. These can be the basis for evaluating and managing flood risk, its economic impacts, and insured flood losses. The model is initiated with sub-daily historical meteorological forcings and calibrated to daily discharge observations at about 50 river gauges over the period 2003-2013. Simulations of hydrologic processes, via rainfall-runoff and routing models, are subsequently performed based on a 10,000-year set of stochastic forcing. The rainfall-runoff model is a physically based, continuous, semi-distributed model for catchment hydrology. The 1-D wave-propagation hydraulic model considers catchment runoff in routing and describes large-scale transport processes along the river. It also accounts for reservoir storage
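The rainfall-runoff step can be sketched with the simplest possible catchment model, a single linear reservoir. The storage coefficient and rainfall series below are illustrative assumptions, a minimal stand-in for the semi-distributed model described above, not the RMS model itself:

```python
# Single linear reservoir: outflow is proportional to storage, Q = S / k.
# Rainfall drives storage up; the hydrograph lags and attenuates the rain.
k = 20.0                     # storage coefficient (hours)
dt = 1.0                     # time step (hours)
rain = [0.0] * 5 + [12.0, 30.0, 18.0, 6.0] + [0.0] * 40  # effective rain, mm/h

storage, hydrograph = 0.0, []
for p in rain:
    outflow = storage / k            # runoff at the start of the step (mm/h)
    storage += (p - outflow) * dt    # water balance update
    hydrograph.append(outflow)

peak = max(hydrograph)
print(f"peak runoff {peak:.2f} mm/h at hour {hydrograph.index(peak)}")
```

The peak arrives after the rainfall peak and is strongly damped, the lag-and-attenuation behavior that a calibrated storage coefficient controls; short, steep Taiwanese catchments correspond to small k and much flashier hydrographs.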
Conceptual geoinformation model of natural hazards risk assessment
NASA Astrophysics Data System (ADS)
Kulygin, Valerii
2016-04-01
Natural hazards are a major threat to safe interactions between nature and society. The assessment of natural hazard impacts and their consequences is important in spatial planning and resource management. Today there is a challenge to advance our understanding of how socio-economic and climate changes will affect the frequency and magnitude of hydro-meteorological hazards and associated risks. However, the impacts of different types of natural hazards on various marine and coastal economic activities are not of the same type. In this study, a conceptual geomodel of risk assessment is presented to highlight the differentiation by type of economic activity in extreme-event risk assessment. The marine and coastal ecosystems are considered as the objects of management, on the one hand, and as the place of natural hazards' origin, on the other. One of the key elements in describing such systems is the spatial characterization of their components. Assessment of ecosystem state is based on ecosystem indicators (indexes), which are used to identify changes over time. The scenario approach is utilized to account for spatio-temporal dynamics and uncertainty factors. Two types of scenarios are considered: scenarios of the use of ecosystem services by economic activities, and scenarios of extreme events and related hazards. The reported study was funded by RFBR, according to research project No. 16-35-60043 mol_a_dk.
Analysis of two-phase sampling data with semiparametric additive hazards models.
Sun, Yanqing; Qian, Xiyuan; Shou, Qiong; Gilbert, Peter B
2017-07-01
Under the case-cohort design introduced by Prentice (Biometrika 73:1-11, 1986), the covariate histories are ascertained only for the subjects who experience the event of interest (i.e., the cases) during the follow-up period and for a relatively small random sample from the original cohort (i.e., the subcohort). The case-cohort design has been widely used in clinical and epidemiological studies to assess the effects of covariates on failure times. Most statistical methods developed for the case-cohort design use the proportional hazards model, and few methods allow for time-varying regression coefficients. In addition, most methods disregard data from subjects outside of the subcohort, which can result in inefficient inference. Addressing these issues, this paper proposes an estimation procedure for the semiparametric additive hazards model with case-cohort/two-phase sampling data, allowing the covariates of interest to be missing for cases as well as for non-cases. A more flexible form of the additive model is considered that allows the effects of some covariates to be time varying while specifying the effects of others to be constant. An augmented inverse probability weighted estimation procedure is proposed. The proposed method allows utilizing the auxiliary information that correlates with the phase-two covariates to improve efficiency. The asymptotic properties of the proposed estimators are established. An extensive simulation study shows that the augmented inverse probability weighted estimation is more efficient than the widely adopted inverse probability weighted complete-case estimation method. The method is applied to analyze data from a preventive HIV vaccine efficacy trial.
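The inverse-probability-weighting idea underlying two-phase estimation can be illustrated with a toy mean estimate rather than a hazards regression: a covariate is observed for all cases but only a 10% random sample of non-cases, and weighting by the inverse sampling probability removes the bias of the naive complete-case analysis. All numbers below are illustrative assumptions:

```python
import random
# Horvitz-Thompson weighting under two-phase sampling. Cases are always kept
# at phase two; non-cases are subsampled with probability 0.1. Because
# case status depends on x, the unweighted phase-two mean of x is biased.
random.seed(3)
N = 20_000
x = [random.gauss(0.0, 1.0) for _ in range(N)]
case = [random.random() < (0.15 if xi > 0 else 0.05) for xi in x]

sampled = []                        # phase-two sample: (x, sampling probability)
for xi, ci in zip(x, case):
    pi = 1.0 if ci else 0.1
    if ci or random.random() < 0.1:
        sampled.append((xi, pi))

naive = sum(xi for xi, _ in sampled) / len(sampled)
ipw = sum(xi / pi for xi, pi in sampled) / N        # Horvitz-Thompson mean
print(f"naive complete-case mean: {naive:+.3f} (true mean 0)")
print(f"IPW mean:                 {ipw:+.3f}")
```

The augmentation step in the paper improves on plain weighting by also exploiting auxiliary information, but the weighting shown here is the common core.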
Standards and Guidelines for Numerical Models for Tsunami Hazard Mitigation
NASA Astrophysics Data System (ADS)
Titov, V.; Gonzalez, F.; Kanoglu, U.; Yalciner, A.; Synolakis, C. E.
2006-12-01
An increased number of nations around the world need to develop tsunami mitigation plans, which invariably involve inundation maps for warning guidance and evacuation planning. There is the risk that inundation maps may be produced with older or untested methodology, as there are currently no standards for modeling tools. In the aftermath of the 2004 megatsunami, some models were used to model inundation for Cascadia events with results much larger than sediment records and existing state-of-the-art studies suggest, leading to confusion among emergency managers. Incorrectly assessing tsunami impact is hazardous, as recent events in 2006 in Tonga, Kythira, Greece and Central Java have suggested (Synolakis and Bernard, 2006). To calculate tsunami currents, forces and runup on coastal structures, and inundation of coastlines, one must calculate numerically the evolution of the tsunami wave from the deep ocean to its target site. No matter what the numerical model, validation (the process of ensuring that the model solves the parent equations of motion accurately) and verification (the process of ensuring that the model used represents geophysical reality appropriately) are both essential. Validation ensures that the model performs well in a wide range of circumstances and is accomplished through comparison with analytical solutions. Verification ensures that the computational code performs well over a range of geophysical problems. A few analytic solutions have been validated themselves with laboratory data. Even fewer existing numerical models have been both validated with the analytical solutions and verified with both laboratory measurements and field measurements, thus establishing a gold standard for numerical codes for inundation mapping. While there is in principle no absolute certainty that a numerical code that has performed well in all the benchmark tests will also produce correct inundation predictions with any given source motions, validated codes
Modelling the costs of natural hazards in games
NASA Astrophysics Data System (ADS)
Bostenaru-Dan, M.
2012-04-01
Games about the city are looked for today, including a development at the University of Torino called SimTorino, which simulates the development of the city in the next 20 years. The connection to another game genre besides video games, board games, will be investigated, since there are games on the construction and reconstruction of a cathedral, its tower and a bridge in an urban environment of the Middle Ages, based on the two novels of Ken Follett, "Pillars of the Earth" and "World Without End", and also more recent games, such as "Urban Sprawl" or the Romanian game "Habitat", dealing with the man-made hazard of demolition. A review of these games will be provided based on first-hand playing experience. In games like "World Without End" or "Pillars of the Earth", just as in the recently popular games of Zynga on social networks, construction management is done by "building" an item out of stylised materials, such as "stone", "sand" or more specific ones such as "nail". Such an approach could also be used for retrofitting buildings for earthquakes, in the sense of "upgrade", not just for extension as is currently the case in games, and this is what our research is about. "World Without End" includes a natural disaster not much analysed today but which was judged by the author as the worst in human history: the Black Death. The Black Death has effects and costs as well, modelled not only through action cards but also on the built environment, through buildings remaining empty. On the other hand, games such as "Habitat" rely on role playing, which has recently been recognised as a way to bring game theory to decision making through the so-called contribution of drama, a way to solve conflicts through balancing instead of weighting, and thus related to the Analytic Hierarchy Process. The presentation also aims to give hints on how to design a game for the problem of earthquake retrofit, translating the aims of the actors in such a process into role playing.
Games are also employed in teaching of urban
Expert elicitation for a national-level volcano hazard model
NASA Astrophysics Data System (ADS)
Bebbington, Mark; Stirling, Mark; Cronin, Shane; Wang, Ting; Jolly, Gill
2016-04-01
The quantification of volcanic hazard at national level is a vital pre-requisite to placing volcanic risk on a platform that permits meaningful comparison with other hazards such as earthquakes. New Zealand has up to a dozen dangerous volcanoes, with the usual mixed degrees of knowledge concerning their temporal and spatial eruptive history. Information on the 'size' of the eruptions, be it in terms of VEI, volume or duration, is sketchy at best. These limitations and the need for a uniform approach lend themselves to a subjective hazard analysis via expert elicitation. Approximately 20 New Zealand volcanologists provided estimates for the size of the next eruption from each volcano and, conditional on this, its location, timing and duration. Opinions were likewise elicited from a control group of statisticians, seismologists and (geo)chemists, all of whom had at least heard the term 'volcano'. The opinions were combined via the Cooke classical method. We will report on the preliminary results from the exercise.
Doubly Robust Additive Hazards Models to Estimate Effects of a Continuous Exposure on Survival.
Wang, Yan; Lee, Mihye; Liu, Pengfei; Shi, Liuhua; Yu, Zhi; Awad, Yara Abu; Zanobetti, Antonella; Schwartz, Joel D
2017-08-19
The estimated effect of an exposure on survival can be biased when the regression model is misspecified. The hazard difference is easier to use in risk assessment than the hazard ratio and has a clearer interpretation in the assessment of effect modification. We proposed two doubly robust additive hazards models to estimate the causal hazard difference of a continuous exposure on survival. The first model is an inverse probability-weighted additive hazards regression. The second is an extension of the doubly robust estimator for binary exposures, obtained by categorizing the continuous exposure. We compared these with the marginal structural model and with outcome regression under correct and incorrect model specifications using simulations. We applied the doubly robust additive hazards models to estimate the hazard difference of long-term exposure to PM2.5 on survival using a large cohort of 13 million older adults residing in seven states of the southeastern US. We demonstrated in theory and simulation studies that the proposed approaches are doubly robust. We found that each 1 μg/m³ increase in annual PM2.5 exposure is associated with a causal hazard difference in mortality of 8.0 × 10 (95% confidence interval 7.4 × 10, 8.7 × 10), which was modified by age, medical history, socio-economic status, and urbanicity. The overall hazard difference translates to approximately 5.5 (5.1, 6.0) thousand deaths per year in the study population. The proposed approaches improve the robustness of the additive hazards model and produce a novel additive causal estimate of PM2.5 on survival, together with several additive effect modifications, including social inequality.
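The doubly robust (augmented IPW) principle the paper builds on can be sketched for the simpler problem of a mean with data missing at random: the AIPW estimator combines an outcome model with inverse-probability weights and remains consistent if either one is correct. Below, the outcome model is deliberately misspecified and the correct weights rescue the estimate; all numbers are illustrative assumptions, not the paper's setting:

```python
import random
# Augmented IPW (doubly robust) estimate of a mean with data missing at
# random. The outcome model is a constant that wrongly ignores z; the known
# observation probabilities correct the resulting bias.
random.seed(11)
N = 50_000
data = []
for _ in range(N):
    z = random.gauss(0.0, 1.0)                    # covariate
    y = 2.0 + 1.5 * z + random.gauss(0.0, 1.0)    # outcome; true mean is 2.0
    p = 0.8 if z > 0 else 0.4                     # known P(observed | z)
    data.append((y, random.random() < p, p))

m = (sum(y for y, r, _ in data if r) /
     sum(1 for _, r, _ in data if r))             # misspecified outcome model
aipw = sum(m + r * (y - m) / p for y, r, p in data) / N
print(f"complete-case mean: {m:.3f}   AIPW: {aipw:.3f}   (truth 2.0)")
```

The complete-case mean is biased upward because observation favors high z, while the AIPW estimate recovers the truth; the symmetric case, a correct outcome model with wrong weights, also works, which is the "doubly robust" property.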
The influence of mapped hazards on risk beliefs: a proximity-based modeling approach.
Severtson, Dolores J; Burt, James E
2012-02-01
Interview findings suggest perceived proximity to mapped hazards influences risk beliefs when people view environmental hazard maps. For dot maps, four attributes of mapped hazards influenced beliefs: hazard value, proximity, prevalence, and dot patterns. In order to quantify the collective influence of these attributes for viewers' perceived or actual map locations, we present a model to estimate proximity-based hazard or risk (PBH) and share study results that indicate how modeled PBH and map attributes influenced risk beliefs. The randomized survey study among 447 university students assessed risk beliefs for 24 dot maps that systematically varied by the four attributes. Maps depicted water test results for a fictitious hazardous substance in private residential wells and included a designated "you live here" location. Of the nine variables that assessed risk beliefs, the numerical susceptibility variable was most consistently and strongly related to map attributes and PBH. Hazard value, location in or out of a clustered dot pattern, and distance had the largest effects on susceptibility. Sometimes, hazard value interacted with other attributes, for example, distance had stronger effects on susceptibility for larger than smaller hazard values. For all combined maps, PBH explained about the same amount of variance in susceptibility as did attributes. Modeled PBH may have utility for studying the influence of proximity to mapped hazards on risk beliefs, protective behavior, and other dependent variables. Further work is needed to examine these influences for more realistic maps and representative study samples.
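The abstract does not specify the PBH formula, so as a hedged sketch only: one simple possibility consistent with the attributes listed (hazard value, proximity, prevalence, clustering) is an inverse-distance-weighted sum of dot hazard values around the viewer's location. The functional form, clamp, and coordinates below are all assumptions for illustration:

```python
import math
# Hypothetical proximity-based hazard index: each mapped dot contributes its
# hazard value weighted by the inverse of its distance to the viewer.
def pbh(you, dots):
    """you: (x, y); dots: list of (x, y, hazard_value)."""
    total = 0.0
    for x, y, value in dots:
        d = max(math.hypot(x - you[0], y - you[1]), 0.5)  # clamp tiny distances
        total += value / d
    return total

home = (0.0, 0.0)
near_cluster = [(1.0, 1.0, 40), (1.5, 1.0, 40), (1.0, 1.5, 40)]
dispersed    = [(6.0, 1.0, 40), (1.0, 7.0, 40), (8.0, 8.0, 40)]
near, far = pbh(home, near_cluster), pbh(home, dispersed)
print(f"PBH near cluster: {near:.1f}; PBH dispersed: {far:.1f}")
```

Such an index reproduces the qualitative pattern reported in the study: the same hazard values yield a much higher score when the dots cluster close to the "you live here" location.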
NASA Astrophysics Data System (ADS)
Costa, Antonio
2016-04-01
Volcanic hazards may have destructive effects on the economy, transport, and natural environments at both local and regional scale. Hazardous phenomena include pyroclastic density currents, tephra fall, gas emissions, lava flows, debris flows and avalanches, and lahars. Volcanic hazards assessment is based on available information to characterize potential volcanic sources in the region of interest and to determine whether specific volcanic phenomena might reach a given site. Volcanic hazards assessment is focussed on estimating the distances that volcanic phenomena could travel from potential sources and their intensity at the considered site. Epistemic and aleatory uncertainties strongly affect the resulting hazards assessment. Within the context of critical infrastructures, volcanic eruptions are rare natural events that can create severe hazards. In addition to being rare events, evidence of many past volcanic eruptions is poorly preserved in the geologic record. The models used for describing the impact of volcanic phenomena generally represent a range of model complexities, from simplified physics-based conceptual models to highly coupled thermo-fluid-dynamical approaches. Modelling approaches represent a hierarchy of complexity, which reflects increasing requirements for well-characterized data in order to produce a broader range of output information. In selecting models for the hazard analysis related to a specific phenomenon, the questions that need to be answered by the models must be carefully considered. Independently of the model, the final hazards assessment strongly depends on input derived from detailed volcanological investigations, such as mapping and stratigraphic correlations. For each phenomenon, an overview of currently available approaches for the evaluation of future hazards will be presented with the aim to provide a foundation for future work in developing an international consensus on volcanic hazards assessment methods.
Kraus, N.N.; Slovic, P.
1988-09-01
Previous studies of risk perception have typically focused on the mean judgments of a group of people regarding the riskiness (or safety) of a diverse set of hazardous activities, substances, and technologies. This paper reports the results of two studies that take a different path. Study 1 investigated whether models within a single technological domain were similar to previous models based on group means and diverse hazards. Study 2 created a group taxonomy of perceived risk for only one technological domain, railroads, and examined whether the structure of that taxonomy corresponded with taxonomies derived from prior studies of diverse hazards. Results from Study 1 indicated that the importance of various risk characteristics in determining perceived risk differed across individuals and across hazards, but not so much as to invalidate the results of earlier studies based on group means and diverse hazards. In Study 2, the detailed analysis of railroad hazards produced a structure that had both important similarities to, and dissimilarities from, the structure obtained in prior research with diverse hazard domains. The data also indicated that railroad hazards are really quite diverse, with some approaching nuclear reactors in their perceived seriousness. These results suggest that information about the diversity of perceptions within a single domain of hazards could provide valuable input to risk-management decisions.
Brandon M. Collins; Heather A. Kramer; Kurt Menning; Colin Dillingham; David Saah; Peter A. Stine; Scott L. Stephens
2013-01-01
We built on previous work by performing a more in-depth examination of a completed landscape fuel treatment network. Our specific objectives were: (1) model hazardous fire potential with and without the treatment network, (2) project hazardous fire potential over several decades to assess fuel treatment network longevity, and (3) assess fuel treatment effectiveness and...
Veas, Alejandro; Gilar, Raquel; Miñano, Pablo; Castejón, Juan-Luis
2016-01-01
There are very few studies in Spain that treat underachievement rigorously, and those that do are typically related to gifted students. The present study examined the proportion of underachieving students using the Rasch measurement model. A sample of 643 first-year high school students (mean age = 12.09; SD = 0.47) from 8 schools in the province of Alicante (Spain) completed the Battery of Differential and General Skills (BADyG), and these students' grade point averages (GPAs) were provided by their teachers. Dichotomous and Partial Credit Rasch models were fitted. After adjusting the measurement instruments, the individual underachievement index identified a total of 181 underachieving students, or 28.14% of the total sample across ability levels. This study confirms that the Rasch measurement model can accurately estimate the construct validity of both the intelligence test and the academic grades for the identification of underachieving students. Furthermore, the present study constitutes a pioneering framework for estimating the prevalence of underachievement in Spain. PMID:26973586
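The dichotomous Rasch model used in the study above has a simple closed form: the probability of a correct response depends only on the difference between person ability and item difficulty on a shared logit scale. A minimal sketch (illustrative values, not the BADyG calibration):

```python
import math

def rasch_probability(theta, b):
    """Probability of a correct response in the dichotomous Rasch model.

    theta: person ability, b: item difficulty (both on the logit scale).
    """
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability equals difficulty, the probability is exactly 0.5;
# a more able person has a higher chance on the same item.
p_equal = rasch_probability(0.0, 0.0)
p_able = rasch_probability(1.0, 0.0)
```

Comparing each student's expected performance under such a model with their observed GPA is the basic idea behind an individual underachievement index.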
Mozo, I; Lesage, G; Yin, J; Bessiere, Y; Barna, L; Sperandio, M
2012-10-15
The aerobic biological process is one of the best technologies available for removing hazardous organic substances from industrial wastewaters. But in the case of volatile organic compounds (benzene, toluene, ethylbenzene, p-xylene, naphthalene), volatilization can contribute significantly to their removal from the liquid phase. One major issue is to predict the competition between volatilization and biodegradation in biological processes, depending on the target molecule. The aim of this study was to develop an integrated dynamic model to evaluate the influence of operating conditions, kinetic parameters, and physical properties of the molecule on the main removal pathways (biodegradation and volatilization) for volatile organic compounds (VOCs). After a comparison with experimental data, sensitivity studies were carried out in order to optimize the aerated biological process. Acclimatized biomass growth is limited by volatilization, which reduces the bioavailability of the substrate. Moreover, the amount of biodegraded substrate is directly proportional to the amount of active biomass stabilized in the process. Model outputs predict that biodegradation is enhanced at high SRT for molecules with a low Henry's law constant (H) and with a fast-growing degrading population. Air flow rate should be optimized to meet the oxygen demand and to minimize VOC stripping. Finally, the feeding strategy was found to be the most influential operating parameter that should be adjusted in order to enhance VOC biodegradation and to limit volatilization in sequencing batch reactors (SBRs).
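The competition between the two removal pathways can be illustrated with a deliberately simplified picture: if both stripping and biodegradation are approximated as first-order in substrate concentration, each pathway's share of total removal is just its rate constant over the sum. The integrated dynamic model in the paper is far richer (Monod kinetics, aeration, SRT effects); the function below is only a sketch under that first-order assumption, with hypothetical rate constants:

```python
def removal_split(k_bio, k_vol):
    """Fraction of substrate removed by each competing first-order pathway.

    k_bio: first-order biodegradation rate constant (1/h, hypothetical)
    k_vol: first-order volatilization (stripping) rate constant (1/h, hypothetical)
    For parallel first-order processes, each share = k_i / (k_bio + k_vol).
    """
    total = k_bio + k_vol
    return k_bio / total, k_vol / total

# A molecule with a high Henry coefficient (large k_vol) is mostly stripped.
bio_share, vol_share = removal_split(k_bio=0.2, k_vol=0.8)
```

This already captures the qualitative finding: lowering the air flow rate (smaller k_vol) shifts removal toward biodegradation.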
Computer models used to support cleanup decision-making at hazardous and radioactive waste sites
Moskowitz, P.D.; Pardi, R.; DePhillips, M.P.; Meinhold, A.F.
1992-07-01
Massive efforts are underway to clean up hazardous and radioactive waste sites located throughout the US. To help determine cleanup priorities, computer models are being used to characterize the source, transport, fate, and effects of hazardous chemicals and radioactive materials found at these sites. Although the US Environmental Protection Agency (EPA), the US Department of Energy (DOE), and the US Nuclear Regulatory Commission (NRC) have provided preliminary guidance to promote the use of computer models for remediation purposes, no agency has produced direct guidance on the models that must be used in these efforts. To identify which models are actually being used to support decision-making at hazardous and radioactive waste sites, a project jointly funded by EPA, DOE, and NRC was initiated. The purpose of this project was to: (1) identify models being used for hazardous and radioactive waste site assessment purposes; and (2) describe and classify these models. This report presents the results of this study.
The 2014 update to the National Seismic Hazard Model in California
Powers, Peter; Field, Edward H.
2015-01-01
The 2014 update to the U.S. Geological Survey National Seismic Hazard Model in California introduces a new earthquake rate model and new ground motion models (GMMs) that give rise to numerous changes to seismic hazard throughout the state. The updated earthquake rate model is the third version of the Uniform California Earthquake Rupture Forecast (UCERF3), wherein the rates of all ruptures are determined via a self-consistent inverse methodology. This approach accommodates multifault ruptures and reduces the overprediction of moderate earthquake rates exhibited by the previous model (UCERF2). UCERF3 introduces new faults, changes to slip or moment rates on existing faults, and adaptively smoothed gridded seismicity source models, all of which contribute to significant changes in hazard. New GMMs increase ground motion near large strike-slip faults and reduce hazard over dip-slip faults. The addition of very large strike-slip ruptures and decreased reverse fault rupture rates in UCERF3 further enhances these effects.
Occupational hazard evaluation model underground coal mine based on unascertained measurement theory
NASA Astrophysics Data System (ADS)
Deng, Quanlong; Jiang, Zhongan; Sun, Yaru; Peng, Ya
2017-05-01
In order to comprehensively evaluate the influence of multiple occupational hazards on miners' physical and mental health, an occupational hazard evaluation indicator system was established, based on unascertained measurement theory, to support quantitative and qualitative analysis. Indicator weights were determined by information entropy, and occupational hazard levels were estimated using credible-degree recognition criteria. The evaluation model was programmed in Visual Basic and applied to the comprehensive evaluation of six posts in an underground coal mine; the occupational hazard degree was graded, and the evaluation results are consistent with the actual situation. The results show that dust and noise are the most significant occupational hazard factors in coal mines. Excavation face support workers are the most affected, followed by heading machine drivers, coal cutter drivers, and coalface move support workers; the occupational hazard degree of these four types of workers is level II (mild). The occupational hazard degree of ventilation workers and safety inspection workers is level I. The model can evaluate underground coal mine posts objectively and accurately, and can be employed in actual engineering.
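The entropy-weighting step mentioned above follows a standard formula: indicators whose normalized scores vary more across the evaluated posts carry more information and therefore receive larger weights. A minimal sketch of generic entropy weighting (not the authors' exact implementation):

```python
import math

def entropy_weights(matrix):
    """Objective indicator weights from information entropy.

    matrix[i][j]: non-negative score of alternative i on indicator j.
    Indicators with lower entropy (more variation across alternatives)
    discriminate better and receive larger weights.
    """
    n = len(matrix)        # number of alternatives (e.g. posts)
    m = len(matrix[0])     # number of indicators (e.g. dust, noise)
    raw = []
    for j in range(m):
        col = [matrix[i][j] for i in range(n)]
        s = sum(col)
        p = [x / s for x in col]
        # normalized Shannon entropy of the column, in [0, 1]
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(n)
        raw.append(1.0 - e)
    total = sum(raw)
    return [w / total for w in raw]

# Indicator 0 is identical for both alternatives (no information),
# indicator 1 discriminates, so it gets nearly all the weight.
w = entropy_weights([[0.2, 0.1], [0.2, 0.3]])
```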
NASA Astrophysics Data System (ADS)
Tierz, Pablo; Odbert, Henry; Phillips, Jeremy; Woodhouse, Mark; Sandri, Laura; Selva, Jacopo; Marzocchi, Warner
2016-04-01
Quantification of volcanic hazards is a challenging task for modern volcanology. Assessing the large uncertainties involved in the hazard analysis requires the combination of volcanological data, physical and statistical models. This is a complex procedure even when taking into account only one type of volcanic hazard. However, volcanic systems are known to be multi-hazard environments where several hazardous phenomena (tephra fallout, Pyroclastic Density Currents -PDCs-, lahars, etc.) may occur either simultaneously or sequentially. Bayesian Belief Networks (BBNs) are a flexible and powerful way of modelling uncertainty. They are statistical models that can merge information coming from data, physical models, other statistical models or expert knowledge into a unified probabilistic assessment. Therefore, they can be applied to model the interaction between different volcanic hazards in an efficient manner. In this work, we design and preliminarily parametrize a BBN with the aim of forecasting the occurrence and volume of rain-triggered lahars when considering: (1) input of pyroclastic material, in the form of tephra fallout and PDCs, over the catchments around the volcano; (2) remobilization of this material by antecedent lahar events. Input of fresh pyroclastic material can be modelled through a combination of physical models (e.g. advection-diffusion models for tephra fallout such as HAZMAP and shallow-layer continuum models for PDCs such as Titan2D) and uncertainty quantification techniques, while the remobilization efficiency can be constrained from datasets of lahar observations at different volcanoes. The applications of this kind of probabilistic multi-hazard approach can range from real-time forecasting of lahar activity to calibration of physical or statistical models (e.g. emulators) for long-term volcanic hazard assessment.
Debris flow hazard modelling on medium scale: Valtellina di Tirano, Italy
NASA Astrophysics Data System (ADS)
Blahut, J.; Horton, P.; Sterlacchini, S.; Jaboyedoff, M.
2010-11-01
Debris flow hazard modelling at medium (regional) scale has been the subject of various studies in recent years. In this study, hazard zonation was carried out, incorporating information about debris flow initiation probability (spatial and temporal), and the delimitation of the potential runout areas. Debris flow hazard zonation was carried out in the area of the Consortium of Mountain Municipalities of Valtellina di Tirano (Central Alps, Italy). The complexity of the phenomenon, the scale of the study, the variability of local conditioning factors, and the lack of data limited the use of process-based models for the runout zone delimitation. Firstly, a map of hazard initiation probabilities was prepared for the study area, based on the available susceptibility zoning information, and the analysis of two sets of aerial photographs for the temporal probability estimation. Afterwards, the hazard initiation map was used as one of the inputs for an empirical GIS-based model (Flow-R), developed at the University of Lausanne (Switzerland). An estimation of the debris flow magnitude was neglected as the main aim of the analysis was to prepare a debris flow hazard map at medium scale. A digital elevation model, with a 10 m resolution, was used together with land use, geology and debris flow hazard initiation maps as inputs of the Flow-R model to restrict potential areas within each hazard initiation probability class to locations where debris flows are most likely to initiate. Afterwards, runout areas were calculated using multiple flow direction and energy-based algorithms. Maximum probable runout zones were calibrated using documented past events and aerial photographs. Finally, two debris flow hazard maps were prepared. The first simply delimits five hazard zones, while the second incorporates the information about debris flow spreading direction probabilities, showing areas more likely to be affected by future debris flows. Limitations of the modelling arise mainly from
A time-dependent probabilistic seismic-hazard model for California
Cramer, C.H.; Petersen, M.D.; Cao, T.; Toppozada, Tousson R.; Reichle, M.
2000-01-01
For the purpose of sensitivity testing and illuminating nonconsensus components of time-dependent models, the California Department of Conservation, Division of Mines and Geology (CDMG) has assembled a time-dependent version of its statewide probabilistic seismic hazard (PSH) model for California. The model incorporates available consensus information from within the earth-science community, except for a few faults or fault segments where consensus information is not available. For these latter faults, published information has been incorporated into the model. As in the 1996 CDMG/U.S. Geological Survey (USGS) model, the time-dependent models incorporate three multisegment ruptures: a 1906, an 1857, and a southern San Andreas earthquake. Sensitivity tests are presented to show the effect on hazard and expected damage estimates of (1) intrinsic (aleatory) sigma, (2) multisegment (cascade) vs. independent segment (no cascade) ruptures, and (3) time-dependence vs. time-independence. Results indicate that (1) differences in hazard and expected damage estimates between time-dependent and independent models increase with decreasing intrinsic sigma, (2) differences in hazard and expected damage estimates between full cascading and not cascading are insensitive to intrinsic sigma, (3) differences in hazard increase with increasing return period (decreasing probability of occurrence), and (4) differences in moment-rate budgets increase with decreasing intrinsic sigma and with the degree of cascading, but are within the expected uncertainty in PSH time-dependent modeling and do not always significantly affect hazard and expected damage estimates.
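The role of intrinsic (aleatory) sigma in the sensitivity tests above can be illustrated with a generic lognormal renewal model, a common choice in time-dependent PSH (not necessarily the CDMG parameterization): the probability of rupture in a forecast window, conditioned on the time elapsed since the last event, grows sharply late in the cycle when sigma is small, which is why time-dependent and time-independent estimates diverge as sigma decreases. All numbers below are hypothetical:

```python
import math

def lognormal_cdf(t, median, sigma):
    """CDF of a lognormal recurrence-time distribution (sigma in ln units)."""
    return 0.5 * (1.0 + math.erf(math.log(t / median) / (sigma * math.sqrt(2.0))))

def conditional_probability(elapsed, window, median, sigma):
    """P(rupture within `window` years | quiet for `elapsed` years)."""
    f_now = lognormal_cdf(elapsed, median, sigma)
    f_later = lognormal_cdf(elapsed + window, median, sigma)
    return (f_later - f_now) / (1.0 - f_now)

# Late in a ~150-year cycle, a small sigma concentrates the hazard
# near the median recurrence time, raising the 30-year probability.
p_small = conditional_probability(140.0, 30.0, 150.0, 0.2)
p_large = conditional_probability(140.0, 30.0, 150.0, 0.8)
```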
NASA Astrophysics Data System (ADS)
Wang, Junsong; Niebur, Ernst; Hu, Jinyu; Li, Xiaoli
2016-06-01
Closed-loop control is a promising deep brain stimulation (DBS) strategy that could be used to suppress high-amplitude epileptic activity. However, there are currently no analytical approaches to determine the stimulation parameters for effective and safe treatment protocols. Proportional-integral (PI) control is the most extensively used closed-loop control scheme in the field of control engineering because of its simple implementation and perfect performance. In this study, we took Jansen’s neural mass model (NMM) as a test bed to develop a PI-type closed-loop controller for suppressing epileptic activity. A graphical stability analysis method was employed to determine the stabilizing region of the PI controller in the control parameter space, which provided a theoretical guideline for the choice of the PI control parameters. Furthermore, we established the relationship between the parameters of the PI controller and the parameters of the NMM in the form of a stabilizing region, which provided insights into the mechanisms that may suppress epileptic activity in the NMM. The simulation results demonstrated the validity and effectiveness of the proposed closed-loop PI control scheme.
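The structure of the PI scheme described above can be sketched on a toy plant. The example below is emphatically not Jansen's neural mass model: it stabilizes a simple unstable first-order system x' = a*x + u with illustrative gains chosen inside the stabilizing region (for this plant, kp > a and ki > 0 place the closed-loop poles of s^2 + (kp - a)s + ki in the left half-plane), just to show a discrete PI loop driving activity to zero:

```python
def simulate_pi(a=1.0, kp=5.0, ki=2.0, x0=1.0, dt=0.01, steps=2000):
    """Minimal discrete PI closed loop around an unstable plant x' = a*x + u.

    All parameter values are illustrative. Setpoint is 0 (suppress activity);
    returns the state after `steps` Euler steps.
    """
    x, integral = x0, 0.0
    for _ in range(steps):
        error = x                      # deviation from setpoint 0
        integral += error * dt         # integral term accumulates the error
        u = -kp * error - ki * integral
        x += dt * (a * x + u)          # forward-Euler plant update
    return x

final = simulate_pi()                  # decays toward 0 from x0 = 1
```

With the gains zeroed the same plant diverges, which is the toy analogue of unsuppressed epileptiform activity.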
Castronovo, A Margherita; Negro, Francesco; Farina, Dario
2015-01-01
Motor neurons in the spinal cord receive synaptic input that comprises common and independent components. The part of synaptic input that is common to all motor neurons is the one regulating the production of force. Therefore, its quantification is important to assess the strategy used by the Central Nervous System (CNS) to control and regulate movements, especially in physiological conditions such as fatigue. In this study we present and validate a method to estimate the ratio between the strengths of common and independent inputs to motor neurons, and we apply this method to investigate its changes during fatigue. By means of coherence analysis we estimated the level of correlation between motor unit spike trains at the beginning and at the end of fatiguing contractions of the Tibialis Anterior muscle at three different force targets. Combining theoretical modeling and experimental data, we estimated the strength of the common synaptic input with respect to the independent one. We observed a consistent increase in the proportion of the shared input to motor neurons during fatigue. This may be interpreted as a strategy used by the CNS to counteract the occurrence of fatigue and the concurrent decrease of generated force.
NASA Astrophysics Data System (ADS)
Wang, Jun-Song; Wang, Mei-Li; Li, Xiao-Li; Ernst, Niebur
2015-03-01
Epilepsy is believed to be caused by a lack of balance between excitation and inhibition in the brain. A promising strategy for the control of the disease is closed-loop brain stimulation. How to determine the stimulation control parameters for effective and safe treatment protocols remains, however, an unsolved question. To constrain the complex dynamics of the biological brain, we use a neural population model (NPM). We propose that a proportional-derivative (PD) type closed-loop control can successfully suppress epileptiform activities. First, we determine stability using root-locus analysis, which reveals that the dynamical mechanism underlying epilepsy in the NPM is the loss of homeostatic control caused by the lack of balance between excitation and inhibition. Then, we design a PD type closed-loop controller to stabilize the unstable NPM such that the homeostatic equilibriums are maintained; we show that epileptiform activities are successfully suppressed. A graphical approach is employed to determine the stabilizing region of the PD controller in the parameter space, providing a theoretical guideline for the selection of the PD control parameters. Furthermore, we establish the relationship between the control parameters and the model parameters in the form of stabilizing regions to help understand the mechanism of suppressing epileptiform activities in the NPM. Simulations show that the PD-type closed-loop control strategy can effectively suppress epileptiform activities in the NPM. Project supported by the National Natural Science Foundation of China (Grant Nos. 61473208, 61025019, and 91132722), ONR MURI N000141010278, and NIH grant R01EY016281.
Yan, Fang; Xu, Kaili
2017-01-01
Because a biomass gasification station includes various hazard factors, hazard assessment is needed and significant. In this article, the cloud model (CM) is employed to improve set pair analysis (SPA), and a novel hazard assessment method for a biomass gasification station is proposed based on the cloud model-set pair analysis (CM-SPA). In this method, cloud weight is proposed to be the weight of index. In contrast to the index weight of other methods, cloud weight is shown by cloud descriptors; hence, the randomness and fuzziness of cloud weight will make it effective to reflect the linguistic variables of experts. Then, the cloud connection degree (CCD) is proposed to replace the connection degree (CD); the calculation algorithm of CCD is also worked out. By utilizing the CCD, the hazard assessment results are shown by some normal clouds, and the normal clouds are reflected by cloud descriptors; meanwhile, the hazard grade is confirmed by analyzing the cloud descriptors. After that, two biomass gasification stations undergo hazard assessment via CM-SPA and AHP based SPA, respectively. The comparison of assessment results illustrates that the CM-SPA is suitable and effective for the hazard assessment of a biomass gasification station and that CM-SPA will make the assessment results more reasonable and scientific. PMID:28076440
NASA Astrophysics Data System (ADS)
Paprotny, Dominik; Morales-Nápoles, Oswaldo; Jonkman, Sebastiaan N.
2017-07-01
Flood hazard is currently being researched on continental and global scales, using models of increasing complexity. In this paper we investigate a different, simplified approach, which combines statistical and physical models in place of conventional rainfall-run-off models to carry out flood mapping for Europe. A Bayesian-network-based model built in a previous study is employed to generate return-period flow rates in European rivers with a catchment area larger than 100 km2. The simulations are performed using a one-dimensional steady-state hydraulic model and the results are post-processed using Geographical Information System (GIS) software in order to derive flood zones. This approach is validated by comparison with Joint Research Centre's (JRC) pan-European map and five local flood studies from different countries. Overall, the two approaches show a similar performance in recreating flood zones of local maps. The simplified approach achieved a similar level of accuracy, while substantially reducing the computational time. The paper also presents the aggregated results on the flood hazard in Europe, including future projections. We find relatively small changes in flood hazard, i.e. an increase of flood zones area by 2-4 % by the end of the century compared to the historical scenario. However, when current flood protection standards are taken into account, the flood-prone area increases substantially in the future (28-38 % for a 100-year return period). This is because in many parts of Europe river discharge with the same return period is projected to increase in the future, thus making the protection standards insufficient.
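Return-period flow rates like those generated above are often summarized with an extreme-value fit. As a deliberately simple stand-in for the paper's Bayesian-network flow model, a Gumbel distribution fitted by the method of moments gives a closed-form T-year discharge from just the mean and standard deviation of annual maximum flows (all numbers below are hypothetical):

```python
import math

def gumbel_return_level(mean, std, return_period):
    """T-year discharge from a Gumbel fit by the method of moments.

    mean, std: sample mean and standard deviation of annual maxima
    (hypothetical values in the usage below, in m^3/s).
    The T-year level is mu - beta * ln(-ln(1 - 1/T)).
    """
    beta = std * math.sqrt(6.0) / math.pi      # scale parameter
    mu = mean - 0.5772 * beta                  # location (Euler-Mascheroni constant)
    return mu - beta * math.log(-math.log(1.0 - 1.0 / return_period))

# The 100-year flow exceeds the 10-year flow, which exceeds the mean.
q100 = gumbel_return_level(mean=500.0, std=150.0, return_period=100)
```

The paper's point about protection standards follows directly: if climate projections raise the fitted mean or spread, the discharge associated with a fixed protection level corresponds to a shorter return period than the one it was designed for.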
A Remote Sensing Based Approach For Modeling and Assessing Glacier Hazards
NASA Astrophysics Data System (ADS)
Huggel, C.; Kääb, A.; Salzmann, N.; Haeberli, W.; Paul, F.
Glacier-related hazards such as ice avalanches and glacier lake outbursts can pose a significant threat to population and installations in high mountain regions. They are well documented in the Swiss Alps, and the high data density is used to build up systematic knowledge of glacier hazard locations and potentials. Experiences from long research activities thereby form an important basis for ongoing hazard monitoring and assessment. However, in the context of environmental changes in general, and the highly dynamic physical environment of glaciers in particular, historical experience may increasingly lose its significance with respect to the impact zones of hazardous processes. On the other hand, in large and remote high mountains such as the Himalayas, exact information on the location and potential of glacier hazards is often missing. Therefore, it is crucial to develop hazard monitoring and assessment concepts including area-wide applications. Remote sensing techniques offer a powerful tool to narrow current information gaps. The present contribution proposes an approach structured in (1) detection, (2) evaluation and (3) modeling of glacier hazards. Remote sensing data is used as the main input to (1). Algorithms taking advantage of multispectral, high-resolution data are applied for detecting glaciers and glacier lakes. Digital terrain modeling, and classification and fusion of panchromatic and multispectral satellite imagery, is performed in (2) to evaluate the hazard potential of possible hazard sources detected in (1). The locations found in (1) and (2) are used as input to (3). The models developed in (3) simulate the processes of lake outbursts and ice avalanches based on hydrological flow modeling and empirical values for average trajectory slopes. A probability-related function allows the model to indicate areas with lower and higher risk of being affected by catastrophic events. Application of the models for recent ice avalanches and lake outbursts show
Stirling, M.; Petersen, M.
2006-01-01
We compare the historical record of earthquake hazard experienced at 78 towns and cities (sites) distributed across New Zealand and the continental United States with the hazard estimated from the national probabilistic seismic-hazard (PSH) models for the two countries. The two PSH models are constructed with similar methodologies and data. Our comparisons show a tendency for the PSH models to slightly exceed the historical hazard in New Zealand and westernmost continental United States interplate regions, but show lower hazard than that of the historical record in the continental United States intraplate region. Factors such as non-Poissonian behavior, parameterization of active fault data in the PSH calculations, and uncertainties in estimation of ground-motion levels from historical felt intensity data for the interplate regions may have led to the higher-than-historical levels of hazard at the interplate sites. In contrast, the less-than-historical hazard for the remaining continental United States (intraplate) sites may be largely due to site conditions not having been considered at the intraplate sites, and uncertainties in correlating ground-motion levels to historical felt intensities. The study also highlights the importance of evaluating PSH models at more than one region, because the conclusions reached on the basis of a solely interplate or intraplate study would be very different.
Snakes as hazards: modelling risk by chasing chimpanzees.
McGrew, William C
2015-04-01
Snakes are presumed to be hazards to primates, including humans, by the snake detection hypothesis (Isbell in J Hum Evol 51:1-35, 2006; Isbell, The fruit, the tree, and the serpent. Why we see so well, 2009). Quantitative, systematic data to test this idea are lacking for the behavioural ecology of living great apes and human foragers. An alternative proxy is snakes encountered by primatologists seeking, tracking, and observing wild chimpanzees. We present 4 years of such data from Mt. Assirik, Senegal. We encountered 14 species of snakes a total of 142 times. Almost two-thirds of encounters were with venomous snakes. Encounters occurred most often in forest and least often in grassland, and more often in the dry season. The hypothesis seems to be supported, if frequency of encounter reflects selective risk of morbidity or mortality.
NASA Astrophysics Data System (ADS)
Dube, F.; Nhapi, I.; Murwira, A.; Gumindoga, W.; Goldin, J.; Mashauri, D. A.
Gully erosion is an environmental concern particularly in areas where landcover has been modified by human activities. This study assessed the extent to which the potential for gully erosion could be successfully modelled as a function of seven environmental factors (landcover, soil type, distance from river, distance from road, Sediment Transport Index (STI), Stream Power Index (SPI) and Wetness Index (WI)) using a GIS-based Weight of Evidence Model (WEM) in the Mbire District of Zimbabwe. Results show that, of the seven factors studied, five were significantly correlated (p < 0.05) with gully occurrence, namely landcover, soil type, distance from river, STI and SPI. Two factors, WI and distance from road, were not significantly correlated with gully occurrence (p > 0.05). A gully erosion hazard map showed that 78% of the very high hazard class area is within a distance of 250 m from rivers. Model validation indicated that 70% of the validation set of gullies fell in the high hazard and very high hazard classes. The resulting map of areas susceptible to gully erosion has a prediction accuracy of 67.8%. The predictive capability of the weight of evidence model in this study suggests that landcover, soil type, distance from river, STI and SPI are useful in creating a gully erosion hazard map but may not be sufficient to produce a valid map of gully erosion hazard.
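The weight-of-evidence method used above assigns each factor class a positive weight W+ (how strongly presence of the class is associated with gully occurrence) and a negative weight W- (the association of its absence); summing the weights of overlapping classes yields the hazard score. A minimal sketch from contingency counts (the counts in the usage are hypothetical, not the Mbire data):

```python
import math

def weights_of_evidence(n_class_gully, n_class, n_gully, n_total):
    """Positive and negative weights of evidence for one factor class.

    n_class_gully: gully cells inside the class; n_class: cells in the class;
    n_gully: all gully cells; n_total: all cells in the study area.
    W+ > 0 means the class is positively associated with gully occurrence.
    """
    # P(class | gully) vs P(class | no gully)
    p_in_g = n_class_gully / n_gully
    p_in_ng = (n_class - n_class_gully) / (n_total - n_gully)
    # P(not class | gully) vs P(not class | no gully)
    p_out_g = (n_gully - n_class_gully) / n_gully
    p_out_ng = (n_total - n_class - (n_gully - n_class_gully)) / (n_total - n_gully)
    return math.log(p_in_g / p_in_ng), math.log(p_out_g / p_out_ng)

# Hypothetical counts: 80 of 100 gully cells fall inside a class that
# covers only 200 of 1000 cells, so the class is a strong positive predictor.
wp, wm = weights_of_evidence(80, 200, 100, 1000)
```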
[Hazard evaluation modeling of particulate matters emitted by coal-fired boilers and case analysis].
Shi, Yan-Ting; Du, Qian; Gao, Jian-Min; Bian, Xin; Wang, Zhi-Pu; Dong, He-Ming; Han, Qiang; Cao, Yang
2014-02-01
In order to evaluate the hazard of PM2.5 emitted by various boilers, in this paper, segmentation of particulate matter with sizes below 2.5 μm was performed based on formation mechanisms and hazard level to human beings and the environment. Meanwhile, taking into account the mass concentration, number concentration, enrichment factor of Hg, and content of Hg in different coal ashes, a comprehensive model for evaluating the hazard of PM2.5 emitted by coal-fired boilers was established. Finally, using field experimental data from the previous literature, a case analysis of the evaluation model was conducted, and the concept of a hazard reduction coefficient was proposed, which can be used to evaluate the performance of dust removers.
Probabilistic seismic hazard study based on active fault and finite element geodynamic models
NASA Astrophysics Data System (ADS)
Kastelic, Vanja; Carafa, Michele M. C.; Visini, Francesco
2016-04-01
We present a probabilistic seismic hazard analysis (PSHA) that is based exclusively on active faults and geodynamic finite element input models; seismic catalogues were used only in a posterior comparison. We applied the developed model in the External Dinarides, a slowly deforming thrust-and-fold belt at the contact between Adria and Eurasia. Our method consists of establishing two earthquake rupture forecast models: (i) a geological active fault input (GEO) model and (ii) a finite element (FEM) model. The GEO model is based on an active fault database that provides information on fault location and its geometric and kinematic parameters, together with slip rate estimates. By default, in this model all deformation is set to be released along the active faults. The FEM model is based on a numerical geodynamic model developed for the region of study. In this model the deformation is released not only along the active faults but also in the volumetric continuum elements. From both models we calculated the corresponding activity rates, earthquake rates and expected peak ground accelerations. We investigated both the source model and the earthquake model uncertainties by varying the main active fault and earthquake rate calculation parameters, constructing corresponding branches of the seismic hazard logic tree. Hazard maps and UHS curves have been produced for horizontal ground motion on bedrock conditions (VS30 ≥ 800 m/s), thereby not considering local site amplification effects. The hazard was computed over a 0.2° spaced grid considering 648 branches of the logic tree and the mean value of the 10% probability of exceedance in 50 years hazard level, while the 5th and 95th percentiles were also computed to investigate the model limits. We conducted a sensitivity analysis to determine which of the input parameters influence the final hazard results, and to what extent. The results of such comparison evidence the deformation model and
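Collapsing many logic-tree branches into a weighted mean plus 5th/95th percentile bounds, as described above, can be sketched as follows; the branch PGA values and weights here are placeholders, not the study's 648-branch tree:

```python
import numpy as np

def weighted_mean_and_percentiles(pga, weights, probs=(0.05, 0.5, 0.95)):
    """Collapse logic-tree branch results (one PGA per branch, for a
    fixed site and hazard level) into a weighted mean and weighted
    percentiles via the weighted empirical CDF."""
    pga = np.asarray(pga, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalise branch weights
    order = np.argsort(pga)
    cdf = np.cumsum(w[order])            # weighted empirical CDF
    pct = [float(pga[order][np.searchsorted(cdf, p)]) for p in probs]
    return float(np.sum(w * pga)), pct
```

Repeating this at every grid node gives the mean hazard map and the percentile maps used to investigate the model limits.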
A New Semiparametric Estimation Method for Accelerated Hazards Mixture Cure Model
Zhang, Jiajia; Peng, Yingwei; Li, Haifen
2012-01-01
The semiparametric accelerated hazards mixture cure model provides a useful alternative to analyze survival data with a cure fraction if covariates of interest have a gradual effect on the hazard of uncured patients. However, the application of the model may be hindered by the computational intractability of its estimation method due to non-smooth estimating equations involved. We propose a new semiparametric estimation method based on a smooth estimating equation for the model and demonstrate that the new method makes the parameter estimation more tractable without loss of efficiency. The proposed method is used to fit the model to a SEER breast cancer data set. PMID:23293406
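The mixture cure decomposition behind the model above is S(t) = π + (1 − π)·S_u(t), with the accelerated hazards structure acting on the uncured fraction. As a hedged illustration only, the following sketch replaces the paper's semiparametric baseline with a parametric Weibull one so the survival function has closed form:

```python
import math

def mixture_cure_survival(t, pi_cure, shape, scale, beta_x=0.0):
    """Population survival under a mixture cure model:
    S(t) = pi + (1 - pi) * S_u(t).  The uncured hazard follows an
    accelerated hazards structure h_u(t) = h0(t * exp(beta_x)), so its
    cumulative hazard is H0(t * e^b) / e^b.  The Weibull baseline and
    all parameter values are illustrative, not the paper's estimator."""
    b = math.exp(beta_x)
    H_u = ((t * b / scale) ** shape) / b   # cumulative hazard, uncured
    return pi_cure + (1.0 - pi_cure) * math.exp(-H_u)
```

Note the defining feature of the accelerated hazards family visible here: at t = 0 the covariate has no effect on the hazard, and its effect grows gradually with t, which is exactly the "gradual effect" setting the abstract describes.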
NASA Astrophysics Data System (ADS)
Loughlin, Susan
2013-04-01
GVM is a growing international collaboration that aims to create a sustainable, accessible information platform on volcanic hazard and risk. GVM is a network that aims to co-ordinate and integrate the efforts of the international volcanology community. Major international initiatives and partners such as the Smithsonian Institution - Global Volcanism Program, State University of New York at Buffalo - VHub, Earth Observatory of Singapore - WOVOdat and many others underpin GVM. Activities currently include: design and development of databases of volcano data, volcanic hazards, vulnerability and exposure with internationally agreed metadata standards; establishment of methodologies for analysis of the data (e.g. hazard and exposure indices) to inform risk assessment; and development of complementary hazard models and creation of relevant hazard and risk assessment tools. GVM acts through establishing task forces to produce specific deliverables in finite periods of time. GVM has a task force to deliver a global assessment of volcanic risk for UN ISDR, a task force for indices, and a task force for volcano deformation from satellite observations. GVM is organising a Volcano Best Practices workshop in 2013. A recent product of GVM is a global database on large magnitude explosive eruptions. There is ongoing work to develop databases on debris avalanches, lava dome hazards and ash hazard. GVM aims to develop the capability to anticipate future volcanism and its consequences.
Teamwork tools and activities within the hazard component of the Global Earthquake Model
NASA Astrophysics Data System (ADS)
Pagani, M.; Weatherill, G.; Monelli, D.; Danciu, L.
2013-05-01
The Global Earthquake Model (GEM) is a public-private partnership aimed at supporting and fostering a global community of scientists and engineers working in the fields of seismic hazard and risk assessment. In the hazard sector, in particular, GEM recognizes the importance of local ownership and leadership in the creation of seismic hazard models. For this reason, over the last few years, GEM has been promoting different activities in the context of seismic hazard analysis ranging, for example, from regional projects targeted at the creation of updated seismic hazard studies to the development of a new open-source seismic hazard and risk calculation software called OpenQuake-engine (http://globalquakemodel.org). In this communication we'll provide a tour of the various activities completed, such as the new ISC-GEM Global Instrumental Catalogue, and of currently on-going initiatives like the creation of a suite of tools for the creation of PSHA input models. Discussion, comments and criticism by the colleagues in the audience will be highly appreciated.
Ground motion models used in the 2014 U.S. National Seismic Hazard Maps
Rezaeian, Sanaz; Petersen, Mark D.; Moschetti, Morgan P.
2015-01-01
The National Seismic Hazard Maps (NSHMs) are an important component of seismic design regulations in the United States. This paper compares hazard using the new suite of ground motion models (GMMs) relative to hazard using the suite of GMMs applied in the previous version of the maps. The new source characterization models are used for both cases. A previous paper (Rezaeian et al. 2014) discussed the five NGA-West2 GMMs used for shallow crustal earthquakes in the Western United States (WUS), which are also summarized here. Our focus in this paper is on GMMs for earthquakes in stable continental regions in the Central and Eastern United States (CEUS), as well as subduction interface and deep intraslab earthquakes. We consider building code hazard levels for peak ground acceleration (PGA), 0.2-s, and 1.0-s spectral accelerations (SAs) on uniform firm-rock site conditions. The GMM modifications in the updated version of the maps created changes in hazard within 5% to 20% in WUS; decreases within 5% to 20% in CEUS; changes within 5% to 15% for subduction interface earthquakes; and changes involving decreases of up to 50% and increases of up to 30% for deep intraslab earthquakes for most U.S. sites. These modifications were combined with changes resulting from modifications in the source characterization models to obtain the new hazard maps.
Proportional Reasoning as Essential Numeracy
ERIC Educational Resources Information Center
Dole, Shelley; Hilton, Annette; Hilton, Geoff
2015-01-01
This paper reports an aspect of a large research and development project that aimed to promote middle years school teachers' understanding and awareness of the pervasiveness of proportional reasoning as integral to numeracy. Teacher survey data of proportional reasoning across the curriculum were mapped on to a rich model of numeracy. Results…
ERIC Educational Resources Information Center
2003
This study analyzed the economic benefits of an increase in the proportion of Australian students achieving a 12th-grade equivalent education. Earlier research examined the direct costs and benefits of a program that increased 12th grade equivalent education for the five-year cohort 2003-2007. This study built on that by incorporating the indirect…
NASA Astrophysics Data System (ADS)
Tjoe, Hartono; de la Torre, Jimmy
2014-06-01
In this paper, we discuss the process of identifying and validating students' abilities to think proportionally. More specifically, we describe the methodology we used to identify these proportional reasoning attributes, beginning with the selection and review of relevant literature on proportional reasoning. We then continue with the deliberation and resolution of differing views by mathematics researchers, mathematics educators, and middle school mathematics teachers of what should be learned theoretically and what can be taught practically in everyday classroom settings. We also present the initial development of proportional reasoning items as part of the two-phase validation process of the previously identified attributes. In particular, we detail in the first phase of the validation process our collaboration with middle school mathematics teachers in the creation of prototype items and the verification of each item-attribute specification in consideration of the most common ways (among many different ways) in which middle school students would have solved these prototype items themselves. In the second phase of the validation process, we elaborate our think-aloud interview procedure in the search for evidence of whether students generally solved the prototype items in the way they were expected to.
LNG fires: a review of experimental results, models and hazard prediction challenges.
Raj, Phani K
2007-02-20
A number of experimental investigations of LNG fires (of sizes 35 m diameter and smaller) were undertaken, worldwide, during the 1970s and 1980s to study their physical and radiative characteristics. This paper reviews the published data from several of these tests, including the largest test to date, the 35 m Montoir tests. Also reviewed in this paper is the state of the art in modeling LNG pool and vapor fires, including thermal radiation hazard modeling. The review is limited to considering the integral and semi-empirical models (solid flame and point source); CFD models are not reviewed. Several aspects of modeling LNG fires are reviewed, including the physical characteristics, such as the (visible) fire size and shape, tilt and drag in windy conditions, smoke production, radiant thermal output, etc., and the consideration of experimental data in the models. Comparisons of model results with experimental data are indicated and current deficiencies in modeling are discussed. The requirements in the US and European regulations related to LNG fire hazard assessment are reviewed, in brief, in the light of model inaccuracies, criteria for hazards to people and structures, and the effects of mitigating circumstances. The paper identifies: (i) critical parameters for which there exist no data, (ii) uncertainties and unknowns in modeling and (iii) deficiencies and gaps in current regulatory recipes for predicting hazards.
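Of the two semi-empirical families the review considers, the point source model is the simpler: all radiated power is concentrated at a point and spread over a sphere. A minimal sketch with illustrative parameter values (not design numbers from the paper):

```python
import math

def point_source_flux(m_dot, dH_c, distance, chi_r=0.25, tau=1.0):
    """Point-source thermal radiation estimate for a pool fire:
    received flux = tau * chi_r * (m_dot * dH_c) / (4 * pi * d^2).
    m_dot: total fuel burn rate (kg/s); dH_c: heat of combustion (J/kg);
    chi_r: radiant fraction; tau: atmospheric transmissivity.
    Default values are illustrative placeholders."""
    total_power = m_dot * dH_c                    # combustion power, W
    return tau * chi_r * total_power / (4.0 * math.pi * distance ** 2)
```

Solid flame models replace the point by a radiating cylinder with a surface emissive power and a geometric view factor, which is why they behave better in the near field where the point source approximation breaks down.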
The influence of hazard models on GIS-based regional risk assessments and mitigation policies
Bernknopf, R.L.; Rabinovici, S.J.M.; Wood, N.J.; Dinitz, L.B.
2006-01-01
Geographic information systems (GIS) are important tools for understanding and communicating the spatial distribution of risks associated with natural hazards in regional economies. We present a GIS-based decision support system (DSS) for assessing community vulnerability to natural hazards and evaluating potential mitigation policy outcomes. The Land Use Portfolio Modeler (LUPM) integrates earth science and socioeconomic information to predict the economic impacts of loss-reduction strategies. However, the potential use of such systems in decision making may be limited when multiple but conflicting interpretations of the hazard are available. To explore this problem, we conduct a policy comparison using the LUPM to test the sensitivity of three available assessments of earthquake-induced lateral-spread ground failure susceptibility in a coastal California community. We find that the uncertainty regarding the interpretation of the science inputs can influence the development and implementation of natural hazard management policies. Copyright ?? 2006 Inderscience Enterprises Ltd.
Applying the Land Use Portfolio Model with Hazus to analyse risk from natural hazard events
Dinitz, Laura B.; Taketa, Richard A.
2013-01-01
This paper describes and demonstrates the integration of two geospatial decision-support systems for natural-hazard risk assessment and management. Hazus is a risk-assessment tool developed by the Federal Emergency Management Agency to identify risks and estimate the severity of risk from natural hazards. The Land Use Portfolio Model (LUPM) is a risk-management tool developed by the U.S. Geological Survey to evaluate plans or actions intended to reduce risk from natural hazards. We analysed three mitigation policies for one earthquake scenario in the San Francisco Bay area to demonstrate the added value of using Hazus and the LUPM together. The demonstration showed that Hazus loss estimates can be input to the LUPM to obtain estimates of losses avoided through mitigation, rates of return on mitigation investment, and measures of uncertainty. Together, they offer a more comprehensive approach to help with decisions for reducing risk from natural hazards.
Petersen, Mark D.; Frankel, Arthur D.; Harmsen, Stephen C.; Mueller, Charles S.; Boyd, Oliver S.; Luco, Nicolas; Wheeler, Russell L.; Rukstales, Kenneth S.; Haller, Kathleen M.
2012-01-01
In this paper, we describe the scientific basis for the source and ground-motion models applied in the 2008 National Seismic Hazard Maps, the development of new products that are used for building design and risk analyses, relationships between the hazard maps and design maps used in building codes, and potential future improvements to the hazard maps.
Three multimedia models used at hazardous and radioactive waste sites
Moskowitz, P.D.; Pardi, R.; Fthenakis, V.M.; Holtzman, S.; Sun, L.C.; Rambaugh, J.O.; Potter, S.
1996-02-01
Multimedia models are used commonly in the initial phases of the remediation process, where technical interest is focused on determining the relative importance of various exposure pathways. This report provides an approach for evaluating and critically reviewing the capabilities of multimedia models. This study focused on three specific models: MEPAS Version 3.0, MMSOILS Version 2.2, and PRESTO-EPA-CPG Version 2.0. These models evaluate the transport and fate of contaminants from source to receptor through more than a single pathway. The presence of radioactive and mixed wastes at a site poses special problems. Hence, in this report, restrictions associated with the selection and application of multimedia models for sites contaminated with radioactive and mixed wastes are highlighted. This report begins with a brief introduction to the concept of multimedia modeling, followed by an overview of the three models. The remaining chapters present more technical discussions of the issues associated with each compartment and their direct application to the specific models. In these analyses, the following components are discussed: source term; air transport; ground water transport; overland flow, runoff, and surface water transport; food chain modeling; exposure assessment; dosimetry/risk assessment; uncertainty; default parameters. The report concludes with a description of evolving updates to the models; these descriptions were provided by the model developers.
Development and Analysis of a Hurricane Hazard Model for Disaster Risk Assessment in Central America
NASA Astrophysics Data System (ADS)
Pita, G. L.; Gunasekera, R.; Ishizawa, O. A.
2014-12-01
Hurricane and tropical storm activity in Central America has, over the past decades, consistently caused thousands of casualties, significant population displacement, and substantial property and infrastructure losses. As a component for estimating future potential losses, we present a new regional probabilistic hurricane hazard model for Central America. Currently, there are very few openly available hurricane hazard models for Central America. The resulting hazard model will be used in conjunction with exposure and vulnerability components as part of a World Bank project to create country disaster risk profiles that will help improve risk estimation and provide decision makers with better tools to quantify disaster risk. This paper describes the hazard model methodology, which involves the development of a wind field model that simulates gust speeds at terrain height at a fine resolution. The HURDAT dataset has been used in this study to create synthetic events that assess average hurricane landfall angles and their variability at each location. The hazard model then estimates the average track angle at multiple geographical locations in order to provide a realistic range of possible hurricane paths that will be used for risk analyses in all the Central American countries. This probabilistic hurricane hazard model is also useful for relating synthetic wind estimates to loss and damage data in order to develop and calibrate existing empirical building vulnerability curves. To assess accuracy and applicability, modeled results are evaluated against historical events, their tracks and wind fields. Deeper analyses of results are also presented with special reference to Guatemala. The findings, interpretations, and conclusions expressed in this paper are entirely those of the authors. They do not necessarily represent the views of the International Bank for Reconstruction and Development/World Bank and its affiliated organizations, or those of the
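Parametric wind field models of this kind are commonly built around the Holland (1980) radial pressure-wind profile. The abstract does not specify which profile this model uses, so the following is an assumed sketch, with the Coriolis term neglected and all parameter values illustrative:

```python
import math

def holland_gradient_wind(r_km, r_max_km, delta_p_hpa, B=1.3, rho=1.15):
    """Gradient-level wind speed from the Holland (1980) profile,
    neglecting the Coriolis term:
        V(r) = sqrt((B * dP / rho) * x * exp(-x)),  x = (Rmax / r)^B
    r_km: radius from the eye; r_max_km: radius of maximum winds;
    delta_p_hpa: central pressure deficit; B: shape parameter;
    rho: air density (kg/m^3).  Values are illustrative."""
    x = (r_max_km / r_km) ** B
    delta_p = delta_p_hpa * 100.0             # hPa -> Pa
    return math.sqrt(B * delta_p / rho * x * math.exp(-x))
```

Sweeping this profile along each synthetic track, then reducing gradient wind to surface gusts with terrain-dependent factors, yields the fine-resolution gust field the abstract describes.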
NASA Astrophysics Data System (ADS)
Wilson, R. I.; Eble, M. C.
2013-12-01
The U.S. National Tsunami Hazard Mitigation Program (NTHMP) is comprised of representatives from coastal states and federal agencies who, under the guidance of NOAA, work together to develop protocols and products to help communities prepare for and mitigate tsunami hazards. Within the NTHMP are several subcommittees responsible for complementary aspects of tsunami assessment, mitigation, education, warning, and response. The Mapping and Modeling Subcommittee (MMS) is comprised of state and federal scientists who specialize in tsunami source characterization, numerical tsunami modeling, inundation map production, and warning forecasting. Until September 2012, much of the work of the MMS was authorized through the Tsunami Warning and Education Act, an Act that has since expired but the spirit of which is being adhered to in parallel with reauthorization efforts. Over the past several years, the MMS has developed guidance and best practices for states and territories to produce accurate and consistent tsunami inundation maps for community level evacuation planning, and has conducted benchmarking of numerical inundation models. Recent tsunami events have highlighted the need for other types of tsunami hazard analyses and products for improving evacuation planning, vertical evacuation, maritime planning, land-use planning, building construction, and warning forecasts. As the program responsible for producing accurate and consistent tsunami products nationally, the NTHMP-MMS is initiating a multi-year plan to accomplish the following: 1) Create and build on existing demonstration projects that explore new tsunami hazard analysis techniques and products, such as maps identifying areas of strong currents and potential damage within harbors as well as probabilistic tsunami hazard analysis for land-use planning. 2) Develop benchmarks for validating new numerical modeling techniques related to current velocities and landslide sources. 3) Generate guidance and protocols for
Strip Diagrams: Illuminating Proportions
ERIC Educational Resources Information Center
Cohen, Jessica S.
2013-01-01
Proportional reasoning is both complex and layered, making it challenging to define. Lamon (1999) identified characteristics of proportional thinkers, such as being able to understand covariance of quantities; distinguish between proportional and nonproportional relationships; use a variety of strategies flexibly, most of which are nonalgorithmic,…
Sensitivity, testing and validation of multiple seismic hazard models of Italy
NASA Astrophysics Data System (ADS)
Romeo, R. W.
2009-04-01
The results of a probabilistic seismic hazard analysis of Italy, achieved according to scientifically accepted methodologies, updated information and well-documented data processing, are shown. The hazard assessment is carried out according to Cornell's method (1968), based on an earthquake catalogue with the foreshock and aftershock events filtered out, and on three different types of seismic sources: macro-areas, seismogenic zones and single points (seismic epicentres). Peak Ground Acceleration (PGA) and Spectral Acceleration (SA) values at two fixed frequencies (1 and 5 Hz) are computed using two sets of attenuation equations: the attenuation relationships proposed by Sabetta and Pugliese (1996) on the basis of strong motion recordings of Italian earthquakes, and the attenuation relationships proposed by Ambraseys et al. (1996) based on strong motion recordings of European earthquakes. A Poisson model of earthquake occurrence is assumed as a default, and three return periods are investigated: 100, 500 and 1000 years. For validation purposes, the seismic hazard estimates are then compared with those obtained by Albarello and D'Amico (2001) using a different seismic database and procedure. Their estimates were based on the seismic intensities felt in 2,579 municipalities, thus giving the opportunity to compare, for the same sites, the hazard obtained through two alternative methods: a direct one (felt intensities) versus a derived one (strong motion estimates). Furthermore, a panel of seismic hazard experts has been solicited to provide their relative confidence in the alternative models used in this work as regards seismic source models, seismicity rate models and attenuation models. A sensitivity analysis has also been performed to determine the models that most influence the results. A hazard model which is a synthesis of the previous ones is finally proposed: it represents the best fit of the results of the sensitivity analyses and validation tests.
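Under the Poisson occurrence model assumed above, the link between a return period and a probability of exceedance over an exposure window is a one-liner. A minimal sketch (the 475-year figure in the usage comment is the conventional example, not a result from this study):

```python
import math

def poisson_exceedance_prob(annual_rate, exposure_years):
    """Probability of at least one exceedance in `exposure_years`,
    assuming Poisson occurrence with the given annual rate:
        P = 1 - exp(-rate * T)
    For a return period Tr, the annual rate is 1 / Tr."""
    return 1.0 - math.exp(-annual_rate * exposure_years)

# e.g. a 475-year return period motion over a 50-year exposure
# gives the familiar ~10%-in-50-years hazard level
```

Inverting the same relation is how a target like "10% in 50 years" is converted back into the return period at which the hazard curve is read.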
Modelling tropical cyclone hazards under climate change scenario using geospatial techniques
NASA Astrophysics Data System (ADS)
Hoque, M. A.; Phinn, S.; Roelfsema, C.; Childs, I.
2016-11-01
Tropical cyclones are a common and devastating natural disaster in many coastal areas of the world. As the intensity and frequency of cyclones will increase under the most likely future climate change scenarios, appropriate approaches at local scales (1-5 km) are essential for producing sufficiently detailed hazard models. These models are used to develop mitigation plans and strategies for reducing the impacts of cyclones. This study developed and tested a hazard modelling approach for cyclone impacts in Sarankhola upazila, a 151 km2 local government area in coastal Bangladesh. The study integrated remote sensing, spatial analysis and field data to model cyclone-generated hazards under a climate change scenario at local scales covering < 1000 km2. A storm surge model integrating historical cyclone data and a Digital Elevation Model (DEM) was used to generate cyclone hazard maps for different cyclone return periods. Frequency analysis was carried out using historical cyclone data (1960-2015) to calculate the storm surge heights for the 5, 10, 20, 50 and 100 year cyclone return periods. A local sea level rise scenario of 0.34 m for the year 2050 was simulated for the 20 and 50 year return periods. Our results showed that cyclone-affected areas increased with increasing return period. Around 63% of the study area was located in the moderate to very high hazard zones for the 50 year return period, while the figure was 70% for the 100 year return period. The climate change scenario increased the cyclone impact area by 6-10% in every return period. Our findings indicate this approach has the potential to model cyclone hazards for developing mitigation plans and strategies to reduce the future impacts of cyclones.
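The return-period frequency analysis described above is typically done by fitting an extreme-value distribution to annual-maximum surge heights. The abstract does not name the distribution, so the following sketch assumes a Gumbel fit by the method of moments, with made-up input data:

```python
import math

def gumbel_return_levels(heights, return_periods):
    """Fit a Gumbel distribution to annual-maximum surge heights (m) by
    the method of moments and return the height for each return period:
        x(T) = mu - beta * ln(-ln(1 - 1/T))
    Real applications would prefer maximum likelihood or L-moments."""
    n = len(heights)
    mean = sum(heights) / n
    std = math.sqrt(sum((h - mean) ** 2 for h in heights) / (n - 1))
    beta = std * math.sqrt(6.0) / math.pi      # scale parameter
    mu = mean - 0.5772 * beta                  # location (Euler-Mascheroni)
    return {T: mu - beta * math.log(-math.log(1.0 - 1.0 / T))
            for T in return_periods}
```

Adding a sea level rise offset (here, 0.34 m for 2050) to the fitted return levels before flooding the DEM is the usual way such a climate scenario enters the hazard maps.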
Modeling exposure to persistent chemicals in hazard and risk assessment.
Cowan-Ellsberry, Christina E; McLachlan, Michael S; Arnot, Jon A; Macleod, Matthew; McKone, Thomas E; Wania, Frank
2009-10-01
Fate and exposure modeling has not, thus far, been explicitly used in the risk profile documents prepared for evaluating the significant adverse effect of candidate chemicals for either the Stockholm Convention or the Convention on Long-Range Transboundary Air Pollution. However, we believe models have considerable potential to improve the risk profiles. Fate and exposure models are already used routinely in other similar regulatory applications to inform decisions, and they have been instrumental in building our current understanding of the fate of persistent organic pollutants (POP) and persistent, bioaccumulative, and toxic (PBT) chemicals in the environment. The goal of this publication is to motivate the use of fate and exposure models in preparing risk profiles in the POP assessment procedure by providing strategies for incorporating and using models. The ways that fate and exposure models can be used to improve and inform the development of risk profiles include 1) benchmarking the ratio of exposure and emissions of candidate chemicals to the same ratio for known POPs, thereby opening the possibility of combining this ratio with the relative emissions and relative toxicity to arrive at a measure of relative risk; 2) directly estimating the exposure of the environment, biota, and humans to provide information to complement measurements or where measurements are not available or are limited; 3) to identify the key processes and chemical or environmental parameters that determine the exposure, thereby allowing the effective prioritization of research or measurements to improve the risk profile; and 4) forecasting future time trends, including how quickly exposure levels in remote areas would respond to reductions in emissions. Currently there is no standardized consensus model for use in the risk profile context. Therefore, to choose the appropriate model the risk profile developer must evaluate how appropriate an existing model is for a specific setting and
Modeling Exposure to Persistent Chemicals in Hazard and Risk Assessment
Cowan-Ellsberry, Christina E.; McLachlan, Michael S.; Arnot, Jon A.; MacLeod, Matthew; McKone, Thomas E.; Wania, Frank
2008-11-01
Fate and exposure modeling has not thus far been explicitly used in the risk profile documents prepared to evaluate significant adverse effect of candidate chemicals for either the Stockholm Convention or the Convention on Long-Range Transboundary Air Pollution. However, we believe models have considerable potential to improve the risk profiles. Fate and exposure models are already used routinely in other similar regulatory applications to inform decisions, and they have been instrumental in building our current understanding of the fate of POP and PBT chemicals in the environment. The goal of this paper is to motivate the use of fate and exposure models in preparing risk profiles in the POP assessment procedure by providing strategies for incorporating and using models. The ways that fate and exposure models can be used to improve and inform the development of risk profiles include: (1) Benchmarking the ratio of exposure and emissions of candidate chemicals to the same ratio for known POPs, thereby opening the possibility of combining this ratio with the relative emissions and relative toxicity to arrive at a measure of relative risk. (2) Directly estimating the exposure of the environment, biota and humans to provide information to complement measurements, or where measurements are not available or are limited. (3) To identify the key processes and chemical and/or environmental parameters that determine the exposure; thereby allowing the effective prioritization of research or measurements to improve the risk profile. (4) Predicting future time trends including how quickly exposure levels in remote areas would respond to reductions in emissions. Currently there is no standardized consensus model for use in the risk profile context. Therefore, to choose the appropriate model the risk profile developer must evaluate how appropriate an existing model is for a specific setting and whether the assumptions and input data are relevant in the context of the application
Modelling multi-hazard hurricane damages on an urbanized coast with a Bayesian Network approach
van Verseveld, H.C.W.; Van Dongeren, A. R.; Plant, Nathaniel G.; Jäger, W.S.; den Heijer, C.
2015-01-01
Hurricane flood impacts to residential buildings in coastal zones are caused by a number of hazards, such as inundation, overflow currents, erosion, and wave attack. However, traditional hurricane damage models typically make use of stage-damage functions, where the stage is related to flooding depth only. Moreover, these models are deterministic and do not consider the large amount of uncertainty associated with both the processes themselves and with the predictions. This uncertainty becomes increasingly important when multiple hazards (flooding, wave attack, erosion, etc.) are considered simultaneously. This paper focusses on establishing relationships between observed damage and multiple hazard indicators in order to make better probabilistic predictions. The concept consists of (1) determining Local Hazard Indicators (LHIs) from a hindcasted storm with use of a nearshore morphodynamic model, XBeach, and (2) coupling these LHIs and building characteristics to the observed damages. We chose a Bayesian Network approach in order to make this coupling and used the LHIs ‘Inundation depth’, ‘Flow velocity’, ‘Wave attack’, and ‘Scour depth’ to represent flooding, current, wave impacts, and erosion related hazards. The coupled hazard model was tested against four thousand damage observations from a case site at the Rockaway Peninsula, NY, that was impacted by Hurricane Sandy in late October, 2012. The model was able to accurately distinguish ‘Minor damage’ from all other outcomes 95% of the time and could distinguish areas that were affected by the storm, but not severely damaged, 68% of the time. For the most heavily damaged buildings (‘Major Damage’ and ‘Destroyed’), projections of the expected damage underestimated the observed damage. The model demonstrated that including multiple hazards doubled the prediction skill, with Log-Likelihood Ratio test (a measure of improved accuracy and reduction in uncertainty) scores between 0.02 and 0
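The core Bayesian network operation, updating a distribution over damage states given observed hazard indicators, can be illustrated with a deliberately naive sketch that assumes the indicators are conditionally independent given the damage state (a simplification of the network in the paper; all probabilities below are invented):

```python
def damage_posterior(cpt, prior, evidence):
    """Discrete Bayesian update: posterior over damage states given
    observed Local Hazard Indicator values.  `cpt` maps each indicator
    to P(value | damage state); `prior` is P(damage state); `evidence`
    maps indicators to observed values.  Assumes conditional
    independence of indicators given the damage state."""
    post = {}
    for state, p in prior.items():
        likelihood = 1.0
        for var, val in evidence.items():
            likelihood *= cpt[var][state][val]
        post[state] = p * likelihood
    z = sum(post.values())               # normalising constant
    return {s: v / z for s, v in post.items()}
```

A real network additionally encodes dependencies among the indicators themselves (e.g. scour depth depending on flow velocity), which is what lets evidence on one LHI sharpen predictions for the others.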
NASA Astrophysics Data System (ADS)
Vidar Vangelsten, Bjørn; Fornes, Petter; Cepeda, Jose Mauricio; Ekseth, Kristine Helene; Eidsvig, Unni; Ormukov, Cholponbek
2015-04-01
Landslides are a significant threat to human life and the built environment in many parts of Central Asia. To improve understanding of the magnitude of the threat and propose appropriate risk mitigation measures, landslide hazard mapping is needed both at regional and local level. Many different approaches for landslide hazard mapping exist depending on the scale and purpose of the analysis and what input data are available. This paper presents a probabilistic local scale landslide hazard mapping methodology for rainfall triggered landslides, adapted to the relatively dry climate found in South-Western Kyrgyzstan. The GIS based approach makes use of data on topography, geology, land use and soil characteristics to assess landslide susceptibility. Together with a selected rainfall scenario, these data are inserted into a triggering model based on an infinite slope formulation considering pore pressure and suction effects for unsaturated soils. A statistical model based on local landslide data has been developed to estimate landslide run-out. The model links the spatial extension of the landslide to land use and geological features. The model is tested and validated for the town of Suluktu in the Ferghana Valley in South-West Kyrgyzstan. Landslide hazard is estimated for the urban area and the surrounding hillsides. The case makes use of a range of data from different sources, both remote sensing data and in-situ data. Public global data sources are mixed with case specific data obtained from field work. The different data and models have various degrees of uncertainty. To account for this, the hazard model has been inserted into a Monte Carlo simulation framework to produce a probabilistic landslide hazard map identifying areas with high landslide exposure. The research leading to these results has received funding from the European Commission's Seventh Framework Programme [FP7/2007-2013], under grant agreement n° 312972 "Framework to integrate Space-based and in
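The triggering idea described above, an infinite-slope stability formula evaluated inside a Monte Carlo loop, can be sketched as follows. The parameter ranges and the simple water-table ratio are illustrative assumptions; the paper's calibrated model additionally treats pore pressure and suction in unsaturated soils:

```python
import math
import random

def infinite_slope_fs(c, phi_deg, gamma, depth, slope_deg, m):
    """Factor of safety for an infinite slope.

    c: effective cohesion (kPa); phi_deg: friction angle (deg);
    gamma: soil unit weight (kN/m^3); depth: failure-plane depth (m);
    slope_deg: slope angle (deg); m: water-table ratio in [0, 1].
    """
    gamma_w = 9.81  # unit weight of water, kN/m^3
    beta = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    resisting = c + (gamma - m * gamma_w) * depth * math.cos(beta) ** 2 * math.tan(phi)
    driving = gamma * depth * math.sin(beta) * math.cos(beta)
    return resisting / driving

def failure_probability(n=10000, seed=1):
    """Monte Carlo estimate of P(FS < 1) under illustrative parameter
    ranges standing in for the uncertain soil and rainfall inputs."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        c = rng.uniform(2.0, 8.0)      # cohesion, kPa
        phi = rng.uniform(25.0, 35.0)  # friction angle, deg
        m = rng.uniform(0.0, 1.0)      # saturation for the rainfall scenario
        if infinite_slope_fs(c, phi, 19.0, 2.0, 35.0, m) < 1.0:
            failures += 1
    return failures / n
```

Mapping this per-cell probability over a GIS grid of slope, soil, and land-use layers is what produces the probabilistic hazard map the abstract describes.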
The Framework of a Coastal Hazards Model - A Tool for Predicting the Impact of Severe Storms
Barnard, Patrick L.; O'Reilly, Bill; van Ormondt, Maarten; Elias, Edwin; Ruggiero, Peter; Erikson, Li H.; Hapke, Cheryl; Collins, Brian D.; Guza, Robert T.; Adams, Peter N.; Thomas, Julie
2009-01-01
The U.S. Geological Survey (USGS) Multi-Hazards Demonstration Project in Southern California (Jones and others, 2007) is a five-year project (FY2007-FY2011) integrating multiple USGS research activities with the needs of external partners, such as emergency managers and land-use planners, to produce products and information that can be used to create more disaster-resilient communities. The hazards being evaluated include earthquakes, landslides, floods, tsunamis, wildfires, and coastal hazards. For the Coastal Hazards Task of the Multi-Hazards Demonstration Project in Southern California, the USGS is leading the development of a modeling system for forecasting the impact of winter storms threatening the entire Southern California shoreline from Pt. Conception to the Mexican border. The modeling system, run in real-time or with prescribed scenarios, will incorporate atmospheric information (that is, wind and pressure fields) with a suite of state-of-the-art physical process models (that is, tide, surge, and wave) to enable detailed prediction of currents, wave height, wave runup, and total water levels. Additional research-grade predictions of coastal flooding, inundation, erosion, and cliff failure will also be performed. Initial model testing, performance evaluation, and product development will be focused on a severe winter-storm scenario developed in collaboration with the Winter Storm Working Group of the USGS Multi-Hazards Demonstration Project in Southern California. Additional offline model runs and products will include coastal-hazard hindcasts of selected historical winter storms, as well as additional severe winter-storm simulations based on statistical analyses of historical wave and water-level data. The coastal-hazards model design will also be appropriate for simulating the impact of storms under various sea level rise and climate-change scenarios. The operational capabilities of this modeling system are designed to provide emergency planners with
Li, Tianyi; Zhai, Xuan; Jiang, Jinqiu; Song, Xiaojie; Han, Wei; Ma, Jiannan; Xie, Lingling; Cheng, Li; Chen, Hengsheng; Jiang, Li
2017-02-15
Recent studies have reported microglia that are activated in the central nervous system (CNS) in patients with temporal lobe epilepsy and animal models of epilepsy. However, limited data are available on the dynamic changes of the proportions of various phenotypes of microglia throughout epileptogenesis and whether IL-4/IFN-γ administration can modulate the proportions of microglial phenotypes to affect the outcome of epilepsy. The current study examined this issue using a mouse model of pilocarpine-induced epilepsy. Flow cytometry showed that classically activated microglia (M1) and alternatively activated microglia (M2) underwent variations throughout the stages of epileptogenesis. The altered trends in the microglia-associated cytokines IL-1β, IL-4, and IL-10 paralleled the changes in phenotype proportions. We found that intraperitoneal injections of IL-4 and IFN-γ, which have been reported to modulate the phenotypes of microglia in vitro, also affected the proportion of microglia in vivo. In addition, correctly timing the modulation of the proportion of microglia improved the outcomes of epilepsy based on the reduced frequency, duration, and severity of spontaneous recurrent seizures (SRS) and increased the performances of the mice in the Morris water maze. This study is the first to report altering the proportion of microglial phenotypes in pilocarpine-induced epileptogenesis. Intraperitoneal injection of IL-4/IFN-γ could be used to modulate the proportions of the types of microglia, and epilepsy outcomes could be improved by correctly timing this modulation of phenotypes. Copyright © 2016 Elsevier B.V. All rights reserved.
Lava flow modelling in long and short-term hazard assessment
NASA Astrophysics Data System (ADS)
Martí, Joan; Becerril, Laura; Bartolini, Stefania
2017-04-01
Lava flows constitute the commonest volcano hazard resulting from a non-explosive eruption, especially in basaltic systems. These flows come in many shapes and sizes and have a wide range of surface morphologies (pahoehoe, aa, blocky, etc.) whose differences are mainly controlled by variations in magma viscosity and supply rate at the time of the eruption. The principal constraint on lava emplacement is topography, so flows tend to invade the lowest-lying areas. Modelling such complex non-Newtonian flows is not an easy task, as many of the parameters required to precisely define flow behaviour are not known. This, in addition to the high computational cost required, is one of the reasons why deterministic models are not preferred when conducting long- and short-term hazard assessment. By contrast, probabilistic models, despite being much less precise, offer a rapid approach to lava flow invasion and fulfil the main needs of lava flow hazard analysis, with a much smaller computational demand and, consequently, a much wider applicability. In this contribution we analyse the main problems that exist in lava flow modelling, compare deterministic and probabilistic models, and show the application of probabilistic models in long- and short-term hazard assessment. This contribution is part of the EC ECHO SI2.695524:VeTOOLS and EPOS-IP AMD-676564-42 Grants.
Measurements and Models for Hazardous Chemical and Mixed Wastes
Laurel A. Watts; Cynthia D. Holcomb; Stephanie L. Outcalt; Beverly Louie; Michael E. Mullins; Tony N. Rogers
2002-08-21
Mixed solvent aqueous waste of various chemical compositions constitutes a significant fraction of the total waste produced by industry in the United States. Not only does the chemical process industry create large quantities of aqueous waste, but the majority of the waste inventory at the DOE sites previously used for nuclear weapons production is mixed solvent aqueous waste. In addition, large quantities of waste are expected to be generated in the clean-up of those sites. In order to effectively treat, safely handle, and properly dispose of these wastes, accurate and comprehensive knowledge of basic thermophysical properties is essential. The goal of this work is to develop a phase equilibrium model for mixed solvent aqueous solutions containing salts. An equation of state was sought for these mixtures that (a) would require a minimum of adjustable parameters and (b) could be obtained from available data or data that were easily measured. A model was developed to predict vapor composition and pressure given the liquid composition and temperature. It is based on the Peng-Robinson equation of state, adapted to include non-volatile and salt components. The model itself is capable of predicting the vapor-liquid equilibria of a wide variety of systems composed of water, organic solvents, salts, nonvolatile solutes, and acids or bases. The representative system of water + acetone + 2-propanol + NaNO3 was selected to test and verify the model. Vapor-liquid equilibrium and phase density measurements were performed for this system and its constituent binaries.
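The pressure-explicit form of the standard Peng-Robinson equation of state that underlies such a model can be sketched as below. The critical constants used in the test are textbook values for water; the sketch deliberately omits the salt and non-volatile-component adaptations the authors describe:

```python
import math

R = 8.314462618  # universal gas constant, J/(mol*K)

def pr_pressure(T, v, Tc, pc, omega):
    """Peng-Robinson pressure (Pa) for a pure component.

    T: temperature (K); v: molar volume (m^3/mol);
    Tc, pc: critical temperature (K) and pressure (Pa);
    omega: acentric factor.
    """
    a = 0.45724 * R ** 2 * Tc ** 2 / pc          # attraction parameter
    b = 0.07780 * R * Tc / pc                    # co-volume
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega ** 2
    alpha = (1.0 + kappa * (1.0 - math.sqrt(T / Tc))) ** 2
    return R * T / (v - b) - a * alpha / (v * v + 2.0 * b * v - b * b)
```

At large molar volumes the result approaches the ideal-gas pressure RT/v from below, since the attraction term dominates the small co-volume correction.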
NASA Technical Reports Server (NTRS)
Roberts, Dar A.; Church, Richard; Ustin, Susan L.; Brass, James A. (Technical Monitor)
2001-01-01
Large urban wildfires throughout southern California have caused billions of dollars of damage and significant loss of life over the last few decades. Rapid urban growth along the wildland interface, high fuel loads and a potential increase in the frequency of large fires due to climatic change suggest that the problem will worsen in the future. Improved fire spread prediction and reduced uncertainty in assessing fire hazard would be significant, both economically and socially. Current problems in the modeling of fire spread include the role of plant community differences, spatial heterogeneity in fuels and spatio-temporal changes in fuels. In this research, we evaluated the potential of Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Airborne Synthetic Aperture Radar (AIRSAR) data for providing improved maps of wildfire fuel properties. Analysis concentrated in two areas of Southern California, the Santa Monica Mountains and Santa Barbara Front Range. Wildfire fuel information can be divided into four basic categories: fuel type, fuel load (live green and woody biomass), fuel moisture and fuel condition (live vs senesced fuels). To map fuel type, AVIRIS data were used to map vegetation species using Multiple Endmember Spectral Mixture Analysis (MESMA) and Binary Decision Trees. Green live biomass and canopy moisture were mapped using AVIRIS through analysis of the 980 nm liquid water absorption feature and compared to alternate measures of moisture and field measurements. Woody biomass was mapped using L and P band cross polarimetric data acquired in 1998 and 1999. Fuel condition was mapped using spectral mixture analysis to map green vegetation (green leaves), nonphotosynthetic vegetation (NPV; stems, wood and litter), shade and soil. Summaries describing the potential of hyperspectral and SAR data for fuel mapping are provided by Roberts et al. and Dennison et al. To utilize remotely sensed data to assess fire hazard, fuel-type maps were translated
Modeling downwind hazards after an accidental release of chlorine trifluoride
Lombardi, D.A.; Cheng, Meng-Dawn
1996-05-01
A module simulating ClF{sub 3} chemical reactions with water vapor and thermodynamic processes in the atmosphere after an accidental release has been developed. This module was linked to the HGSYSTEM. Initial model runs simulate the rapid formation of HF and ClO{sub 2} after an atmospheric release of ClF{sub 3}. At distances beyond the first several meters from the release point, HF and ClO{sub 2} concentrations pose a greater threat to human health than do ClF{sub 3} concentrations. For most of the simulations, ClF{sub 3} concentrations rapidly fall below the IDLH. For releases occurring in ambient conditions with low relative humidity and/or low ambient temperature, ClF{sub 3} concentrations exceed the IDLH up to almost 500 m. The performance of this model needs to be determined for the potential release scenarios that will be considered; these release scenarios are currently being developed.
Medical Modeling of Particle Size Effects for CB Inhalation Hazards
2015-09-01
can contain no organisms. As the particle size decreases toward that of an organism (~1 micron for F. tularensis bacteria), some particles may...set of lung morphologies is also available. The model can calculate deposition in three regions, extrathoracic (ET), tracheobronchial (TB) and...the former type (Day and Berendt, 1972) and spores of B. anthracis are of the latter type (Druett et al., 1953). Bacteria are fairly large, on the
Large area application of a corn hazard model. [Soviet Union
NASA Technical Reports Server (NTRS)
Ashburn, P.; Taylor, T. W. (Principal Investigator)
1981-01-01
An application test of the crop calendar portion of a corn (maize) stress indicator model developed by the early warning, crop condition assessment component of AgRISTARS was performed over the corn-for-grain producing regions of the U.S.S.R. during the 1980 crop year using real data. Performance of the crop calendar submodel was favorable; efficiency gains in meteorological data analysis time were on the order of 85 to 90 percent.
Measurement and Model for Hazardous Chemical and Mixed Waste
Michael E. Mullins; Tony N. Rogers; Stephanie L. Outcalt; Beverly Louie; Laurel A. Watts; Cynthia D. Holcomb
2002-07-30
Mixed solvent aqueous waste of various chemical compositions constitutes a significant fraction of the total waste produced by industry in the United States. Not only does the chemical process industry create large quantities of aqueous waste, but the majority of the waste inventory at the Department of Energy (DOE) sites previously used for nuclear weapons production is mixed solvent aqueous waste. In addition, large quantities of waste are expected to be generated in the clean-up of those sites. In order to effectively treat, safely handle, and properly dispose of these wastes, accurate and comprehensive knowledge of basic thermophysical properties is essential. The goal of this work is to develop a phase equilibrium model for mixed solvent aqueous solutions containing salts. An equation of state was sought for these mixtures that (a) would require a minimum of adjustable parameters and (b) could be obtained from a available data or data that were easily measured. A model was developed to predict vapor composition and pressure given the liquid composition and temperature. It is based on the Peng-Robinson equation of state, adapted to include non-volatile and salt components. The model itself is capable of predicting the vapor-liquid equilibria of a wide variety of systems composed of water, organic solvents, salts, nonvolatile solutes, and acids or bases. The representative system of water + acetone + 2-propanol + NaNO3 was selected to test and verify the model. Vapor-liquid equilibrium and phase density measurements were performed for this system and its constituent binaries.
Petersen, Mark D.; Mueller, Charles S.; Moschetti, Morgan P.; Hoover, Susan M.; Rubinstein, Justin L.; Llenos, Andrea L.; Michael, Andrew J.; Ellsworth, William L.; McGarr, Arthur F.; Holland, Austin A.; Anderson, John G.
2015-01-01
The U.S. Geological Survey National Seismic Hazard Model for the conterminous United States was updated in 2014 to account for new methods, input models, and data necessary for assessing the seismic ground shaking hazard from natural (tectonic) earthquakes. The U.S. Geological Survey National Seismic Hazard Model project uses probabilistic seismic hazard analysis to quantify the rate of exceedance for earthquake ground shaking (ground motion). For the 2014 National Seismic Hazard Model assessment, the seismic hazard from potentially induced earthquakes was intentionally not considered because we had not determined how to properly treat these earthquakes for the seismic hazard analysis. The phrases “potentially induced” and “induced” are used interchangeably in this report; however, it is acknowledged that this classification is based on circumstantial evidence and scientific judgment. For the 2014 National Seismic Hazard Model update, the potentially induced earthquakes were removed from the NSHM’s earthquake catalog, and the documentation states that we would consider alternative models for including induced seismicity in a future version of the National Seismic Hazard Model. As part of the process of incorporating induced seismicity into the seismic hazard model, we evaluate the sensitivity of the seismic hazard from induced seismicity to five parts of the hazard model: (1) the earthquake catalog, (2) earthquake rates, (3) earthquake locations, (4) earthquake Mmax (maximum magnitude), and (5) earthquake ground motions. We describe alternative input models for each of the five parts that represent differences in scientific opinions on induced seismicity characteristics. In this report, however, we do not weight these input models to come up with a preferred final model. Instead, we present a sensitivity study showing uniform seismic hazard maps obtained by applying the alternative input models for induced seismicity. The final model will be released after
Fitting additive hazards models for case-cohort studies: a multiple imputation approach.
Jung, Jinhyouk; Harel, Ofer; Kang, Sangwook
2016-07-30
In this paper, we consider fitting semiparametric additive hazards models for case-cohort studies using a multiple imputation approach. In a case-cohort study, main exposure variables are measured only on some selected subjects, but other covariates are often available for the whole cohort. We consider this as a special case of a missing covariate by design. We propose to employ a popular incomplete data method, multiple imputation, for estimation of the regression parameters in additive hazards models. For imputation models, an imputation modeling procedure based on a rejection sampling is developed. A simple imputation modeling that can naturally be applied to a general missing-at-random situation is also considered and compared with the rejection sampling method via extensive simulation studies. In addition, a misspecification aspect in imputation modeling is investigated. The proposed procedures are illustrated using a cancer data example. Copyright © 2015 John Wiley & Sons, Ltd.
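The pooling step common to all multiple imputation analyses, including the one above, follows Rubin's rules: combine the m completed-data estimates and their variances into one estimate with a total variance that reflects between-imputation spread. This is a generic sketch of that step, not the authors' rejection-sampling imputation procedure:

```python
from statistics import mean, variance

def pool_estimates(estimates, variances):
    """Combine m completed-data results via Rubin's rules.

    estimates: point estimates of a regression coefficient from the
    m imputed data sets; variances: their squared standard errors.
    Returns (pooled estimate, total variance).
    """
    m = len(estimates)
    q_bar = mean(estimates)            # pooled point estimate
    w = mean(variances)                # within-imputation variance
    b = variance(estimates)            # between-imputation variance
    t = w + (1.0 + 1.0 / m) * b        # total variance
    return q_bar, t
```

With three imputed coefficients 1.0, 1.2, 0.8 and variances 0.04, 0.05, 0.03, the pooled estimate is 1.0 and the total variance exceeds the average within-imputation variance, reflecting imputation uncertainty.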
Seismic source characterization for the 2014 update of the U.S. National Seismic Hazard Model
Moschetti, Morgan P.; Powers, Peter; Petersen, Mark D.; Boyd, Oliver; Chen, Rui; Field, Edward H.; Frankel, Arthur; Haller, Kathleen; Harmsen, Stephen; Mueller, Charles S.; Wheeler, Russell; Zeng, Yuehua
2015-01-01
We present the updated seismic source characterization (SSC) for the 2014 update of the National Seismic Hazard Model (NSHM) for the conterminous United States. Construction of the seismic source models employs the methodology that was developed for the 1996 NSHM but includes new and updated data, data types, source models, and source parameters that reflect the current state of knowledge of earthquake occurrence and state of practice for seismic hazard analyses. We review the SSC parameterization and describe the methods used to estimate earthquake rates, magnitudes, locations, and geometries for all seismic source models, with an emphasis on new source model components. We highlight the effects that two new model components—incorporation of slip rates from combined geodetic-geologic inversions and the incorporation of adaptively smoothed seismicity models—have on probabilistic ground motions, because these sources span multiple regions of the conterminous United States and provide important additional epistemic uncertainty for the 2014 NSHM.
Modeling geomagnetic induction hazards using a 3-D electrical conductivity model of Australia
NASA Astrophysics Data System (ADS)
Wang, Liejun; Lewis, Andrew M.; Ogawa, Yasuo; Jones, William V.; Costelloe, Marina T.
2016-12-01
The surface electric field induced by external geomagnetic source fields is modeled for a continental-scale 3-D electrical conductivity model of Australia at periods of a few minutes to a few hours. The amplitude and orientation of the induced electric field at periods of 360 s and 1800 s are presented and compared to those derived from a simplified ocean-continent (OC) electrical conductivity model. It is found that the induced electric field in the Australian region is distorted by the heterogeneous continental electrical conductivity structures and surrounding oceans. On the northern coastlines, the induced electric field is decreased relative to the simple OC model due to a reduced conductivity contrast between the seas and the enhanced conductivity structures inland. In central Australia, the induced electric field is less distorted with respect to the OC model as the location is remote from the oceans, but inland crustal high-conductivity anomalies are the major source of distortion of the induced electric field. In the west of the continent, the lower conductivity of the Western Australia Craton increases the conductivity contrast between the deeper oceans and land and significantly enhances the induced electric field. Generally, the induced electric field in southern Australia, south of latitude -20°, is higher compared to northern Australia. This paper provides a regional indicator of geomagnetic induction hazards across Australia.
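The scaling at work in this abstract, a more resistive crust yielding a larger induced electric field, can be sketched with the plane-wave surface impedance of a uniform half-space, Z = sqrt(i·omega·mu0·rho). This is the textbook 1-D approximation, not the authors' 3-D conductivity model of Australia; the inputs below are illustrative:

```python
import cmath
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def surface_impedance(period_s, resistivity_ohm_m):
    """Plane-wave surface impedance of a uniform half-space (ohms)."""
    omega = 2.0 * math.pi / period_s
    return cmath.sqrt(1j * omega * MU0 * resistivity_ohm_m)

def induced_e_field(period_s, resistivity_ohm_m, b_nT):
    """Magnitude of the induced horizontal electric field (V/km) for a
    magnetic variation of amplitude b_nT, using |E| = |Z| * |H|, H = B/mu0."""
    z = surface_impedance(period_s, resistivity_ohm_m)
    h = b_nT * 1e-9 / MU0        # magnetic field amplitude, A/m
    return abs(z) * h * 1000.0   # convert V/m to V/km
```

Because |Z| grows as sqrt(rho), a 100-fold resistivity contrast (such as between a craton and conductive sediments) boosts the induced field tenfold, which is the kind of enhancement reported over the Western Australia Craton.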
Markitsis, Anastasios; Lai, Yinglei
2010-01-01
Motivation: The proportion of non-differentially expressed genes (π0) is an important quantity in microarray data analysis. Although many statistical methods have been proposed for its estimation, it is still necessary to develop more efficient methods. Methods: Our approach for improving π0 estimation is to modify an existing simple method by introducing artificial censoring to P-values. In a comprehensive simulation study and the applications to experimental datasets, we compare our method with eight existing estimation methods. Results: The simulation study confirms that our method can clearly improve the estimation performance. Compared with the existing methods, our method can generally provide a relatively accurate estimate with relatively small variance. Using experimental microarray datasets, we also demonstrate that our method can generally provide satisfactory estimates in practice. Availability: The R code is freely available at http://home.gwu.edu/~ylai/research/CBpi0/. Contact: ylai@gwu.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20080506
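A common baseline that estimators of this kind refine is the simple tail-count (Storey-type) estimate of the null proportion: under the null, P-values are uniform, so counting P-values above a threshold lambda and rescaling estimates π0. This sketch shows only that baseline, not the censoring modification the abstract proposes:

```python
def estimate_pi0(pvalues, lam=0.5):
    """Storey-type estimate of the proportion of true null hypotheses:
    pi0 ~= #{p > lambda} / ((1 - lambda) * m), capped at 1.0."""
    m = len(pvalues)
    tail_count = sum(1 for p in pvalues if p > lam)
    return min(1.0, tail_count / ((1.0 - lam) * m))
```

For a pure-null set of uniform P-values the estimate is close to 1; mixing in a block of very small P-values pulls it down toward the true null fraction.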
Smoothed Seismicity Models for the 2016 Italian Probabilistic Seismic Hazard Map
NASA Astrophysics Data System (ADS)
Akinci, A.; Moschetti, M. P.; Taroni, M.
2016-12-01
For the first time, the 2016 Italian Probabilistic Seismic Hazard maps will incorporate smoothed seismicity models as a part of the earthquake rate forecast. In this study we report progress on the comparison of smoothed seismicity models developed using fixed and adaptive smoothing algorithms, and investigate the sensitivity of seismic hazard to the models. Recent developments in adaptive smoothing methods and statistical tests for evaluating and comparing rate models prompt us to investigate the appropriateness of adaptive smoothing methods for the Italian Hazard Maps. The approach of using spatially smoothed historical seismicity differs from the one used previously for Italy by Working Group MSP04 (2004) and Slejko et al. (1998), in which source zones were drawn around the seismicity and the tectonic provinces; this is the first time the approach is being used for the new probabilistic seismic hazard maps for Italy. We develop two different smoothed seismicity models using fixed (Frankel, 1995) and adaptive (Helmstetter et al., 2007) smoothing methods and compare the resulting models (Werner et al., 2011; Moschetti, 2014) by calculating and evaluating the joint likelihood test. The smoothed seismicity models are constructed from the new historical (CPTI15) and instrumental Italian earthquake catalogues and associated completeness levels to produce a space-time forecast of future Italian seismicity. We follow guidance from previous studies to optimize the neighbor number (n-value) by comparing model likelihood values, which estimate the likelihood that the observed earthquake epicenters from the recent catalog are derived from the smoothed rate models. We compare likelihood values from all rate models to rank the smoothing methods. We also compare these two models with the Italian CSEP experiment models to check their relative performances.
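The fixed-bandwidth (Frankel-1995 style) smoothing compared in this abstract spreads each cell's earthquake count over its neighbors with a Gaussian kernel of constant width. A minimal sketch on a small grid, with brute-force loops for clarity rather than the production implementation:

```python
import math

def smooth_rates(counts, cell_km, sigma_km):
    """Fixed-bandwidth Gaussian smoothing of gridded earthquake counts:
    n_i' = sum_j n_j * exp(-d_ij^2 / sigma^2) / sum_j exp(-d_ij^2 / sigma^2).

    counts: 2-D list of per-cell counts; cell_km: grid spacing;
    sigma_km: the fixed smoothing distance. O(n^4) loops, toy grids only.
    """
    ny, nx = len(counts), len(counts[0])
    out = [[0.0] * nx for _ in range(ny)]
    for i in range(ny):
        for j in range(nx):
            num = den = 0.0
            for k in range(ny):
                for l in range(nx):
                    d2 = ((i - k) ** 2 + (j - l) ** 2) * cell_km ** 2
                    w = math.exp(-d2 / sigma_km ** 2)
                    num += counts[k][l] * w
                    den += w
            out[i][j] = num / den
    return out
```

An adaptive-kernel variant (Helmstetter et al., 2007) would instead let sigma shrink in dense seismicity and grow in sparse regions, which is the trade-off the likelihood tests in the abstract evaluate.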
NASA Astrophysics Data System (ADS)
Komjathy, A.; Yang, Y. M.; Meng, X.; Verkhoglyadova, O. P.; Mannucci, A. J.; Langley, R. B.
2015-12-01
Natural hazards, including earthquakes, volcanic eruptions, and tsunamis, have been significant threats to humans throughout recorded history. The Global Positioning System satellites have become primary sensors to measure signatures associated with such natural hazards. These signatures typically include GPS-derived seismic deformation measurements, co-seismic vertical displacements, and real-time GPS-derived ocean buoy positioning estimates. Another way to use GPS observables is to compute the ionospheric total electron content (TEC) to measure and monitor post-seismic ionospheric disturbances caused by earthquakes, volcanic eruptions, and tsunamis. Research at the University of New Brunswick (UNB) laid the foundations to model the three-dimensional ionosphere at NASA's Jet Propulsion Laboratory by ingesting ground- and space-based GPS measurements into the state-of-the-art Global Assimilative Ionosphere Modeling (GAIM) software. As an outcome of the UNB and NASA research, new and innovative GPS applications have been invented, including the use of ionospheric measurements to detect tiny fluctuations in the GPS signals between the spacecraft and GPS receivers caused by natural hazards occurring on or near the Earth's surface. We will show examples for early detection of natural hazards generated ionospheric signatures using ground-based and space-borne GPS receivers. We will also discuss recent results from the U.S. Real-time Earthquake Analysis for Disaster Mitigation Network (READI) exercises utilizing our algorithms. By studying the propagation properties of ionospheric perturbations generated by natural hazards along with applying sophisticated first-principles physics-based modeling, we are on track to develop new technologies that can potentially save human lives and minimize property damage. It is also expected that ionospheric monitoring of TEC perturbations might become an integral part of existing natural hazards warning systems.
Korenromp, Eline L; Bakker, Roel; De Vlas, Sake J; Robinson, N Jamie; Hayes, Richard; Habbema, J Dik F
2002-04-01
The proportion of cases of genital ulcer disease attributable to herpes simplex virus type 2 (HSV-2) appears to be increasing in sub-Saharan Africa. Our objective was to assess the contributions of HIV disease, and of the behavioral response to the HIV epidemic, to the increasing proportion of genital ulcer disease (GUD) attributable to HSV-2 in sub-Saharan Africa. We simulated the transmission dynamics of ulcerative sexually transmitted diseases (STDs) and HIV using the model STDSIM. In simulations, 28% of GUD was caused by HSV-2 before a severe HIV epidemic. If HIV disease was assumed to double the duration and frequency of HSV-2 recurrences, this proportion rose to 35% by the year 2000. If stronger effects of HIV were assumed, this proportion rose further, but the resulting increase in HSV-2 transmission would shift the peak in HSV-2 seroprevalence to an unrealistically young age. A simulated 25% reduction in partner-change rates increased the proportion of GUD caused by HSV-2 to 56%, following relatively large decreases in chancroid and syphilis. Behavioral change may make an important contribution to relative increases in genital herpes.
DEVELOPMENT AND ANALYSIS OF AIR QUALITY MODELING SIMULATIONS FOR HAZARDOUS AIR POLLUTANTS
The concentrations of five hazardous air pollutants were simulated using the Community Multi Scale Air Quality (CMAQ) modeling system. Annual simulations were performed over the continental United States for the entire year of 2001 to support human exposure estimates. Results a...
NASA Astrophysics Data System (ADS)
Gochis, E. E.; Lechner, H. N.; Brill, K. A.; Lerner, G.; Ramos, E.
2014-12-01
Graduate students at Michigan Technological University developed the "Landslides!" activity to engage middle & high school students participating in summer engineering programs in a hands-on exploration of geologic engineering and STEM (Science, Technology, Engineering and Math) principles. The inquiry-based lesson plan is aligned to Next Generation Science Standards and is appropriate for 6th-12th grade classrooms. During the activity students focus on the factors contributing to landslide development and engineering practices used to mitigate hazards of slope stability hazards. Students begin by comparing different soil types and by developing predictions of how sediment type may contribute to differences in slope stability. Working in groups, students then build tabletop hill-slope models from the various materials in order to engage in evidence-based reasoning and test their predictions by adding groundwater until each group's modeled slope fails. Lastly students elaborate on their understanding of landslides by designing 'engineering solutions' to mitigate the hazards observed in each model. Post-evaluations from students demonstrate that they enjoyed the hands-on nature of the activity and the application of engineering principles to mitigate a modeled natural hazard.
Prediction of earthquake hazard by hidden Markov model (around Bilecik, NW Turkey)
NASA Astrophysics Data System (ADS)
Can, Ceren Eda; Ergun, Gul; Gokceoglu, Candan
2014-09-01
Earthquakes are one of the most important natural hazards to be evaluated carefully in engineering projects, due to their severely damaging effects on human life and human-made structures. The hazard of an earthquake is defined by several approaches, and consequently earthquake parameters such as the peak ground acceleration occurring in the focused area can be determined. In an earthquake-prone area, the identification of seismicity patterns is an important task in assessing seismic activity and evaluating the risk of damage and loss that accompany an earthquake occurrence. As a powerful and flexible framework to characterize temporal seismicity changes and reveal unexpected patterns, the Poisson hidden Markov model provides a better understanding of the nature of earthquakes. In this paper, a Poisson hidden Markov model is used to predict the earthquake hazard around Bilecik (NW Turkey), an area chosen for its important geographic location. Bilecik is in close proximity to the North Anatolian Fault Zone and situated between Ankara and Istanbul, the two biggest cities of Turkey. Consequently, major highways and railroads cross this area and many engineering structures are being constructed there. The annual frequencies of earthquakes occurring within a radius of 100 km centered on Bilecik, from January 1900 to December 2012, with magnitudes (M) of at least 4.0, are modeled using a Poisson-HMM. The hazards for the next 35 years, from 2013 to 2047, are obtained from the model by forecasting the annual frequencies of M ≥ 4 earthquakes.
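The forecasting machinery of a Poisson hidden Markov model can be sketched with a forward filter over annual earthquake counts: filter the hidden state (e.g. quiet vs. active seismicity), propagate it one step, and take the rate-weighted expected count. The two-state rates, transition matrix, and counts below are illustrative stand-ins, not the fitted Bilecik model:

```python
import math

def poisson_pmf(k, lam):
    """P(N = k) for a Poisson(lam) count."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def phmm_expected_next(counts, rates, trans, init):
    """Forward filter for a Poisson hidden Markov model over annual
    counts; returns the expected count for the next year.

    rates: per-state Poisson means; trans[i][j]: state transition
    probabilities; init: initial state distribution.
    """
    n = len(rates)
    alpha = [init[s] * poisson_pmf(counts[0], rates[s]) for s in range(n)]
    total = sum(alpha)
    alpha = [a / total for a in alpha]
    for c in counts[1:]:
        alpha = [poisson_pmf(c, rates[s]) *
                 sum(alpha[r] * trans[r][s] for r in range(n))
                 for s in range(n)]
        total = sum(alpha)
        alpha = [a / total for a in alpha]  # normalized filtered state probs
    # propagate the state one step ahead, then take the rate-weighted mean
    pred = [sum(alpha[r] * trans[r][s] for r in range(n)) for s in range(n)]
    return sum(pred[s] * rates[s] for s in range(n))
```

After a run of high annual counts, the filter places most probability on the active state, so the forecast sits near the active-state rate rather than the long-run average.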
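The Poisson hidden Markov model described in the abstract above can be sketched with a scaled forward recursion. The two-state parameters below (initial distribution, transition matrix, state rates) are illustrative assumptions, not the values fitted for Bilecik.

```python
import numpy as np
from scipy.stats import poisson

def forward_loglik(counts, pi0, A, lams):
    """Log-likelihood of annual event counts under a Poisson HMM,
    computed with the scaled forward recursion; also returns the
    filtered state distribution after the last observation."""
    alpha = pi0 * poisson.pmf(counts[0], lams)
    logl = np.log(alpha.sum())
    alpha /= alpha.sum()
    for c in counts[1:]:
        alpha = (alpha @ A) * poisson.pmf(c, lams)
        s = alpha.sum()
        logl += np.log(s)
        alpha /= s
    return logl, alpha

def forecast_rate(counts, pi0, A, lams):
    """Expected count next year: filtered states propagated one step."""
    _, alpha = forward_loglik(counts, pi0, A, lams)
    return (alpha @ A) @ lams

# illustrative two-state model: a 'quiet' and an 'active' seismicity regime
pi0  = np.array([0.6, 0.4])
A    = np.array([[0.9, 0.1],
                 [0.2, 0.8]])
lams = np.array([2.0, 8.0])            # assumed mean annual counts of M >= 4 events
counts = np.array([1, 3, 2, 9, 7, 8, 2])
```

In practice the parameters would be estimated from the catalogue (e.g. by EM) before forecasting.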
Global river flood hazard maps: hydraulic modelling methods and appropriate uses
NASA Astrophysics Data System (ADS)
Townend, Samuel; Smith, Helen; Molloy, James
2014-05-01
Flood hazard is not well understood or documented in many parts of the world. Consequently, the (re-)insurance sector now needs to better understand where the potential for considerable river flooding aligns with significant exposure. For example, international manufacturing companies are often attracted to countries with emerging economies, meaning that events such as the 2011 Thailand floods have resulted in many multinational businesses with assets in these regions incurring large, unexpected losses. This contribution addresses and critically evaluates the hydraulic methods employed to develop a consistent, global-scale set of river flood hazard maps, used to fill the knowledge gap outlined above. The basis of the modelling approach is an innovative, bespoke 1D/2D hydraulic model (RFlow) which has been used to model a global river network of over 5.3 million kilometres. Estimated flood peaks at each of these model nodes are determined using an empirically based rainfall-runoff approach linking design rainfall to design river flood magnitudes. The hydraulic model is used to determine extents and depths of floodplain inundation following river bank overflow. From this, deterministic flood hazard maps are calculated for several design return periods between 20 years and 1,500 years. Firstly, we will discuss the rationale behind the hydraulic modelling methods and inputs chosen to produce a consistent global-scale river flood hazard map. This will highlight how a model designed to work with global datasets can be more favourable for hydraulic modelling at the global scale, and why innovative techniques customised for broad-scale use are preferable to modifying existing hydraulic models. Similarly, the advantages and disadvantages of both 1D and 2D modelling will be explored and balanced against the time, computer and human resources available, particularly when using a Digital Surface Model at 30 m resolution. Finally, we will suggest some
Building a risk-targeted regional seismic hazard model for South-East Asia
NASA Astrophysics Data System (ADS)
Woessner, J.; Nyst, M.; Seyhan, E.
2015-12-01
The last decade has tragically shown the social and economic vulnerability of countries in South-East Asia to earthquake hazard and risk. While many disaster mitigation programs and initiatives to improve societal earthquake resilience are under way with the focus on saving lives and livelihoods, the risk management sector is challenged to develop appropriate models to cope with the economic consequences and impact on the insurance business. We present the source model and ground-motion model components suitable for a South-East Asia earthquake risk model covering Indonesia, Malaysia, the Philippines and the Indochinese countries. The source model builds upon refined modelling approaches to characterize 1) seismic activity on crustal faults from geologic and geodetic data, 2) seismicity along the interface of subduction zones and within the slabs, and 3) earthquakes not occurring on mapped fault structures. We elaborate on building a self-consistent rate model for the hazardous crustal fault systems (e.g. the Sumatra fault zone, the Philippine fault zone) as well as the subduction zones, and showcase some characteristics and sensitivities, due to existing uncertainties, in the rate and hazard space using a well-selected suite of ground motion prediction equations. Finally, we analyze the source model by quantifying the contribution by source type (e.g., subduction zone, crustal fault) to typical risk metrics (e.g., return-period losses, average annual loss) and reviewing their relative impact on various lines of business.
Joint Modeling of Covariates and Censoring Process Assuming Non-Constant Dropout Hazard.
Jaffa, Miran A; Jaffa, Ayad A
2016-06-01
In this manuscript we propose a novel approach for the analysis of longitudinal data that have informative dropout. We jointly model the slopes of covariates of interest and the censoring process, for which we assume a survival model with a logistic non-constant dropout hazard, in a likelihood function that is integrated over the random effects. Maximization of the marginal likelihood function yields maximum likelihood estimates for the population slopes and empirical Bayes estimates for the individual slopes, which are predicted using Gaussian quadrature. Our simulation study results indicated that the performance of this model is superior in terms of accuracy and validity of the estimates compared to other approaches, such as a logistic non-constant-hazard censoring model without covariates, a logistic constant-hazard censoring model with covariates, a bootstrapping approach, and mixed models. Sensitivity analyses for the dropout hazard and non-Gaussian errors were also undertaken to assess the robustness of the proposed approach to such violations. Our model was illustrated using a cohort of renal transplant patients with estimated glomerular filtration rate as the outcome of interest.
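The marginalisation over random effects that the abstract attributes to Gaussian quadrature can be sketched with Gauss-Hermite nodes. The logistic random-intercept likelihood below is a generic stand-in for illustration, not the authors' dropout model.

```python
import numpy as np

def gh_expectation(g, n=40, mu=0.0, sigma=1.0):
    """E[g(b)] for b ~ N(mu, sigma^2) via Gauss-Hermite quadrature."""
    nodes, weights = np.polynomial.hermite.hermgauss(n)
    b = mu + np.sqrt(2.0) * sigma * nodes         # change of variables
    return (weights / np.sqrt(np.pi)) @ g(b)

def marginal_lik(y, beta, sigma):
    """Marginal likelihood of binary responses y with a logistic random
    intercept b ~ N(0, sigma^2), integrated out numerically."""
    y = np.asarray(y, dtype=float)
    def cond(b):                                   # b: vector of quadrature nodes
        p = 1.0 / (1.0 + np.exp(-(beta + b)))
        return np.prod(p[None, :] ** y[:, None]
                       * (1.0 - p[None, :]) ** (1.0 - y[:, None]), axis=0)
    return gh_expectation(cond, sigma=sigma)
```

The same expectation operator applies to any conditional likelihood, which is why quadrature pairs naturally with maximization of the marginal likelihood.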
How new fault data and models affect seismic hazard results? Examples from southeast Spain
NASA Astrophysics Data System (ADS)
Gaspar-Escribano, Jorge M.; Belén Benito, M.; Staller, Alejandra; Ruiz Barajas, Sandra; Quirós, Ligia E.
2016-04-01
In this work, we study the impact of different approaches to incorporating faults in a seismic hazard assessment. Firstly, we consider two different methods to distribute the seismicity of the study area between faults and area sources, based on magnitude partitioning and on moment rate distribution. We use two recurrence models to characterize fault activity: the characteristic earthquake model and the modified Gutenberg-Richter exponential frequency-magnitude distribution. An application of the work is developed for the region of Murcia (southeastern Spain), due to the availability of fault data and because it is one of the areas of Spain with the highest seismic hazard. The parameters used to model fault sources are derived from paleoseismological and field studies obtained from the literature and online repositories. Additionally, for some significant faults only, geodetically derived slip rates are used to compute recurrence periods. The results of all the seismic hazard computations carried out using different models and data are represented in maps of expected peak ground acceleration for a return period of 475 years. Maps of coefficients of variation are presented to constrain the variability of the end results with respect to different input models and values. Additionally, the different hazard maps obtained in this study are compared with the seismic hazard maps obtained in previous work for the entire Spanish territory and, more specifically, for the region of Murcia. This work is developed in the context of the MERISUR project (ref. CGL2013-40492-R), with funding from the Spanish Ministry of Economy and Competitiveness.
NASA Astrophysics Data System (ADS)
Chan, C. H.; Wang, Y.; Thant, M.; Maung Maung, P.; Sieh, K.
2015-12-01
We have constructed an earthquake and fault database, conducted a series of ground-shaking scenarios, and proposed seismic hazard maps for all of Myanmar and hazard curves for selected cities. Our earthquake database integrates the ISC, ISC-GEM and global ANSS Comprehensive Catalogues, and includes harmonized magnitude scales without duplicate events. Our active fault database includes active fault data from previous studies. Using the parameters from these updated databases (i.e., the Gutenberg-Richter relationship, slip rate, maximum magnitude and the elapsed time since the last events), we have determined the earthquake recurrence models of seismogenic sources. To evaluate ground-shaking behaviour in different tectonic regimes, we conducted a series of tests by matching the modelled ground motions to the felt intensities of earthquakes. Through the case of the 1975 Bagan earthquake, we determined that the ground motion prediction equations (GMPEs) of Atkinson and Moore (2003) best fit the behaviour of subduction events. Similarly, the 2011 Tarlay and 2012 Thabeikkyin events suggested that the GMPEs of Akkar and Cagnan (2010) fit crustal earthquakes best. We thus incorporated the best-fitting GMPEs and site conditions based on Vs30 (the average shear velocity down to 30 m depth), derived from analysis of topographic slope and microtremor array measurements, to assess seismic hazard. The hazard is highest in regions close to the Sagaing Fault and along the western coast of Myanmar, since seismic sources there produce earthquakes at short intervals and/or their last events occurred long ago. The hazard curves for the cities of Bago, Mandalay, Sagaing, Taungoo and Yangon show higher hazards for sites close to an active fault or with a low Vs30, e.g., downtown Sagaing and the Shwemawdaw Pagoda in Bago.
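A hazard curve of the kind computed above for selected cities combines scenario rates with a ground-motion model. The sketch below assumes a simple lognormal ground-motion distribution with made-up medians and rates; it is not the Myanmar model or any specific GMPE.

```python
import numpy as np
from scipy.stats import norm

def exceedance_rate(a, rates, ln_medians, sigma_ln):
    """Annual rate of exceeding ground acceleration a, summed over sources,
    assuming lognormal ground-motion variability about each median."""
    z = (np.log(a) - ln_medians) / sigma_ln
    return float(norm.sf(z) @ rates)

# two hypothetical sources: frequent/moderate and rare/strong shaking
rates      = np.array([0.1, 0.01])             # events per year (assumed)
ln_medians = np.log(np.array([0.05, 0.30]))    # median PGA in g (assumed)

lam = exceedance_rate(0.1, rates, ln_medians, sigma_ln=0.6)
p50 = 1.0 - np.exp(-lam * 50.0)                # Poissonian 50-year probability
```

Evaluating `exceedance_rate` over a grid of accelerations traces out the hazard curve for a site.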
Using the RBFN model and GIS technique to assess wind erosion hazards of Inner Mongolia, China
NASA Astrophysics Data System (ADS)
Shi, Huading; Liu, Jiyuan; Zhuang, Dafang; Hu, Yunfeng
2006-08-01
Soil wind erosion is the primary process and the main driving force for land desertification and sand-dust storms in arid and semi-arid areas of Northern China, and many researchers have paid attention to this issue. This paper selects the Inner Mongolia autonomous region as the research area, quantifies the various indicators affecting soil wind erosion, uses GIS technology to extract the spatial data, and constructs an RBFN (Radial Basis Function Network) model for assessing wind erosion hazard. After training on sample data for the different levels of wind erosion hazard, we obtain the model parameters and then assess the hazard. The result shows that wind erosion hazard is very severe in the southern parts of Inner Mongolia, varies from moderate to severe in counties of the middle regions, and is slight in the east. Comparison with other studies shows that the result conforms with actual conditions, demonstrating the reasonableness and applicability of the RBFN model.
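An RBF network of the kind used here maps input indicators through Gaussian hidden units to a linear output layer fitted by least squares. The toy indicator data, centers and width below are assumptions for illustration, not the study's data.

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian hidden-layer activations for each input row."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_rbfn(X, y, centers, width):
    """Least-squares fit of the linear output weights."""
    w, *_ = np.linalg.lstsq(rbf_design(X, centers, width), y, rcond=None)
    return w

def predict_rbfn(X, centers, width, w):
    return rbf_design(X, centers, width) @ w

# toy hazard scores over two standardized indicators (illustrative only)
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 2.0]])
y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
w = fit_rbfn(X, y, centers=X, width=1.0)
```

With centers placed at the training points the network interpolates the training targets exactly; in practice fewer centers (e.g. from clustering) give smoother hazard surfaces.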
1993-03-01
…consistent with the initial concentration at the source. Input data for the Desert Tortoise trials; chemical released: anhydrous ammonia… phosgene (COCl2), anhydrous ammonia (NH3), chlorine (Cl2), sulfur dioxide (SO2), hydrogen sulfide (H2S), fluorine (F2), and hydrogen fluoride (HF)… D.N., Yohn, J.F., Koopman, R.P. and Brown, T.C., "Conduct of Anhydrous Hydrofluoric Acid Spill Experiments," Proc. Int. Conf. on Vapor Cloud Modeling
Jalkanen, Ville; Andersson, Britt M; Bergh, Anders; Ljungberg, Börje; Lindahl, Olof A
2006-12-01
Prostate cancer is the most common type of cancer in men in Europe and the US. The methods used to detect prostate cancer are still imprecise, and new techniques are needed. A piezoelectric transducer element in a feedback system is set to vibrate at its resonance frequency. When the sensor element contacts an object, a change in the resonance frequency is observed, and this feature has been utilized in sensor systems to describe the physical properties of different objects. For medical applications it has been used to measure stiffness variations due to various patho-physiological conditions. In this study, the sensor's ability to measure the stiffness of prostate tissue, from two excised prostatectomy specimens in vitro, was analysed. The specimens were also subjected to morphometric measurements, and the sensor parameter was compared with the morphology of the tissue using linear regression. In the probe impression interval 0.5-1.7 mm, the maximum R² ≥ 0.60 (p < 0.05, n = 75). An increase in the proportion of prostate stones (corpora amylacea), stroma, or cancer in relation to healthy glandular tissue increased the measured stiffness. Cancer and stroma had the greatest effect on the measured stiffness. The deeper the sensor was pressed, the larger (and deeper) the volume it sensed. Tissue sections deeper in the tissue were assigned a lower mathematical weighting than sections closer to the sensor probe. It is concluded that cancer increases the measured stiffness compared with healthy glandular tissue, but areas with predominantly stroma or many stones could be more difficult to distinguish from cancer.
Advances in National Capabilities for Consequence Assessment Modeling of Airborne Hazards
Nasstrom, J; Sugiyama, G; Foster, K; Larsen, S; Kosovic, B; Eme, B; Walker, H; Goldstein, P; Lundquist, J; Pobanz, B; Fulton, J
2007-11-26
This paper describes ongoing advancement of airborne hazard modeling capabilities in support of multiple agencies through the National Atmospheric Release Advisory Center (NARAC) and the Interagency Modeling and Atmospheric Assessment Center (IMAAC). A suite of software tools developed by Lawrence Livermore National Laboratory (LLNL) and collaborating organizations includes simple stand-alone, local-scale plume modeling tools for end users' computers; Web- and Internet-based software to access advanced 3-D flow and atmospheric dispersion modeling tools and expert analysis from the national center at LLNL; and state-of-the-science high-resolution urban models and event reconstruction capabilities.
Patterns of Risk Using an Integrated Spatial Multi-Hazard Model (PRISM Model)
Multi-hazard risk assessment has long centered on small scale needs, whereby a single community or group of communities’ exposures are assessed to determine potential mitigation strategies. While this approach has advanced the understanding of hazard interactions, it is li...
NASA Astrophysics Data System (ADS)
Lu, X.; Gridin, S.; Williams, R. T.; Mayhugh, M. R.; Gektin, A.; Syntfeld-Kazuch, A.; Swiderski, L.; Moszynski, M.
2017-01-01
Relatively recent experiments on the scintillation response of CsI:Tl have found that there are three main decay times of about 730 ns, 3 μs, and 16 μs, i.e., one more principal decay component than had been previously reported; that the pulse shape depends on gamma-ray energy; and that the proportionality curves of each decay component are different, with the energy-dependent light yield of the 16-μs component appearing to be anticorrelated with that of the 0.73-μs component at room temperature. These observations can be explained by the described model of carrier transport and recombination in a particle track. This model takes into account processes of hot and thermalized carrier diffusion, electric-field transport, trapping, nonlinear quenching, and radiative recombination. With one parameter set, the model reproduces multiple observables of CsI:Tl scintillation response, including the pulse shape with rise and three decay components, its energy dependence, the approximate proportionality, and the main trends in proportionality of different decay components. The model offers insights on the spatial and temporal distributions of carriers and their reactions in the track.
NASA Astrophysics Data System (ADS)
Wei, Jingwen; Dong, Guangzhong; Chen, Zonghai
2017-10-01
With the rapid development of battery-powered electric vehicles, the lithium-ion battery plays a critical role in the reliability of the vehicle system. In order to provide timely management and protection for battery systems, it is necessary to develop a reliable battery model and accurate battery parameter estimation to describe battery dynamic behavior. This paper therefore focuses on an on-board adaptive model for state-of-charge (SOC) estimation of lithium-ion batteries. Firstly, a first-order equivalent-circuit battery model is employed to describe the battery's dynamic characteristics. Then, the recursive least squares algorithm and an off-line identification method are used to provide good initial values of the model parameters, to ensure filter stability and reduce the convergence time. Thirdly, an extended Kalman filter (EKF) is applied to estimate the battery SOC and model parameters on-line. Because the EKF is essentially a first-order Taylor approximation of the battery model and therefore contains inevitable model errors, a proportional-integral-based error adjustment technique is employed to improve the performance of the EKF and correct the model parameters. Finally, experimental results on lithium-ion batteries indicate that the proposed EKF with proportional-integral-based error adjustment provides a robust and accurate battery model and on-line parameter estimation.
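The recursive least squares step mentioned above for initializing the model parameters can be sketched generically. The regression data below are synthetic, not a battery dataset; the recursion itself is the standard forgetting-factor RLS update.

```python
import numpy as np

def rls(Phi, y, lam=1.0, delta=1000.0):
    """Recursive least squares with forgetting factor lam:
    updates the parameter estimate one sample at a time."""
    n = Phi.shape[1]
    theta = np.zeros(n)
    P = delta * np.eye(n)                        # large initial covariance
    for phi, yk in zip(Phi, y):
        K = P @ phi / (lam + phi @ P @ phi)      # gain
        theta = theta + K * (yk - phi @ theta)   # innovation update
        P = (P - np.outer(K, phi @ P)) / lam
    return theta

rng = np.random.default_rng(0)
Phi = rng.normal(size=(200, 2))                  # synthetic regressors
theta_true = np.array([1.5, -0.7])
y = Phi @ theta_true                             # noise-free for illustration
theta_hat = rls(Phi, y)
```

For a first-order equivalent-circuit model the regressor vector would be built from sampled voltages and currents; a forgetting factor slightly below 1 tracks slowly drifting parameters.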
A probabilistic tornado wind hazard model for the continental United States
Hossain, Q; Kimball, J; Mensing, R; Savy, J
1999-04-19
A probabilistic tornado wind hazard model for the continental United States (CONUS) is described. The model incorporates both aleatory (random) and epistemic uncertainties associated with quantifying the tornado wind hazard parameters. The temporal occurrences of tornadoes within CONUS are assumed to follow a Poisson process. A spatial distribution of tornado touchdown locations is developed empirically based on the observed historical events within CONUS. The hazard model is an areal probability model that takes into consideration the size and orientation of the facility, the length and width of the tornado damage area (idealized as a rectangle and dependent on the tornado intensity scale), wind speed variation within the damage area, tornado intensity classification errors (i.e., errors in assigning a Fujita intensity scale based on surveyed damage), and the tornado path direction. Epistemic uncertainties in describing the distributions of the aleatory variables are accounted for by using more than one distribution model to describe aleatory variations. The epistemic uncertainties are based on inputs from a panel of experts. A computer program, TORNADO, has been developed incorporating this model; features of this program are also presented.
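Under the Poisson-occurrence and uniform-touchdown assumptions described above, the simplest point-target strike probability reduces to a one-line relation. The numbers below are illustrative assumptions, not the report's values, and the full model additionally accounts for facility geometry, intensity classes and path direction.

```python
import numpy as np

def strike_prob(rate_per_year, damage_area_km2, region_area_km2, years=1.0):
    """P(at least one tornado damage rectangle covers a point target),
    for Poisson occurrences uniformly distributed over the region."""
    lam = rate_per_year * damage_area_km2 / region_area_km2 * years
    return 1.0 - np.exp(-lam)

# e.g. 20 tornadoes/yr in a 300,000 km^2 region, mean damage area 10 km^2
p_annual = strike_prob(20.0, 10.0, 3.0e5)
p_50yr   = strike_prob(20.0, 10.0, 3.0e5, years=50.0)
```

Summing the Poisson rate over intensity classes, each with its own damage-area distribution, recovers the class-resolved form of this calculation.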
Lee, Saro; Park, Inhye
2013-09-30
Subsidence of ground caused by underground mines poses hazards to human life and property. This study analyzed ground-subsidence hazard using factors that can affect ground subsidence and a decision tree approach in a geographic information system (GIS). The study area was Taebaek, Gangwon-do, Korea, where many abandoned underground coal mines exist. Spatial data, topography, geology, and various ground-engineering data for the subsidence area were collected and compiled in a database for mapping ground-subsidence hazard (GSH). The subsidence area was randomly split 50/50 for training and validation of the models. A data-mining classification technique was applied to the GSH mapping, and decision trees were constructed using the chi-squared automatic interaction detector (CHAID) and the quick, unbiased, and efficient statistical tree (QUEST) algorithms. A frequency ratio model was also applied to the GSH mapping for comparison with a probabilistic model. The resulting GSH maps were validated using area-under-the-curve (AUC) analysis with the subsidence-area data that had not been used for training the model. The highest accuracy was achieved by the decision tree model using the CHAID algorithm (94.01%), compared with the QUEST algorithm (90.37%) and the frequency ratio model (86.70%). These accuracies are higher than previously reported results for decision trees. Decision tree methods can therefore be used efficiently for GSH analysis and might be widely used for the prediction of various spatial events. Copyright © 2013. Published by Elsevier Ltd.
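The frequency ratio baseline and the AUC validation used above can be sketched in plain NumPy (growing CHAID/QUEST trees themselves is not shown). The rank-based AUC below assumes untied scores; the tiny arrays are illustrative, not the Taebaek data.

```python
import numpy as np

def frequency_ratio(classes, event):
    """FR per factor class: share of subsidence cells falling in the class
    divided by the share of all cells in the class."""
    n, ne = len(classes), event.sum()
    return {c: (event[classes == c].sum() / ne) / ((classes == c).sum() / n)
            for c in np.unique(classes)}

def auc(score, label):
    """Area under the ROC curve via the rank-sum statistic (untied scores)."""
    order = np.argsort(score)
    ranks = np.empty(len(score))
    ranks[order] = np.arange(1, len(score) + 1)
    npos = int(label.sum())
    nneg = len(label) - npos
    return (ranks[label == 1].sum() - npos * (npos + 1) / 2) / (npos * nneg)
```

Summing FR values over the factor layers at each cell gives a hazard index, which `auc` then scores against the held-out subsidence cells.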
New Elements To Consider When Modeling the Hazards Associated with Botulinum Neurotoxin in Food
Ihekwaba, Adaoha E. C.; Mura, Ivan; Malakar, Pradeep K.; Walshaw, John; Peck, Michael W.; Barker, G. C.
2015-01-01
Botulinum neurotoxins (BoNTs) produced by the anaerobic bacterium Clostridium botulinum are the most potent biological substances known to mankind. BoNTs are the agents responsible for botulism, a rare condition affecting the neuromuscular junction and causing a spectrum of diseases ranging from mild cranial nerve palsies to acute respiratory failure and death. BoNTs are a potential biowarfare threat and a public health hazard, since outbreaks of foodborne botulism are caused by the ingestion of preformed BoNTs in food. Currently, mathematical models relating to the hazards associated with C. botulinum, which are largely empirical, make major contributions to botulinum risk assessment. Evaluated using statistical techniques, these models simulate the response of the bacterium to environmental conditions. Though empirical models have been successfully incorporated into risk assessments to support food safety decision making, this process includes significant uncertainties so that relevant decision making is frequently conservative and inflexible. Progression involves encoding into the models cellular processes at a molecular level, especially the details of the genetic and molecular machinery. This addition drives the connection between biological mechanisms and botulism risk assessment and hazard management strategies. This review brings together elements currently described in the literature that will be useful in building quantitative models of C. botulinum neurotoxin production. Subsequently, it outlines how the established form of modeling could be extended to include these new elements. Ultimately, this can offer further contributions to risk assessments to support food safety decision making. PMID:26350137
Deriving global flood hazard maps of fluvial floods through a physical model cascade
NASA Astrophysics Data System (ADS)
Pappenberger, F.; Dutra, E.; Wetterhall, F.; Cloke, H.
2012-05-01
Global flood hazard maps can be used in the assessment of flood risk in a number of different applications, including (re)insurance and large scale flood preparedness. Such global hazard maps can be generated using large scale physically based models of rainfall-runoff and river routing, when used in conjunction with a number of post-processing methods. In this study, the European Centre for Medium Range Weather Forecasts (ECMWF) land surface model is coupled to ERA-Interim reanalysis meteorological forcing data, and resultant runoff is passed to a river routing algorithm which simulates floodplains and flood flow across the global land area. The global hazard map is based on a 30 yr (1979-2010) simulation period. A Gumbel distribution is fitted to the annual maxima flows to derive a number of flood return periods. The return periods are calculated initially for a 25 × 25 km grid, which is then reprojected onto a 1 × 1 km grid to derive maps of higher resolution and estimate flooded fractional area for the individual 25 × 25 km cells. Several global and regional maps of flood return periods ranging from 2 to 500 yr are presented. The results compare reasonably to a benchmark data set of global flood hazard. The developed methodology can be applied to other datasets on a global or regional scale.
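The Gumbel step described above, fitting annual maxima and reading off return levels, can be sketched with SciPy. The synthetic 30-year series below stands in for the model's simulated annual maxima; the loc/scale values are assumptions.

```python
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(1)
# synthetic annual maximum flows (m^3/s), standing in for model output
annual_maxima = gumbel_r.rvs(loc=1000.0, scale=200.0, size=30, random_state=rng)

loc, scale = gumbel_r.fit(annual_maxima)       # fit the 30-yr series

def return_level(T, loc, scale):
    """Flow exceeded on average once every T years (annual-maxima convention)."""
    return gumbel_r.ppf(1.0 - 1.0 / T, loc=loc, scale=scale)

q20, q500 = return_level(20.0, loc, scale), return_level(500.0, loc, scale)
```

Applying `return_level` cell by cell over the routed-runoff grid yields the return-period maps described in the abstract.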
NASA Astrophysics Data System (ADS)
Chepurna, Tetiana B.; Kuzmenko, Eduard D.; Chepurnyj, Igor V.
2017-06-01
This article addresses the geological issue of space-time regional prediction of mudflow hazard. A methodology for the space-time prediction of mudflow hazard based on a GIS predictive model has been developed. Using GIS technologies, a relevant and representative set of significantly influential spatial and temporal factors, suited to regional prediction of mudflow hazard, was selected. Geological, geomorphological, technological, climatic, and landscape factors have been selected as spatial mudflow factors. The spatial analysis is based on detecting regular connections between spatial factor characteristics and the spatial distribution of mudflow sites. The function of a standard complex spatial index (SCSI) of the probability of mudflow site distribution has been calculated. The temporal, long-term prediction of mudflow activity was based on the hypothesis of the regular reiteration of natural processes. Heliophysical, seismic, meteorological, and hydrogeological factors have been selected as temporal mudflow factors. The function of a complex index of long-standing mudflow activity (CIMA) has been calculated. A prognostic geoinformation model of mudflow hazard up to 2020, the year of the next peak of mudflow activity, has been created. Mudflow risks have been calculated, and a cartogram of mudflow risk assessment within administrative-territorial units has been produced for 2020.
Critical load analysis in hazard assessment of metals using a Unit World Model.
Gandhi, Nilima; Bhavsar, Satyendra P; Diamond, Miriam L
2011-09-01
A Unit World approach has been used extensively to rank chemicals for their hazards and to understand differences in chemical behavior. Whereas the fate and effects of an organic chemical in a Unit World Model (UWM) analysis vary systematically according to one variable (fraction of organic carbon), and the chemicals have a singular ranking regardless of environmental characteristics, metals can change their hazard ranking according to freshwater chemistry, notably pH and dissolved organic carbon (DOC). Consequently, developing a UWM approach for metals requires selecting a series of representative freshwater chemistries, based on an understanding of the sensitivity of model results to this chemistry. Here we analyze results from a UWM for metals with the goal of informing the selection of appropriate freshwater chemistries for a UWM. The UWM loosely couples the biotic ligand model (BLM) to a geochemical speciation model (Windermere Humic Adsorption Model [WHAM]) and then to the multi-species fate transport-speciation (Transpec) model. The UWM is applied to estimate the critical load (CL) of cationic metals Cd, Cu, Ni, Pb, and Zn, using three lake chemistries that vary in trophic status, pH, and other parameters. The model results indicated a difference of four orders of magnitude in particle-to-total dissolved partitioning (Kd) that translated into minimal differences in fate because of the short water residence time used. However, a maximum 300-fold difference was calculated in Cu toxicity among the three chemistries and three aquatic organisms. Critical loads were lowest (greatest hazard) in the oligotrophic water chemistry and highest (least hazard) in the eutrophic water chemistry, despite the highest fraction of free metal ion as a function of total metal occurring in the mesotrophic system, where toxicity was ameliorated by competing cations. Water hardness, DOC, and pH had the greatest influence on CL, because of the influence of these factors on aquatic
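The particle-to-dissolved partitioning (Kd) discussed above follows the standard equilibrium relation. The sketch below shows only that generic relation, with assumed Kd and suspended-solids values spanning the reported four orders of magnitude; it is not the coupled WHAM/BLM/Transpec calculation.

```python
def fraction_dissolved(kd_l_per_kg, tss_mg_per_l):
    """Dissolved fraction of total metal for partition coefficient Kd (L/kg)
    and suspended solids TSS (mg/L); the 1e-6 factor converts mg/L to kg/L."""
    return 1.0 / (1.0 + kd_l_per_kg * tss_mg_per_l * 1e-6)

# an assumed four-order-of-magnitude spread in Kd at fixed TSS
f_low  = fraction_dissolved(1e3, tss_mg_per_l=10.0)   # low-Kd chemistry
f_high = fraction_dissolved(1e7, tss_mg_per_l=10.0)   # high-Kd chemistry
```

Even this simple relation shows why a large Kd spread matters little when residence times are short: the dissolved pool, not the settled particles, dominates exposure.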
Integrating fault and seismological data into a probabilistic seismic hazard model for Italy.
NASA Astrophysics Data System (ADS)
Valentini, Alessandro; Visini, Francesco; Pace, Bruno
2017-04-01
We present the results of a new probabilistic seismic hazard analysis (PSHA) for Italy based on active-fault and seismological data. Combining seismic hazard from active faults with distributed seismic sources (where there are no data on active faults) is the backbone of this work. No single best procedure has yet emerged; currently adopted approaches combine active faults and background sources by applying a threshold magnitude, generally between 5.5 and 7, above which seismicity is modelled by faults and below which it is modelled by distributed or area sources. In our PSHA we (i) apply a new method for the treatment of geological data on major active faults and (ii) propose a new approach for combining these data with historical seismicity to evaluate the PSHA for Italy. Assuming that deformation is concentrated along the faults, we combine earthquake occurrence rates derived from the geometry and slip rates of the active faults with rates from the spatially smoothed earthquake sources. In the vicinity of an active fault, the smoothed seismic activity is gradually reduced by a fault-size-driven factor. Although the range and gross spatial distribution of expected accelerations obtained in our work are comparable to those obtained with methods based on seismic catalogues and classical zonation models, the main difference lies in the detailed spatial pattern of our PSHA model: our model is characterized by spots of higher hazard around mapped active faults, whereas previous models yield expected accelerations distributed almost uniformly over large regions. Finally, we investigate the impact on the hazard results of the earthquake rates derived from two magnitude-frequency distribution (MFD) models for faults, and the contribution of faults versus distributed seismic activity.
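The fault-size-driven reduction of smoothed seismicity described above can be sketched as follows. This is a minimal illustration, not the authors' actual formulation: the linear taper and the choice of influence radius (half the fault length) are assumptions made for the example.

```python
import numpy as np

def reduce_smoothed_rates(grid_rates, dist_to_fault_km, fault_length_km, c=0.5):
    """Taper spatially smoothed seismicity rates near a mapped fault.

    Within a hypothetical fault-size-driven radius (c * fault length),
    the smoothed background rate is scaled down linearly so that the
    fault source carries the deformation instead.
    """
    radius = c * fault_length_km                   # fault-size-driven influence radius
    factor = np.clip(dist_to_fault_km / radius, 0.0, 1.0)
    return grid_rates * factor

# grid cells at 0, 10 and 40 km from a 40-km-long fault (radius = 20 km)
rates = np.array([1.0, 1.0, 1.0])
d = np.array([0.0, 10.0, 40.0])
print(reduce_smoothed_rates(rates, d, fault_length_km=40.0))  # [0.  0.5 1. ]
```

Cells on the fault lose their smoothed rate entirely, cells beyond the influence radius keep it unchanged.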
NASA Astrophysics Data System (ADS)
Komjathy, Attila; Yang, Yu-Ming; Meng, Xing; Verkhoglyadova, Olga; Mannucci, Anthony J.; Langley, Richard B.
2016-07-01
Natural hazards including earthquakes, volcanic eruptions, and tsunamis have been significant threats to humans throughout recorded history. Global navigation satellite systems (GNSS; including the Global Positioning System (GPS)) receivers have become primary sensors to measure signatures associated with natural hazards. These signatures typically include GPS-derived seismic deformation measurements, coseismic vertical displacements, and real-time GPS-derived ocean buoy positioning estimates. Another way to use GPS observables is to compute the ionospheric total electron content (TEC) to measure, model, and monitor postseismic ionospheric disturbances caused by, e.g., earthquakes, volcanic eruptions, and tsunamis. In this paper, we review research progress at the Jet Propulsion Laboratory and elsewhere using examples of ground-based and spaceborne observation of natural hazards that generated TEC perturbations. We present results for state-of-the-art imaging using ground-based and spaceborne ionospheric measurements and coupled atmosphere-ionosphere modeling of ionospheric TEC perturbations. We also report advancements and chart future directions in modeling and inversion techniques to estimate tsunami wave heights and ground surface displacements using TEC measurements and error estimates. Our initial retrievals strongly suggest that both ground-based and spaceborne GPS remote sensing techniques could play a critical role in detection and imaging of the upper atmosphere signatures of natural hazards including earthquakes and tsunamis. We found that combining ground-based and spaceborne measurements may be crucial in estimating critical geophysical parameters such as tsunami wave heights and ground surface displacements using TEC observations. The GNSS-based remote sensing of natural-hazard-induced ionospheric disturbances could be applied to and used in operational tsunami and earthquake early warning systems.
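The TEC values discussed above are commonly derived from dual-frequency GPS observables. A minimal sketch of the standard pseudorange combination follows; it ignores inter-frequency biases and noise, which real processing chains must calibrate out, so it is an illustration of the dispersive-delay relation rather than an operational retrieval.

```python
# GPS carrier frequencies (Hz)
F1 = 1575.42e6  # L1
F2 = 1227.60e6  # L2

def slant_tec_tecu(p1_m, p2_m):
    """Slant TEC in TECU (1 TECU = 1e16 electrons/m^2) from L1/L2
    pseudoranges in metres, via the textbook dispersive-delay relation.
    Differential code biases and multipath are neglected here.
    """
    k = (F1**2 * F2**2) / (40.3 * (F1**2 - F2**2))
    return k * (p2_m - p1_m) / 1e16

# a 3 m L2-minus-L1 range difference maps to roughly 28.6 TECU
print(round(slant_tec_tecu(20_000_000.0, 20_000_003.0), 1))
```

Because the ionosphere is dispersive, the difference between the two pseudoranges isolates the electron-content term common to both signals.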
Coincidence Proportional Counter
Manley, J H
1950-11-21
A coincidence proportional counter having a plurality of collecting electrodes so disposed as to measure the range or energy spectrum of an ionizing particle-emitting source such as an alpha source, is disclosed.
NASA Astrophysics Data System (ADS)
Chartier, Thomas; Scotti, Oona; Boiselet, Aurelien; Lyon-Caen, Hélène
2016-04-01
Including faults in probabilistic seismic hazard assessment tends to increase the degree of uncertainty in the results due to the intrinsically uncertain nature of fault data. This is especially the case in the low-to-moderate seismicity regions of Europe, where slow-slipping faults are difficult to characterize. In order to better understand the key parameters that control the uncertainty in fault-related hazard computations, we propose to build an analytic tool that provides a clear link between the different components of the fault-related hazard computations and their impact on the results. This will allow identifying the important parameters that need to be better constrained in order to reduce the resulting uncertainty in hazard, and will also provide a more hazard-oriented strategy for collecting relevant fault parameters in the field. The tool is illustrated through the example of the West Corinth rift fault models. Recent work performed in the gulf has shown the complexity of the normal faulting system that accommodates the extensional deformation of the rift. A logic-tree approach is proposed to account for this complexity and for the multiplicity of scientifically defendable interpretations. The nodes of the logic tree represent the different options that could be considered at each step of the fault-related seismic hazard computation. The first nodes represent the uncertainty in the geometries of the faults and their slip rates, which can derive from different data and methodologies. The subsequent node explores, for a given geometry and slip rate of the faults, the different earthquake rupture scenarios that may occur in the complex network of faults. The idea is to allow the possibility of several fault segments breaking together in a single rupture scenario. To build these multiple-fault-segment scenarios, two approaches are considered: one based on simple rules (e.g. minimum distance between faults) and a second one that relies on physically
Jones, Jeanne M.; Ng, Peter; Wood, Nathan J.
2014-01-01
Recent disasters such as the 2011 Tohoku, Japan, earthquake and tsunami; the 2013 Colorado floods; and the 2014 Oso, Washington, mudslide have raised awareness of catastrophic, sudden-onset hazards that arrive within minutes of the events that trigger them, such as local earthquakes or landslides. Due to the limited amount of time between generation and arrival of sudden-onset hazards, evacuations are typically self-initiated, on foot, and across the landscape (Wood and Schmidtlein, 2012). Although evacuation to naturally occurring high ground may be feasible in some vulnerable communities, evacuation modeling has demonstrated that other communities may require vertical-evacuation structures within a hazard zone, such as berms or buildings, if at-risk individuals are to survive some types of sudden-onset hazards (Wood and Schmidtlein, 2013). Researchers use both static least-cost-distance (LCD) and dynamic agent-based models to assess the pedestrian evacuation potential of vulnerable communities. Although both types of models help to understand the evacuation landscape, LCD models provide a more general overview that is independent of population distributions, which may be difficult to quantify given the dynamic spatial and temporal nature of populations (Wood and Schmidtlein, 2012). Recent LCD efforts related to local tsunami threats have focused on an anisotropic (directionally dependent) path distance modeling approach that incorporates travel directionality, multiple travel speed assumptions, and cost surfaces that reflect variations in slope and land cover (Wood and Schmidtlein, 2012, 2013). The Pedestrian Evacuation Analyst software implements this anisotropic path-distance approach for pedestrian evacuation from sudden-onset hazards, with a particular focus at this time on local tsunami threats. The model estimates evacuation potential based on elevation, direction of movement, land cover, and travel speed and creates a map showing travel times to safety (a
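The least-cost-distance idea behind such pedestrian evacuation modeling can be illustrated with a simplified grid Dijkstra. The cell size, walking speeds, and 4-connected movement below are assumptions made for the example; the actual Pedestrian Evacuation Analyst model additionally accounts for slope and directionality of travel.

```python
import heapq

def travel_times(speed, safe_cells, cell_m=10.0):
    """Dijkstra travel-time map on a grid (simplified, isotropic sketch).

    `speed` is a 2D list of walking speeds (m/s) per cell, reflecting
    land cover; a full anisotropic model would also scale speed by
    slope and direction of movement.
    """
    rows, cols = len(speed), len(speed[0])
    INF = float("inf")
    t = [[INF] * cols for _ in range(rows)]
    pq = [(0.0, r, c) for r, c in safe_cells]   # safety zones are the sources
    for _, r, c in pq:
        t[r][c] = 0.0
    heapq.heapify(pq)
    while pq:
        d, r, c = heapq.heappop(pq)
        if d > t[r][c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cell_m / speed[nr][nc]  # seconds to cross the next cell
                if nd < t[nr][nc]:
                    t[nr][nc] = nd
                    heapq.heappush(pq, (nd, nr, nc))
    return t

# two land-cover classes: open ground (1 m/s) and dense brush (0.5 m/s)
speed = [[1.0, 1.0, 0.5],
         [1.0, 1.0, 0.5]]
t = travel_times(speed, safe_cells=[(0, 0)])
print(t[0][2])  # 10 m at 1 m/s, then 10 m at 0.5 m/s -> 30.0 s
```

The resulting map of travel times to safety is what gets compared against the hazard arrival time.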
NASA Astrophysics Data System (ADS)
Nagaoka, Tomoaki; Kunieda, Etsuo; Watanabe, Soichi
2008-12-01
The development of high-resolution anatomical voxel models of children is difficult given, inter alia, the ethical limitations on subjecting children to medical imaging. We instead used an existing voxel model of a Japanese adult and three-dimensional deformation to develop three voxel models that match the average body proportions of Japanese children at 3, 5 and 7 years old. The adult model was deformed to match the proportions of a child by using the measured dimensions of various body parts of children at 3, 5 and 7 years old and a free-form deformation technique. The three developed models represent average-size Japanese children of the respective ages. They consist of cubic voxels (2 mm on each side) and are segmented into 51 tissues and organs. We calculated the whole-body-averaged specific absorption rates (WBA-SARs) and tissue-averaged SARs for the child models for exposures to plane waves from 30 MHz to 3 GHz; these results were then compared with those for scaled down adult models. We also determined the incident electric-field strength required to produce the exposure equivalent to the ICNIRP basic restriction for general public exposure, i.e., a WBA-SAR of 0.08 W kg-1.
Testa, Marcia A; Pettigrew, Mary L; Savoia, Elena
2014-01-01
County and state health departments are increasingly conducting hazard vulnerability and jurisdictional risk (HVJR) assessments for public health emergency preparedness and mitigation planning and evaluation to improve the public health disaster response; however, integration and adoption of these assessments into practice are still relatively rare. While the quantitative methods associated with complex analytic and measurement methods, causal inference, and decision theory are common in public health research, they have not been widely used in public health preparedness and mitigation planning. To address this gap, the Harvard School of Public Health PERLC's goal was to develop measurement, geospatial, and mechanistic models to aid public health practitioners in understanding the complexity of HVJR assessment and to determine the feasibility of using these methods for dynamic and predictive HVJR analyses. We used systematic reviews, causal inference theory, structural equation modeling (SEM), and multivariate statistical methods to develop the conceptual and mechanistic HVJR models. Geospatial mapping was used to inform the hypothetical mechanistic model by visually examining the variability and patterns associated with county-level demographic, social, economic, hazards, and resource data. A simulation algorithm was developed for testing the feasibility of using SEM estimation. The conceptual model identified the predictive latent variables used in public health HVJR tools (hazard, vulnerability, and resilience), the outcomes (human, physical, and economic losses), and the corresponding measurement subcomponents. This model was translated into a hypothetical mechanistic model to explore and evaluate causal and measurement pathways. To test the feasibility of SEM estimation, the mechanistic model path diagram was translated into linear equations and solved simultaneously using simulated data representing 192 counties. Measurement, geospatial, and mechanistic
Data Model for Multi Hazard Risk Assessment Spatial Support Decision System
NASA Astrophysics Data System (ADS)
Andrejchenko, Vera; Bakker, Wim; van Westen, Cees
2014-05-01
The goal of the CHANGES Spatial Decision Support System is to support end-users in making decisions related to risk reduction measures for areas at risk from multiple hydro-meteorological hazards. The crucial parts in the design of the system are the user requirements, the data model, the data storage and management, and the relationships between the objects in the system. The data model is implemented entirely with an open-source database management system with a spatial extension. The web application is implemented using open-source geospatial technologies, with PostGIS as the database, Python for scripting, and GeoServer and JavaScript libraries for visualization and the client-side user interface. The model can handle information from different study areas (currently, study areas in France, Romania, Italy and Poland are considered). Furthermore, the data model handles information about administrative units; projects accessible by different types of users; user-defined hazard types (floods, snow avalanches, debris flows, etc.); hazard intensity maps of different return periods; spatial probability maps; elements-at-risk maps (buildings, land parcels, linear features, etc.); and economic and population vulnerability information dependent on the hazard type and the type of element at risk, in the form of vulnerability curves. The system has an inbuilt database of vulnerability curves, but users can also add their own. The model also covers the management of combinations of different scenarios (e.g. related to climate change, land use change or population change) and alternatives (possible risk-reduction measures), as well as data structures for saving the calculated economic or population loss or exposure per element at risk, aggregating the loss and exposure using the administrative unit maps, and finally producing the risk maps. The risk data can be used for cost-benefit analysis (CBA) and spatial multi-criteria evaluation (SMCE). The
Earthquake Rate Models for Evolving Induced Seismicity Hazard in the Central and Eastern US
NASA Astrophysics Data System (ADS)
Llenos, A. L.; Ellsworth, W. L.; Michael, A. J.
2015-12-01
Injection-induced earthquake rates can vary rapidly in space and time, which presents significant challenges to traditional probabilistic seismic hazard assessment methodologies that are based on a time-independent model of mainshock occurrence. To help society cope with rapidly evolving seismicity, the USGS is developing one-year hazard models for areas of induced seismicity in the central and eastern US to forecast the shaking due to all earthquakes, including aftershocks, which are generally omitted from hazard assessments (Petersen et al., 2015). However, the spatial and temporal variability of the earthquake rates makes them difficult to forecast even on timescales as short as one year. An initial approach is to use the previous year's seismicity rate to forecast the next year's seismicity rate. However, in places such as northern Oklahoma the rates vary so rapidly over time that a simple linear extrapolation does not accurately forecast the future, even when the variability in the rates is modeled with simulations based on an Epidemic-Type Aftershock Sequence (ETAS) model (Ogata, JASA, 1988) to account for earthquake clustering. Instead of relying on a fixed time period for rate estimation, we explore another way to determine when the earthquake rate should be updated. This approach could also objectively identify new areas where the induced seismicity hazard model should be applied. We will estimate the background seismicity rate by optimizing a single set of ETAS aftershock triggering parameters across the most active induced seismicity zones -- Oklahoma, Guy-Greenbrier, the Raton Basin, and the Azle-Dallas-Fort Worth area -- with individual background rate parameters in each zone. The full seismicity rate, with uncertainties, can then be estimated using ETAS simulations, and changes in rate can be detected by applying change-point analysis in ETAS-transformed time with methods already developed for Poisson processes.
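The ETAS conditional intensity used in such rate modeling has a standard form: a background rate plus an Omori-law contribution from each past event, scaled by its magnitude. The parameter values below are illustrative assumptions, not values fitted by the authors.

```python
import math

def etas_rate(t, events, mu, K=0.02, alpha=1.0, c=0.01, p=1.2, m_c=3.0):
    """Temporal ETAS conditional intensity lambda(t).

    lambda(t) = mu + sum over past events (t_i, m_i) of
                K * exp(alpha * (m_i - m_c)) / (t - t_i + c)^p
    events: list of (time, magnitude) pairs; mu is the background rate.
    All parameter values here are hypothetical.
    """
    rate = mu
    for ti, mi in events:
        if ti < t:
            rate += K * math.exp(alpha * (mi - m_c)) / (t - ti + c) ** p
    return rate

# rate at day 2 after an M4.5 at day 0 and an M3.5 at day 1
events = [(0.0, 4.5), (1.0, 3.5)]
print(round(etas_rate(2.0, events, mu=0.1), 3))  # 0.171
```

Simulating the process forward with this intensity (thinning or branching simulation) yields the rate forecasts, with uncertainty, described in the abstract.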
A probabilistic seismic hazard model based on cellular automata and information theory
NASA Astrophysics Data System (ADS)
Jiménez, A.; Posadas, A. M.; Marfil, J. M.
2005-03-01
We aim to obtain a spatio-temporal model of earthquake occurrence based on information theory and cellular automata (CA). CA provide useful models for many investigations in the natural sciences; here they are used to establish temporal relations between seismic events occurring in neighbouring parts of the crust. The catalogue used is divided into time intervals and the region into cells, which are declared active or inactive by means of a certain energy release criterion (four criteria have been tested). This yields a pattern of active and inactive cells that evolves over time. A stochastic CA is constructed from the patterns to simulate their spatio-temporal evolution. The interaction between the cells is represented by the neighbourhood (2-D and 3-D models have been tried). The best model is chosen by maximizing the mutual information between the past and the future states. Finally, a probabilistic seismic hazard map is drawn up for the different energy releases. The method has been applied to the Iberian Peninsula catalogue from 1970 to 2001. In 2-D, the best neighbourhood was the Moore neighbourhood of radius 1; the 3-D von Neumann neighbourhood also yields hazard maps and takes the depth of the events into account. The Gutenberg-Richter law and Hurst analysis were applied to the data as a check of the catalogue. Our results are consistent with previous studies of both seismic hazard and stress conditions in the zone, and with the seismicity that occurred after 2001.
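The model-selection criterion above, mutual information between past and future cell states, can be computed directly from paired state sequences. This sketch assumes the states are already discretized (0 = inactive, 1 = active) and paired cell-by-cell across consecutive time steps.

```python
from collections import Counter
from math import log2

def mutual_information(past, future):
    """Mutual information (bits) between paired discrete state sequences.

    past, future: equal-length sequences of states for the same cells
    at consecutive time steps; I(X;Y) = sum p(x,y) log2 p(x,y)/(p(x)p(y)).
    """
    n = len(past)
    pxy = Counter(zip(past, future))
    px, py = Counter(past), Counter(future)
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * log2(p_joint * n * n / (px[x] * py[y]))
    return mi

# a perfectly predictive binary pattern carries 1 bit; an independent one 0
print(mutual_information([0, 1, 0, 1], [0, 1, 0, 1]))  # 1.0
print(mutual_information([0, 0, 1, 1], [0, 1, 0, 1]))  # 0.0
```

A CA rule whose predicted future states share more mutual information with the observed ones is preferred.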
Estimation of hazard potentials from ice avalanches using remote sensing and GIS-modelling
NASA Astrophysics Data System (ADS)
Salzmann, N.; Kaeaeb, A.; Huggel, C.; Allgoewer, B.; Haeberli, W.
2003-04-01
Ice avalanches occur when a large mass of ice breaks off from a steep glacier. Because of the usually short runout distance of ice avalanches, their hazard potential is generally restricted to densely populated or frequently visited high mountain areas. As a consequence of climatic change and intensified land use, however, the related hazard potential is presently increasing. Dealing with ice-avalanche hazards therefore requires a robust tool for the systematic, area-wide detection of potential ice avalanches, which had not previously been developed. To close this gap, a three-step downscaling approach was developed. The method chain is based on statistical parameters and on techniques of GIS modelling and remote sensing. The procedure allows a fast and systematic first-order mapping of potentially dangerous steep glaciers and their runout paths for an entire region (target map scale 1:10,000-1:25,000). The approach was validated in the Bernese Alps, Switzerland. The results agree well with specific hazard mapping carried out in the same area by other authors. Further improvements can be obtained by extending the method chain and with the more accurate base data (especially satellite data) that will become available in the near future.
NASA Astrophysics Data System (ADS)
Xiong, Liyang; Shi, Wenjia; Tang, Chao
2016-08-01
Adaptation is a ubiquitous feature in biological sensory and signaling networks. It has been suggested that adaptive systems may follow certain simple design principles across diverse organisms, cells and pathways. One class of networks that can achieve adaptation utilizes an incoherent feedforward control, in which two parallel signaling branches exert opposite but proportional effects on the output at steady state. In this paper, we generalize this adaptation mechanism by establishing a steady-state proportionality relationship among a subset of nodes in a network. Adaptation can be achieved by using any two nodes in the sub-network to respectively regulate the output node positively and negatively. We focus on enzyme networks and first identify basic regulation motifs consisting of two and three nodes that can be used to build small networks with proportional relationships. Larger proportional networks can then be constructed modularly similar to LEGOs. Our method provides a general framework to construct and analyze a class of proportional and/or adaptation networks with arbitrary size, flexibility and versatile functional features.
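A minimal incoherent feedforward loop of the kind described, where two branches exert opposite but proportional effects on the output, can be simulated directly. The first-order kinetics, rate constant, and division-based repression below are illustrative assumptions, not the enzyme-network equations of the paper.

```python
def simulate_ifflp(inputs, k=0.5, dt=0.01):
    """Euler simulation of a minimal incoherent feedforward loop.

    The input I activates the output directly and, through a slow
    intermediate B that tracks I, represses it (output O = I / B).
    At steady state B = I, so O returns to 1 after any step in I:
    the hallmark of adaptation via proportional opposing branches.
    """
    B, trace = inputs[0], []
    for I in inputs:
        B += k * (I - B) * dt    # slow branch integrates the input
        trace.append(I / B)      # fast activation / slow repression
    return trace

# step the input from 1 to 4: the output spikes, then adapts back to 1
steps = [1.0] * 1000 + [4.0] * 4000
out = simulate_ifflp(steps)
print(round(out[1000], 2), round(out[-1], 2))  # 3.94 1.0
```

The transient response carries the signal while the steady-state proportionality between I and B restores the baseline, which is the property the paper generalizes to larger networks.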
Seismic hazard assessment of Sub-Saharan Africa using geodetic strain rate models
NASA Astrophysics Data System (ADS)
Poggi, Valerio; Pagani, Marco; Weatherill, Graeme; Garcia, Julio; Durrheim, Raymond J.; Mavonga Tuluka, Georges
2016-04-01
The East African Rift System (EARS) is the major active tectonic feature of the Sub-Saharan Africa (SSA) region. Although the seismicity level of this divergent plate boundary can be described as moderate, several earthquakes have been reported in historical times causing a non-negligible level of damage, albeit mostly due to the high vulnerability of local buildings and structures. The formulation and enforcement of national seismic codes is therefore an essential future risk mitigation strategy. Nonetheless, a reliable risk assessment cannot be carried out without the calibration of an updated seismic hazard model for the region. Unfortunately, the major issue in assessing seismic hazard in Sub-Saharan Africa is the lack of the basic information needed to construct source and ground motion models. The historical earthquake record is largely incomplete, while the instrumental catalogue is complete down to a sufficient magnitude only for a relatively short time span. In addition, the mapping of seismogenically active faults is still an ongoing programme. Recent studies have identified major seismogenic lineaments, but there is a substantial lack of kinematic information for intermediate- to small-scale tectonic features, information that is essential for the proper calibration of earthquake recurrence models. To compensate for this lack of information, we experiment with the use of a strain rate model recently developed by Stamps et al. (2015), in the framework of an earthquake hazard and risk project along the EARS supported by USAID and jointly carried out by GEM and AfricaArray. We use the inferred geodetic strain rates to derive estimates of the total scalar moment release, subsequently used to constrain earthquake recurrence relationships for both area (distributed seismicity) and fault source models. The rates obtained indirectly from strain rates and those derived more classically from the available seismic catalogues are then compared and combined into a unique mixed earthquake recurrence model
Hofman, Abe D; Visser, Ingmar; Jansen, Brenda R J; van der Maas, Han L J
2015-01-01
We propose and test three statistical models for the analysis of children's responses to the balance scale task, a seminal task to study proportional reasoning. We use a latent class modelling approach to formulate a rule-based latent class model (RB LCM) following from a rule-based perspective on proportional reasoning and a new statistical model, the Weighted Sum Model, following from an information-integration approach. Moreover, a hybrid LCM using item covariates is proposed, combining aspects of both a rule-based and information-integration perspective. These models are applied to two different datasets, a standard paper-and-pencil test dataset (N = 779), and a dataset collected within an online learning environment that included direct feedback, time-pressure, and a reward system (N = 808). For the paper-and-pencil dataset the RB LCM resulted in the best fit, whereas for the online dataset the hybrid LCM provided the best fit. The standard paper-and-pencil dataset yielded more evidence for distinct solution rules than the online data set in which quantitative item characteristics are more prominent in determining responses. These results shed new light on the discussion on sequential rule-based and information-integration perspectives of cognitive development.
Goode, Colleen J; Preheim, Gayle J; Bonini, Susan; Case, Nancy K; VanderMeer, Jennifer; Iannelli, Gina
2016-01-01
This manuscript describes a collaborative, seamless program between a community college and a university college of nursing designed to increase the number of nurses prepared with a baccalaureate degree. The three-year Integrated Nursing Pathway provides community college students with a non-nursing associate degree, early introduction to nursing, and seamless progression through BSN education. The model includes dual admission and advising and is driven by the need for collaboration with community colleges, the need to increase the percentage of racial-ethnic minority students, the shortage of faculty, and employer preferences for BSN graduates.
Forecasting Marine Corps Enlisted Attrition Through Parametric Modeling
2009-03-01
85 pages. Subject terms: forecasting, attrition, Marine Corps NEAS losses, Gompertz model, survival analysis. The contents cover parametric proportional hazards models and Gompertz models (Gompertz hazard function, Gompertz cumulative hazard).
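The Gompertz model named in the contents has a standard closed form: the hazard grows (or decays) exponentially in time, and survival follows by integrating the hazard. The parameter values below are hypothetical, not estimates from the thesis.

```python
import math

def gompertz_hazard(t, a, b):
    """Gompertz hazard function h(t) = a * exp(b * t)."""
    return a * math.exp(b * t)

def gompertz_survival(t, a, b):
    """Gompertz survival S(t) = exp(-(a/b) * (exp(b * t) - 1)),
    obtained by integrating the hazard from 0 to t."""
    return math.exp(-(a / b) * (math.exp(b * t) - 1.0))

# with b > 0 the attrition hazard rises over time and survival decays
a, b = 0.01, 0.1          # hypothetical monthly-scale parameters
print(round(gompertz_hazard(12, a, b), 4))    # 0.0332
print(round(gompertz_survival(12, a, b), 4))  # 0.7929
```

Fitting a and b to observed attrition times (e.g. by maximum likelihood) is what turns this parametric form into a forecasting model.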
Using the Averaging-Based Factorization to Assess CyberShake Hazard Models
NASA Astrophysics Data System (ADS)
Wang, F.; Jordan, T. H.; Callaghan, S.; Graves, R. W.; Olsen, K. B.; Maechling, P. J.
2013-12-01
The CyberShake project of the Southern California Earthquake Center (SCEC) combines stochastic models of finite-fault ruptures with 3D ground motion simulations to compute seismic hazards at low frequencies (< 0.5 Hz) in Southern California. The first CyberShake hazard model (Graves et al., 2011) was based on the Graves & Pitarka (2004) rupture model (GP-04) and the Kohler et al. (2004) community velocity model (CVM-S). We have recently extended the CyberShake calculations to include the Graves & Pitarka (2010) rupture model (GP-10), which substantially increases the rupture complexity relative to GP-04, and the Shaw et al. (2011) community velocity model (CVM-H), which features sedimentary basin structures different from those of CVM-S. Here we apply the averaging-based factorization (ABF) technique of Wang & Jordan (2013) to compare the CyberShake models and assess their consistency with the hazards predicted by the Next Generation Attenuation (NGA) models (Power et al., 2008). ABF uses a hierarchical averaging scheme to separate the shaking intensities for large ensembles of earthquakes into relative (dimensionless) excitation fields representing site, path, directivity, and source-complexity effects, and it provides quantitative, map-based comparisons between models with completely different formulations. The CyberShake directivity effects are generally larger than predicted by the Spudich & Chiou (2008) NGA directivity factor, but those calculated from the GP-10 sources are smaller than those of GP-04, owing to the greater incoherence of the wavefields from the more complex rupture models. Substituting GP-10 for GP-04 reduces the CyberShake-NGA directivity-effect discrepancy by a factor of two, from +36% to +18%. The CyberShake basin effects are generally larger than those from the three NGA models that provide basin-effect factors. However, the basin excitations calculated from CVM-H are smaller than those from CVM-S, and they show a stronger frequency dependence, primarily because
Proportional Borda allocations.
Darmann, Andreas; Klamler, Christian
2016-01-01
In this paper we study the allocation of indivisible items among a group of agents, a problem which has received increased attention in recent years, especially in areas such as computer science and economics. A major fairness property in the fair division literature is proportionality, which is satisfied whenever each of the n agents receives at least a 1/n share of the value attached to the whole set of items. To simplify the determination of values of (sets of) items from ordinal rankings of the items, we use the Borda rule, a concept used extensively and well-known in voting theory. Although, in general, proportionality cannot be guaranteed, we show that, under certain assumptions, proportional allocations of indivisible items are possible and finding such allocations is computationally easy.
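The Borda valuation and the proportionality check described above are straightforward to compute. This sketch assumes the common convention that with m items the top-ranked item scores m and the last scores 1 (so the whole set is worth m(m+1)/2 to every agent); the paper's exact conventions may differ.

```python
def borda_scores(ranking):
    """Borda score of each item from one agent's ranking (best first):
    with m items, the top item scores m and the last scores 1."""
    m = len(ranking)
    return {item: m - i for i, item in enumerate(ranking)}

def is_proportional(rankings, allocation):
    """Check Borda proportionality: each of the n agents must receive
    at least 1/n of the Borda value of the full item set."""
    n = len(rankings)
    m = len(rankings[0])
    threshold = m * (m + 1) / 2 / n          # 1/n of the total Borda value
    for agent, items in enumerate(allocation):
        scores = borda_scores(rankings[agent])
        if sum(scores[i] for i in items) < threshold:
            return False
    return True

rankings = [["a", "b", "c", "d"], ["b", "a", "d", "c"]]
alloc = [["a", "c"], ["b", "d"]]             # each agent gets 4 + 2 = 6 >= 10/2
print(is_proportional(rankings, alloc))      # True
```

Giving each agent her top item plus a lower-ranked one already clears the 1/n threshold here, illustrating why proportional Borda allocations often exist.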
A new approach for deriving Flood hazard maps from SAR data and global hydrodynamic models
NASA Astrophysics Data System (ADS)
Matgen, P.; Hostache, R.; Chini, M.; Giustarini, L.; Pappenberger, F.; Bally, P.
2014-12-01
With flood consequences likely to amplify because of the growing population and the ongoing accumulation of assets in flood-prone areas, global flood hazard and risk maps are needed to improve flood preparedness at large scale. At the same time, with the rapidly growing archives of SAR images of floods, there is high potential in using these images for global and regional flood management. In this framework, an original method that integrates global flood inundation modelling and microwave remote sensing is presented. It takes advantage of the combination of the time and space continuity of a global inundation model with the high spatial resolution of satellite observations. The availability of model simulations over a long time period offers opportunities for estimating flood non-exceedance probabilities in a robust way. These probabilities can be attributed to historical satellite observations. Time series of SAR-derived flood extent maps and the associated non-exceedance probabilities can then be combined to generate flood hazard maps with a spatial resolution equal to that of the satellite images, which is usually higher than that of a global inundation model. In principle, this can be done for any area of interest in the world, provided that a sufficient number of relevant remote sensing images are available. As a test case we applied the method to the Severn River (UK) and the Zambezi River (Mozambique), where large archives of Envisat flood images can be exploited. The global ECMWF flood inundation model is used to compute the statistics of extreme events. A comparison with flood hazard maps estimated from in situ measured discharge is carried out. The first results confirm the potential of the method. However, further developments on two aspects are required to improve the quality of the hazard map and to ensure the acceptability of the product by potential end-user organizations. On the one hand, it is of paramount importance to
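Attributing an empirical non-exceedance probability from a long simulated series to an observed flood can be sketched as follows. Reducing each flood state to a single scalar "extent" (e.g. number of flooded cells) is a simplifying assumption for the example; the actual method works with maps.

```python
def non_exceedance_prob(simulated_extents, observed_extent):
    """Empirical non-exceedance probability of an observed flood extent,
    estimated from a long series of model-simulated extents: the
    fraction of simulated events that do not exceed the observation.
    """
    n = len(simulated_extents)
    return sum(1 for s in simulated_extents if s <= observed_extent) / n

# hypothetical simulated flood extents (flooded cells) over many dates
sims = [120, 90, 300, 150, 80, 200, 110, 95, 130, 170]
print(non_exceedance_prob(sims, 150))  # 7 of 10 simulated extents <= 150 -> 0.7
```

Each SAR-observed flood map then inherits the probability of the concurrent simulated state, so a stack of dated maps becomes a probability-annotated hazard product.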
Multiwire proportional counter development
NASA Technical Reports Server (NTRS)
1972-01-01
A test run was made at the Bevatron to check the character of signals induced in an electromagnetic delay line capacitively coupled to the wire cathode plane of a multiwire proportional chamber. In particular, these signals and the behavior of the readout electronics as a function of associated delta-ray production are studied. These measurements are important in assessing the effect of delta-ray background on the spatial resolution attainable for the primary ions. Test results are used to design a multiwire proportional chamber and readout system for use as spatial detectors in a superconducting magnetic spectrometer experiment.
Interpretation of laser/multi-sensor data for short range terrain modeling and hazard detection
NASA Technical Reports Server (NTRS)
Messing, B. S.
1980-01-01
A terrain modeling algorithm that would reconstruct the sensed ground images formed by the triangulation scheme, and classify as unsafe any terrain feature that would pose a hazard to a roving vehicle is described. This modeler greatly reduces quantization errors inherent in a laser/sensing system through the use of a thinning algorithm. Dual filters are employed to separate terrain steps from the general landscape, simplifying the analysis of terrain features. A crosspath analysis is utilized to detect and avoid obstacles that would adversely affect the roll of the vehicle. Computer simulations of the rover on various terrains examine the performance of the modeler.
A seismic source zone model for the seismic hazard assessment of Slovakia
NASA Astrophysics Data System (ADS)
Hók, Jozef; Kysel, Robert; Kováč, Michal; Moczo, Peter; Kristek, Jozef; Kristeková, Miriam; Šujan, Martin
2016-06-01
We present a new seismic source zone model for the seismic hazard assessment of Slovakia based on a new seismotectonic model of the territory of Slovakia and adjacent areas. The seismotectonic model has been developed using a new Slovak earthquake catalogue (SLOVEC 2011), successive division of the large-scale geological structures into tectonic regions, seismogeological domains and seismogenic structures. The main criteria for definitions of regions, domains and structures are the age of the last tectonic consolidation of geological structures, thickness of lithosphere, thickness of crust, geothermal conditions, current tectonic regime and seismic activity. The seismic source zones are presented on a 1:1,000,000 scale map.
NASA Astrophysics Data System (ADS)
Dietterich, H. R.; Lev, E.; Chen, J.; Cashman, K. V.; Honor, C.
2015-12-01
Recent eruptions in Hawai'i, Iceland, and Cape Verde highlight the need for improved lava flow models for forecasting and hazard assessment. Existing models used for lava flow simulation range in assumptions, complexity, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess the capabilities of existing models and test the development of new codes, we conduct a benchmarking study of computational fluid dynamics models for lava flows, including VolcFlow, OpenFOAM, Flow3D, and COMSOL. Using new benchmark scenarios defined in Cordonnier et al. (2015) as a guide, we model Newtonian, Herschel-Bulkley and cooling flows over inclined planes, obstacles, and digital elevation models with a wide range of source conditions. Results are compared to analytical theory, analogue and molten basalt experiments, and measurements from natural lava flows. Our study highlights the strengths and weaknesses of each code, including accuracy and computational costs, and provides insights regarding code selection. We apply the best-fit codes to simulate the lava flows in Harrat Rahat, a predominantly mafic volcanic field in Saudi Arabia. Input parameters are assembled from rheology and volume measurements of past flows using geochemistry, crystallinity, and present-day lidar and photogrammetric digital elevation models. With these data, we use our verified models to reconstruct historic and prehistoric events, in order to assess the hazards posed by lava flows for Harrat Rahat.
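One of the benchmark rheologies named above, Herschel-Bulkley, combines a yield stress with a power-law dependence on shear rate. A minimal sketch of the constitutive law (parameter names are generic, not taken from any of the cited codes):

```python
def herschel_bulkley_stress(shear_rate, tau_y, K, n):
    """Shear stress of a flowing Herschel-Bulkley material:
    tau = tau_y + K * shear_rate**n, with yield stress tau_y,
    consistency K, and flow index n. Assumes the material has
    already yielded (below tau_y there is no flow)."""
    return tau_y + K * shear_rate ** n
```

With tau_y = 0 and n = 1 the law reduces to a Newtonian fluid, which is why Newtonian flows serve as the simplest benchmark case.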
ERIC Educational Resources Information Center
Snider, Richard G.
1985-01-01
The ratio factors approach involves recognizing a given fraction, then multiplying so that units cancel. This approach, which is grounded in concrete operational thinking patterns, provides a standard for science ratio and proportion problems. Examples are included for unit conversions, mole problems, molarity, speed/density problems, and…
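The ratio-factors idea, multiplying by fractions equal to one so that unwanted units cancel, can be made concrete with a unit conversion. A small illustrative sketch, not drawn from the article itself:

```python
def convert(value, factors):
    """Multiply a measurement by a chain of ratio factors, each equal
    to one (e.g. 1000 m / 1 km), so that unwanted units cancel."""
    for numerator, denominator in factors:
        value = value * numerator / denominator
    return value

# 72 km/h -> m/s: (72 km/h) * (1000 m / 1 km) * (1 h / 3600 s)
speed_m_per_s = convert(72.0, [(1000, 1), (1, 3600)])
```

The same chain-of-fractions pattern carries over to mole and molarity problems: each factor is a conversion ratio whose units cancel against the preceding step.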
Selecting Proportional Reasoning Tasks
ERIC Educational Resources Information Center
de la Cruz, Jessica A.
2013-01-01
With careful consideration given to task selection, students can construct their own solution strategies to solve complex proportional reasoning tasks while the teacher's instructional goals are still met. Several aspects of the tasks should be considered including their numerical structure, context, difficulty level, and the strategies they are…
ERIC Educational Resources Information Center
Markworth, Kimberly A.
2012-01-01
Students may be able to set up a relevant proportion and solve through cross multiplication. However, this ability may not reflect the desired mathematical understanding of the covarying relationship that exists between two variables or the equivalent relationship that exists between two ratios. Students who lack this understanding are likely to…
Fuzzy multi-objective chance-constrained programming model for hazardous materials transportation
NASA Astrophysics Data System (ADS)
Du, Jiaoman; Yu, Lean; Li, Xiang
2016-04-01
Hazardous materials transportation is an important and pressing public safety issue. Based on the shortest path model, this paper presents a fuzzy multi-objective programming model that minimizes the transportation risk to life, travel time and fuel consumption. First, we present the risk model, travel time model and fuel consumption model. We then formulate a chance-constrained programming model within the framework of credibility theory, in which the lengths of arcs in the transportation network are assumed to be fuzzy variables. A hybrid intelligent algorithm integrating fuzzy simulation and a genetic algorithm is designed for finding a satisfactory solution. Finally, some numerical examples are given to demonstrate the efficiency of the proposed model and algorithm.
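The deterministic core underlying such a model is a shortest-path search over a network whose arcs carry several attributes. A much-simplified sketch that scalarizes (risk, time, fuel) with fixed weights and runs Dijkstra's algorithm; the paper's fuzzy arc lengths, credibility-based chance constraints, and genetic algorithm are deliberately omitted here:

```python
import heapq

def weighted_shortest_path(graph, src, dst, weights):
    """graph: node -> {neighbor: (risk, time, fuel)}.
    Scalarize each arc with the given weights and run Dijkstra."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, attrs in graph.get(u, {}).items():
            cost = sum(w * a for w, a in zip(weights, attrs))
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # reconstruct the path (assumes dst is reachable from src)
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    path.reverse()
    return path, dist[dst]
```

In the fuzzy chance-constrained setting, the crisp arc cost would be replaced by a credibility measure on fuzzy arc lengths, and the search by a genetic algorithm over candidate routes.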
Global Hydrological Hazard Evaluation System (Global BTOP) Using Distributed Hydrological Model
NASA Astrophysics Data System (ADS)
Gusyev, M.; Magome, J.; Hasegawa, A.; Takeuchi, K.
2015-12-01
A global hydrological hazard evaluation system based on the BTOP models (Global BTOP) is introduced and quantifies flood and drought hazards with simulated river discharges globally for historical, near real-time monitoring and climate change impact studies. The BTOP model utilizes a modified topographic index concept and simulates rainfall-runoff processes including snowmelt, overland flow, soil moisture in the root and unsaturated zones, sub-surface flow, and river flow routing. The current global BTOP is constructed from global data on a 10-min grid and is available to conduct river basin analysis on local, regional, and global scales. To reduce the impact of a coarse resolution, topographical features of global BTOP were obtained using a river network upscaling algorithm that preserves fine resolution characteristics of the 3-arcsec HydroSHEDS and 30-arcsec Hydro1K datasets. In addition, GLCC-IGBP land cover (USGS) and the DSMW (FAO) were used for the root zone depth and soil properties, respectively. The long-term seasonal potential evapotranspiration within the BTOP model was estimated by the Shuttleworth-Wallace model using climate forcing data CRU TS3.1 and GIMMS-NDVI (UMD/GLCF). The global BTOP was run with globally available precipitation such as the APHRODITE dataset and showed good statistical performance compared to the global and local river discharge data in the major river basins. From these simulated daily river discharges at each grid, the flood peak discharges of selected return periods were obtained using the Gumbel distribution with L-moments, and the hydrological drought hazard was quantified using the standardized runoff index (SRI). For dynamic (near real-time) applications, the global BTOP model is run with GSMaP-NRT global precipitation, and simulated daily river discharges are utilized in a prototype near-real-time discharge simulation system (GFAS-Streamflow), which is used to issue flood peak discharge alerts globally. The global BTOP system and GFAS
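The Gumbel-with-L-moments step mentioned above has a compact closed form. A sketch using the usual unbiased probability-weighted-moment estimators (function names are mine, not from the system):

```python
import math

EULER_GAMMA = 0.5772156649015329

def fit_gumbel_lmoments(sample):
    """Fit Gumbel location (xi) and scale (alpha) by L-moments,
    via the unbiased probability-weighted moments b0 and b1."""
    x = sorted(sample)
    n = len(x)
    b0 = sum(x) / n                                    # l1 = mean
    b1 = sum(i * v for i, v in enumerate(x)) / (n * (n - 1))
    l1, l2 = b0, 2 * b1 - b0                           # L-scale
    alpha = l2 / math.log(2)                           # Gumbel scale
    xi = l1 - EULER_GAMMA * alpha                      # Gumbel location
    return xi, alpha

def gumbel_quantile(xi, alpha, T):
    """Flood peak with return period T (non-exceedance prob 1 - 1/T)."""
    return xi - alpha * math.log(-math.log(1 - 1 / T))
```

Applying `gumbel_quantile` at each grid cell for, say, T = 10, 50, and 100 years yields the return-period flood peak discharges described above.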
NASA Astrophysics Data System (ADS)
Sampson, Christopher; Smith, Andrew; Bates, Paul; Neal, Jeffrey; Trigg, Mark
2015-12-01
Global flood hazard models have recently become a reality thanks to the release of open access global digital elevation models, the development of simplified and highly efficient flow algorithms, and the steady increase in computational power. In this commentary we argue that although the availability of open access global terrain data has been critical in enabling the development of such models, the relatively poor resolution and precision of these data now significantly limit our ability to estimate flood inundation and risk for the majority of the planet's surface. The difficulty of deriving an accurate 'bare-earth' terrain model, due to the interaction of vegetation and urban structures with the satellite-based remote sensors, means that global terrain data are often poorest in the areas where people and property (and thus vulnerability) are most concentrated. Furthermore, the current generation of open access global terrain models are over a decade old, and many large floodplains, particularly those in developing countries, have undergone significant change in this time. There is therefore a pressing need for a new generation of high resolution and high vertical precision open access global digital elevation models to allow significantly improved global flood hazard models to be developed.
CyberShake: A Physics-Based Seismic Hazard Model for Southern California
Graves, R.; Jordan, T.H.; Callaghan, S.; Deelman, E.; Field, E.; Juve, G.; Kesselman, C.; Maechling, P.; Mehta, G.; Milner, K.; Okaya, D.; Small, P.; Vahi, K.
2011-01-01
CyberShake, as part of the Southern California Earthquake Center's (SCEC) Community Modeling Environment, is developing a methodology that explicitly incorporates deterministic source and wave propagation effects within seismic hazard calculations through the use of physics-based 3D ground motion simulations. To calculate a waveform-based seismic hazard estimate for a site of interest, we begin with Uniform California Earthquake Rupture Forecast, Version 2.0 (UCERF2.0) and identify all ruptures within 200 km of the site of interest. We convert the UCERF2.0 rupture definition into multiple rupture variations with differing hypocenter locations and slip distributions, resulting in about 415,000 rupture variations per site. Strain Green Tensors are calculated for the site of interest using the SCEC Community Velocity Model, Version 4 (CVM4), and then, using reciprocity, we calculate synthetic seismograms for each rupture variation. Peak intensity measures are then extracted from these synthetics and combined with the original rupture probabilities to produce probabilistic seismic hazard curves for the site. Being explicitly site-based, CyberShake directly samples the ground motion variability at that site over many earthquake cycles (i. e., rupture scenarios) and alleviates the need for the ergodic assumption that is implicitly included in traditional empirically based calculations. Thus far, we have simulated ruptures at over 200 sites in the Los Angeles region for ground shaking periods of 2 s and longer, providing the basis for the first generation CyberShake hazard maps. Our results indicate that the combination of rupture directivity and basin response effects can lead to an increase in the hazard level for some sites, relative to that given by a conventional Ground Motion Prediction Equation (GMPE). Additionally, and perhaps more importantly, we find that the physics-based hazard results are much more sensitive to the assumed magnitude-area relations and
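The final step described above, combining peak intensity measures from the synthetics with rupture probabilities into a probabilistic hazard curve, can be sketched as a sum over ruptures of annual rates times exceedance fractions. A toy illustration; the data layout is an assumption for clarity, not CyberShake's actual format:

```python
def hazard_curve(ruptures, im_levels):
    """Annual exceedance rate at each intensity-measure (IM) level.
    ruptures: list of (annual_rate, [IM of each rupture variation]),
    where variations sample hypocenter locations and slip patterns."""
    curve = []
    for x in im_levels:
        rate_sum = 0.0
        for annual_rate, ims in ruptures:
            # fraction of this rupture's variations exceeding level x
            frac_exceeding = sum(1 for im in ims if im > x) / len(ims)
            rate_sum += annual_rate * frac_exceeding
        curve.append((x, rate_sum))
    return curve
```

Because the variations are simulated for the specific site, the ground-motion variability enters through the per-rupture IM samples rather than through an ergodic empirical model.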
Assessing rainfall triggered landslide hazards through physically based models under uncertainty
NASA Astrophysics Data System (ADS)
Balin, D.; Metzger, R.; Fallot, J. M.; Reynard, E.
2009-04-01
Hazard and risk assessment require, besides good data, good simulation capabilities to allow prediction of events and their consequences. The present study introduces a landslide hazard assessment strategy based on the coupling of physically based hydrological models with slope stability models that can cope with uncertainty in input data and model parameters. The hydrological model used is based on the Water balance Simulation Model, WASIM-ETH (Schulla et al., 1997), a fully distributed hydrological model that has previously been used successfully in alpine regions to simulate runoff, snowmelt, glacier melt, and soil erosion, as well as the impact of climate change on these. The study region is the Vallon de Nant catchment (10 km2) in the Swiss Alps. A sound sensitivity analysis will be conducted in order to choose the discretization threshold, derived from a laser DEM model, at which the hydrological model yields the best compromise between performance and computation time. The hydrological model will be further coupled with slope stability methods (which use the topographic index and the soil moisture derived from the hydrological model) to simulate the spatial distribution of the initiation areas of different geomorphic processes such as debris flows and rainfall-triggered landslides. To calibrate the WASIM-ETH model, a Markov chain Monte Carlo Bayesian approach is preferred (Balin, 2004; Schaefli et al., 2006). The model is used in single- and multi-objective frameworks to simulate discharge and soil moisture, with uncertainty, at representative locations. This information is further used to assess the potential initiation areas for rainfall-triggered landslides and to study the impact of uncertain input data, model parameters and simulated responses (discharge and soil moisture) on the modelling of geomorphological processes.
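The Bayesian calibration step can be illustrated with the simplest MCMC variant, a random-walk Metropolis sampler. In this sketch the study's actual likelihood (WASIM-ETH discharge and soil moisture against observations) is replaced by a toy Gaussian location model:

```python
import math
import random

def log_likelihood(theta, data, sigma=1.0):
    """Gaussian log-likelihood (up to a constant) of a location parameter."""
    return -sum((d - theta) ** 2 for d in data) / (2 * sigma ** 2)

def metropolis(data, n_iter=3000, step=0.5, seed=1):
    """Random-walk Metropolis chain for the location parameter,
    starting from theta = 0 with a flat prior."""
    rng = random.Random(seed)
    theta = 0.0
    ll = log_likelihood(theta, data)
    chain = []
    for _ in range(n_iter):
        proposal = theta + rng.gauss(0.0, step)
        ll_prop = log_likelihood(proposal, data)
        # accept with probability min(1, exp(ll_prop - ll))
        if math.log(rng.random()) < ll_prop - ll:
            theta, ll = proposal, ll_prop
        chain.append(theta)
    return chain
```

The post-burn-in chain approximates the posterior of the calibrated parameter; in the multi-objective case the log-likelihood would combine discharge and soil-moisture terms.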
NASA Astrophysics Data System (ADS)
Yazdani, Azad; Nicknam, Ahmad; Dadras, Ehsan Yousefi; Eftekhari, Seyed Nasrollah
2017-01-01
Ground motions are affected by directivity effects at near-fault regions which result in low-frequency cycle pulses at the beginning of the velocity time history. The directivity features of near-fault ground motions can lead to significant increase in the risk of earthquake-induced damage on engineering structures. The ordinary probabilistic seismic hazard analysis (PSHA) does not take into account such effects; recent studies have thus proposed new frameworks to incorporate directivity effects in PSHA. The objective of this study is to develop the seismic hazard mapping of Tehran City according to near-fault PSHA procedure for different return periods. To this end, the directivity models required in the modified PSHA were developed based on a database of the simulated ground motions. The simulated database was used in this study because there are no recorded near-fault data in the region to derive purely empirically based pulse prediction models. The results show that the directivity effects can significantly affect the estimate of regional seismic hazard.
NASA Astrophysics Data System (ADS)
Wu, Qing
Millions of people across the world suffer from noise-induced hearing loss (NIHL), especially under working conditions involving either continuous Gaussian or non-Gaussian noise that can impair hearing function. Impulse noise is a typical non-Gaussian noise exposure in military and industrial settings, and causes severe hearing loss. This study focuses on the characterization of impulse noise using digital signal analysis methods and on prediction of the auditory hazard of impulse-noise-induced hearing loss with the Auditory Hazard Assessment Algorithm for Humans (AHAAH) model. A digital noise exposure system has been developed to produce impulse noises with peak sound pressure level (SPL) up to 160 dB. The characterization of impulse noise generated by the system has been investigated and analyzed in both the time and frequency domains. Furthermore, the effects of key parameters of impulse noise on the auditory risk unit (ARU) are investigated using both simulated and experimentally measured impulse noise signals in the AHAAH model. The results showed that the ARUs increased monotonically as the peak pressure (both P+ and P-) increased. As the time duration increased, the ARUs first increased and then decreased, with the peak ARU occurring at about t = 0.2 ms (for both t+ and t-). In addition, the auditory hazard of experimentally measured impulse noise signals demonstrated a monotonically increasing relationship between ARUs and system voltages.
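The 160 dB peak figure quoted above follows from the standard sound pressure level definition with a 20 micropascal reference. A minimal sketch:

```python
import math

P_REF = 20e-6  # reference pressure, 20 micropascals

def peak_spl(pressure_pa):
    """Peak sound pressure level in dB from a pressure waveform in Pa."""
    p_peak = max(abs(p) for p in pressure_pa)
    return 20.0 * math.log10(p_peak / P_REF)
```

A peak pressure of 2000 Pa, for example, corresponds to 160 dB SPL, the upper limit of the exposure system described above.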
Beyond Flood Hazard Maps: Detailed Flood Characterization with Remote Sensing, GIS and 2d Modelling
NASA Astrophysics Data System (ADS)
Santillan, J. R.; Marqueso, J. T.; Makinano-Santillan, M.; Serviano, J. L.
2016-09-01
Flooding is considered one of the most destructive natural disasters, and understanding floods and assessing the risks associated with them are becoming increasingly important. In the Philippines, Remote Sensing (RS) and Geographic Information Systems (GIS) are the two main technologies used in the nationwide modelling and mapping of flood hazards. Although the currently available high resolution flood hazard maps have become very valuable, their use for flood preparedness and mitigation can be maximized by enhancing the layers of information these maps portray. In this paper, we present an approach based on RS, GIS and two-dimensional (2D) flood modelling to generate new flood layers (in addition to the usual flood depth and hazard layers) that are also very useful in flood disaster management, such as flood arrival times, flood velocities, flood duration, flood recession times, and the percentage of a given flood event period during which a particular location is inundated. The availability of these new layers of flood information is crucial for better decision making before, during, and after the occurrence of a flood disaster. The generation of these new flood characteristic layers is illustrated using the Cabadbaran River Basin in Mindanao, Philippines as a case study area. It is envisioned that these detailed maps can be considered as additional inputs in flood disaster risk reduction and management in the Philippines.
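Several of the new layers described above (arrival time, recession time, and the fraction of the event period a location is inundated) can each be read off a modeled per-cell depth time series with a wetting threshold. A minimal sketch; the 0.05 m threshold is an assumption, not a value from the paper:

```python
def flood_timing(times, depths, threshold=0.05):
    """Arrival time, recession time, and fraction of the event period
    inundated, from a modeled water-depth time series at one cell.
    Returns None for cells that never flood."""
    wet = [t for t, d in zip(times, depths) if d >= threshold]
    if not wet:
        return None
    arrival, recession = wet[0], wet[-1]
    fraction = len(wet) / len(times)
    return arrival, recession, fraction
```

Running this over every cell of the 2D model output produces the additional hazard layers as rasters.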
A model standardized risk assessment protocol for use with hazardous waste sites.
Marsh, G M; Day, R
1991-01-01
This paper presents a model standardized risk assessment protocol (SRAP) for use with hazardous waste sites. The proposed SRAP focuses on the degree and patterns of evidence that exist for a significant risk to human populations from exposure to a hazardous waste site. The SRAP was designed with at least four specific goals in mind: to organize the available scientific data on a specific site and to highlight important gaps in this knowledge; to facilitate rational, cost-effective decision making about the best distribution of available manpower and resources; to systematically classify sites roughly according to the level of risk they pose to surrounding human populations; and to promote an improved level of communication among professionals working in the area of waste site management and between decision makers and the local population. PMID:2050062
Detailed Flood Modeling and Hazard Assessment from Storm Tides, Rainfall and Sea Level Rise
NASA Astrophysics Data System (ADS)
Orton, P. M.; Hall, T. M.; Georgas, N.; Conticello, F.; Cioffi, F.; Lall, U.; Vinogradov, S. V.; Blumberg, A. F.
2014-12-01
A flood hazard assessment has been conducted for the Hudson River from New York City to Troy at the head of tide, using a three-dimensional hydrodynamic model and merging hydrologic inputs and storm tides from tropical and extra-tropical cyclones, as well as spring freshet floods. Our recent work showed that neglecting freshwater flows leads to underestimation of peak water levels at up-river sites and neglecting stratification (typical with two-dimensional modeling) leads to underestimation all along the Hudson. The hazard assessment framework utilizes a representative climatology of over 1000 synthetic tropical cyclones (TCs) derived from a statistical-stochastic TC model, and historical extra-tropical cyclones and freshets from 1950-present. Hydrodynamic modeling is applied with seasonal variations in mean sea level and ocean and estuary stratification. The model is the Stevens ECOM model and is separately used for operational ocean forecasts on the NYHOPS domain (http://stevens.edu/NYHOPS). For the synthetic TCs, an Artificial Neural Network/ Bayesian multivariate approach is used for rainfall-driven freshwater inputs to the Hudson, translating the TC attributes (e.g. track, SST, wind speed) directly into tributary stream flows (see separate presentation by Cioffi for details). Rainfall intensity has been rising in recent decades in this region, and here we will also examine the sensitivity of Hudson flooding to future climate warming-driven increases in storm precipitation. The hazard assessment is being repeated for several values of sea level, as projected for future decades by the New York City Panel on Climate Change. Recent studies have given widely varying estimates of the present-day 100-year flood at New York City, from 2.0 m to 3.5 m, and special emphasis will be placed on quantifying our study's uncertainty.
Schlüter, Daniela K; Ndeffo-Mbah, Martial L; Takougang, Innocent; Ukety, Tony; Wanji, Samuel; Galvani, Alison P; Diggle, Peter J
2016-12-01
Lymphatic Filariasis and Onchocerciasis (river blindness) constitute pressing public health issues in tropical regions. Global elimination programs, involving mass drug administration (MDA), have been launched by the World Health Organisation. Although the drugs used are generally well tolerated, individuals who are highly co-infected with Loa loa are at risk of experiencing serious adverse events. Highly infected individuals are more likely to be found in communities with high prevalence. An understanding of the relationship between individual infection and population-level prevalence can therefore inform decisions on whether MDA can be safely administered in an endemic community. Based on Loa loa infection intensity data from individuals in Cameroon, the Republic of the Congo and the Democratic Republic of the Congo we develop a statistical model for the distribution of infection levels in communities. We then use this model to make predictive inferences regarding the proportion of individuals whose parasite count exceeds policy-relevant levels. In particular we show how to exploit the positive correlation between community-level prevalence and intensity of infection in order to predict the proportion of highly infected individuals in a community given only prevalence data from the community in question. The resulting prediction intervals are not substantially wider, and in some cases narrower, than the corresponding binomial confidence intervals obtained from data that include measurements of individual infection levels. Therefore the model developed here facilitates the estimation of the proportion of individuals highly infected with Loa loa using only estimated community level prevalence. It can be used to assess the risk of rolling out MDA in a specific community, or to guide policy decisions.
Cognitive and Metacognitive Aspects of Proportional Reasoning
ERIC Educational Resources Information Center
Modestou, Modestina; Gagatsis, Athanasios
2010-01-01
In this study we attempt to propose a new model of proportional reasoning based both on bibliographical and research data. This is impelled with the help of three written tests involving analogical, proportional, and non-proportional situations that were administered to pupils from grade 7 to 9. The results suggest the existence of a…
Seismic hazard assessment in central Ionian Islands area (Greece) based on stress release models
NASA Astrophysics Data System (ADS)
Votsi, Irene; Tsaklidis, George; Papadimitriou, Eleftheria
2011-08-01
The long-term probabilistic seismic hazard of central Ionian Islands (Greece) is studied through the application of stress release models. In order to identify statistically distinct regions, the study area is divided into two subareas, namely Kefalonia and Lefkada, on the basis of seismotectonic properties. Previous results evidenced the existence of stress transfer and interaction between the Kefalonia and Lefkada fault segments. For the consideration of stress transfer and interaction, the linked stress release model is applied. A new model is proposed, where the hazard rate function in terms of X(t) has the form of the Weibull distribution. The fitted models are evaluated through residual analysis and the best of them is selected through the Akaike information criterion. Based on AIC, the results demonstrate that the simple stress release model fits the Ionian data better than the non-homogeneous Poisson and the Weibull models. Finally, the thinning simulation method is applied in order to produce simulated data and proceed to forecasting.
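The thinning simulation mentioned above can be illustrated for a Weibull hazard rate. A sketch of Lewis-Shedler thinning for the next event time; it is valid here because the Weibull hazard with shape k > 1 is increasing, so its value at the end of the window bounds it. Function names and the window length are illustrative assumptions:

```python
import random

def weibull_hazard(t, k, lam):
    """Weibull hazard rate h(t) = (k/lam) * (t/lam)**(k - 1)."""
    return (k / lam) * (t / lam) ** (k - 1)

def next_event_by_thinning(k, lam, horizon=10.0, seed=2):
    """Lewis-Shedler thinning for the next event time on [0, horizon].
    Candidates are drawn from a Poisson process at the bounding rate
    hmax and accepted with probability h(t) / hmax."""
    rng = random.Random(seed)
    hmax = weibull_hazard(horizon, k, lam)   # hazard is increasing for k > 1
    t = 0.0
    while True:
        t += rng.expovariate(hmax)           # next candidate arrival
        if t >= horizon:
            return None                      # no event in the window
        if rng.random() < weibull_hazard(t, k, lam) / hmax:
            return t                         # candidate accepted
```

Repeating this from each accepted event time yields the simulated catalogues used for forecasting.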
Garcia, Erika; Hurley, Susan; Nelson, David O; Gunier, Robert B; Hertz, Andrew; Reynolds, Peggy
2014-08-01
Elevated breast cancer incidence rates in urban areas have led to speculation regarding the potential role of air pollution. In order to inform the exposure assessment for a subsequent breast cancer study, we evaluated agreement between modeled and monitored hazardous air pollutants (HAPs). Modeled annual ambient concentrations of HAPs in California came from the US Environmental Protection Agency's National Air Toxics Assessment database for 1996, 1999, 2002, and 2005, and corresponding monitored data came from the California Air Resources Board's air quality monitoring program. We selected 12 compounds of interest for our study and focused on evaluating agreement between modeled and monitored data and on temporal trends. Modeled data generally underestimated the monitored data, especially in 1996. For most compounds agreement between modeled and monitored concentrations improved over time. We concluded that the 2002 and 2005 modeled data agree best with monitored data and are the most appropriate years for direct use in our subsequent epidemiologic analysis.
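Model-monitor agreement of the kind evaluated above is commonly summarized by mean bias and correlation. A minimal sketch (the study may well report different or additional statistics):

```python
import math

def agreement_stats(modeled, monitored):
    """Mean bias (modeled minus monitored) and Pearson correlation
    between paired annual concentration series."""
    n = len(modeled)
    bias = sum(m - o for m, o in zip(modeled, monitored)) / n
    mu_m = sum(modeled) / n
    mu_o = sum(monitored) / n
    cov = sum((m - mu_m) * (o - mu_o) for m, o in zip(modeled, monitored))
    var_m = sum((m - mu_m) ** 2 for m in modeled)
    var_o = sum((o - mu_o) ** 2 for o in monitored)
    r = cov / math.sqrt(var_m * var_o)
    return bias, r
```

A negative bias with high correlation would match the pattern reported above: systematic underestimation with preserved relative ranking across sites.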
Modelling hazardous surface hoar layers in the mountain snowpack over space and time
NASA Astrophysics Data System (ADS)
Horton, Simon Earl
Surface hoar layers are a common failure layer in hazardous snow slab avalanches. Surface hoar crystals (frost) initially form on the surface of the snow, and once buried can remain a persistent weak layer for weeks or months. Avalanche forecasters have difficulty tracking the spatial distribution and mechanical properties of these layers in mountainous terrain. This thesis presents numerical models and remote sensing methods to track the distribution and properties of surface hoar layers over space and time. The formation of surface hoar was modelled with meteorological data by calculating the downward flux of water vapour from the atmospheric boundary layer. The timing of surface hoar formation and the modelled crystal size was verified at snow study sites throughout western Canada. The major surface hoar layers over several winters were predicted with fair success. Surface hoar formation was modelled over various spatial scales using meteorological data from weather forecast models. The largest surface hoar crystals formed in regions and elevation bands with clear skies, warm and humid air, cold snow surfaces, and light winds. Field surveys measured similar regional-scale patterns in surface hoar distribution. Surface hoar formation patterns on different slope aspects were observed, but were not modelled reliably. Mechanical field tests on buried surface hoar layers found layers increased in shear strength over time, but had persistent high propensity for fracture propagation. Layers with large crystals and layers overlying hard melt-freeze crusts showed greater signs of instability. Buried surface hoar layers were simulated with the snow cover model SNOWPACK and verified with avalanche observations, finding most hazardous surface hoar layers were identified with a structural stability index. Finally, the optical properties of surface hoar crystals were measured in the field with spectral instruments. Large plate-shaped crystals were less reflective at shortwave
Nowcast model for hazardous material spill prevention and response, San Francisco Bay, California
Cheng, Ralph T.; Wilmot, Wayne L.; Galt, Jerry A.
1997-01-01
The National Oceanic and Atmospheric Administration (NOAA) installed the Physical Oceanographic Real-time System (PORTS) in San Francisco Bay, California, to provide real-time observations of tides, tidal currents, and meteorological conditions to, among other purposes, guide hazardous material spill prevention and response. Integrated with nowcast modeling techniques and with the dissemination of real-time data and nowcast results over the Internet, the emerging technologies used in PORTS for real-time data collection form a nowcast modeling system. Users can download tide and tidal current distributions in San Francisco Bay for their specific applications and/or for further analysis.
NASA Technical Reports Server (NTRS)
Dumbauld, R. K.; Bjorklund, J. R.; Bowers, J. F.
1973-01-01
The NASA/MSFC multilayer diffusion models are described, which are used in applying meteorological information to the estimation of toxic fuel hazards resulting from the launch of rocket vehicles and from accidental cold spills and leaks of toxic fuels. Background information, definitions of terms, and a description of the multilayer concept are presented, along with formulas for determining the buoyant rise of hot exhaust clouds or plumes from conflagrations, and descriptions of the multilayer diffusion models. A brief description of the computer program is given, and sample problems and their solutions are included. Derivations of the cloud rise formulas, user instructions, and computer program output lists are also included.
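The abstract mentions formulas for the buoyant rise of hot exhaust clouds. As a rough illustration only (the classic Briggs transitional-rise relation, not the NASA/MSFC formulation, with hypothetical source values):

```python
import math

def buoyancy_flux(g, w0, r0, t_plume, t_air):
    """Initial buoyancy flux F (m^4/s^3) of a hot exhaust plume with exit
    velocity w0 (m/s), radius r0 (m), and plume/ambient temperatures (K)."""
    return g * w0 * r0 ** 2 * (t_plume - t_air) / t_plume

def briggs_transitional_rise(F, u, x):
    """Briggs (1969) transitional plume rise (m) at downwind distance x (m)
    for a buoyant plume bent over in a wind of speed u (m/s)."""
    return 1.6 * F ** (1.0 / 3.0) * x ** (2.0 / 3.0) / u

# Illustrative values (hypothetical, not taken from the report):
F = buoyancy_flux(9.81, w0=20.0, r0=2.0, t_plume=600.0, t_air=288.0)
dh = briggs_transitional_rise(F, u=5.0, x=1000.0)
```

The cube-root dependence on buoyancy flux means rise is fairly insensitive to source strength, while wind speed enters inversely.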
Multiple Landslide-Hazard Scenarios Modeled for the Oakland-Berkeley Area, Northern California
Pike, Richard J.; Graymer, Russell W.
2008-01-01
With the exception of Los Angeles, perhaps no urban area in the United States is more at risk from landsliding, triggered by either precipitation or earthquake, than the San Francisco Bay region of northern California. By January each year, seasonal winter storms usually bring moisture levels of San Francisco Bay region hillsides to the point of saturation, after which additional heavy rainfall may induce landslides of various types and levels of severity. In addition, movement at any time along one of several active faults in the area may generate an earthquake large enough to trigger landslides. The danger to life and property rises each year as local populations continue to expand and more hillsides are graded for development of residential housing and its supporting infrastructure. The chapters in the text consist of:
* Introduction, by Russell W. Graymer
* Chapter 1. Rainfall Thresholds for Landslide Activity, San Francisco Bay Region, Northern California, by Raymond C. Wilson
* Chapter 2. Susceptibility to Deep-Seated Landsliding Modeled for the Oakland-Berkeley Area, Northern California, by Richard J. Pike and Steven Sobieszczyk
* Chapter 3. Susceptibility to Shallow Landsliding Modeled for the Oakland-Berkeley Area, Northern California, by Kevin M. Schmidt and Steven Sobieszczyk
* Chapter 4. Landslide Hazard Modeled for the Cities of Oakland, Piedmont, and Berkeley, Northern California, from a M=7.1 Scenario Earthquake on the Hayward Fault Zone, by Scott B. Miles and David K. Keefer
* Chapter 5. Synthesis of Landslide-Hazard Scenarios Modeled for the Oakland-Berkeley Area, Northern California, by Richard J. Pike
The plates consist of:
* Plate 1. Susceptibility to Deep-Seated Landsliding Modeled for the Oakland-Berkeley Area, Northern California, by Richard J. Pike, Russell W. Graymer, Sebastian Roberts, Naomi B. Kalman, and Steven Sobieszczyk
* Plate 2. Susceptibility to Shallow Landsliding Modeled for the Oakland-Berkeley Area, Northern California, by Kevin M. Schmidt and Steven
Rumynin, V.G.; Mironenko, V.A.; Konosavsky, P.K.; Pereverzeva, S.A.
1994-07-01
This paper introduces some modeling approaches for predicting the influence of hazardous accidents at nuclear reactors on groundwater quality. Possible pathways for radioactive releases from nuclear power plants were considered in order to conceptualize boundary conditions for solving the subsurface radionuclide transport problems. Approaches to incorporating physical and chemical interactions into transport simulators have been developed. The hydrogeological forecasts were based on numerical and semi-analytical scale-dependent models. They have been applied to assess the possible impact of the nuclear power plants designed in Russia on groundwater reservoirs.
Multiwire proportional chamber development
NASA Technical Reports Server (NTRS)
Doolittle, R. F.; Pollvogt, U.; Eskovitz, A. J.
1973-01-01
The development of large-area multiwire proportional chambers (MWPCs), to be used as high-resolution spatial detectors in cosmic ray experiments, is described. A readout system was developed which uses a directly coupled, lumped-element delay line whose characteristics are independent of the MWPC design. A complete analysis of the delay line and the readout electronics shows that a spatial resolution of about 0.1 mm can be reached with the MWPC operating in the strictly proportional region. This was confirmed by measurements with a small MWPC and Fe-55 X-rays. A simplified analysis was carried out to estimate the theoretical limit of spatial resolution due to delta rays, spread of the discharge along the anode wire, and inclined trajectories. To calculate the gas gain of MWPCs of different geometrical configurations, a method was developed which is based on knowledge of the first Townsend coefficient of the chamber gas.
NASA Astrophysics Data System (ADS)
Aronica, Giuseppe T.; Cascone, Ernesto; Randazzo, Giovanni; Biondi, Giovanni; Lanza, Stefania; Fraccarollo, Luigi; Brigandi, Giuseppina
2010-05-01
Catastrophic events periodically occur in the area of Messina (Sicily, Italy). Both in October 2007 and in October 2009, debris/mud flows triggered by heavy rainfall affected various towns and villages located along the Ionian coast of the municipality, highlighting the destructive potential of these events. The two events gave rise to severe property damage, and in the latter more than 40 people were killed. The objective of this study is to present an integrated modelling approach based on three different models, namely a hydrological model, a slope stability model, and a hydraulic model, to identify potential debris-flow hazard areas. A continuous semi-distributed form of the well-known hydrological model IHACRES has been used to derive soil moisture conditions by simulating the infiltration process in Hortonian form; the soil is conceptually schematized with a catchment storage parameter which represents catchment wetness/soil moisture. The slope stability model identifies potential debris-flow sources and is based on the model SHALSTAB, which detects those parts of the catchment whose stability conditions are strongly affected by pore-water pressure build-up due to local rainfall and soil conductivity, and those parts of the basin that, conversely, are unconditionally stable under static loading conditions. Assuming that the solids and the interstitial fluid move downstream with the same velocity, debris-flow propagation is described using a two-dimensional depth-averaged model. Based on extensive sediment sampling and morphological observations, the rheological characterization of the flowing mixture, along with erosion/deposition mechanisms, is carefully considered in the model. The differential equations are integrated with an implicit Galerkin finite element scheme or, alternatively, with finite volume methods. To illustrate this approach, the proposed methodology is applied to a debris flow that occurred in the Mastroguglielmo catchment in
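The SHALSTAB component described above couples a steady-state hydrologic model to infinite-slope stability. A minimal sketch of the resulting critical-rainfall criterion, in its standard cohesionless form (an assumption for illustration, not the authors' exact implementation; all parameter values are hypothetical):

```python
import math

def shalstab_critical_rainfall(T, theta_deg, a_over_b, rho_s=1700.0,
                               rho_w=1000.0, phi_deg=35.0):
    """Critical steady-state rainfall (m/day) for shallow failure after
    Montgomery & Dietrich's SHALSTAB (cohesionless infinite slope).
    T: soil transmissivity (m^2/day); a_over_b: upslope drainage area per
    unit contour width (m); theta: slope angle; phi: friction angle."""
    theta = math.radians(theta_deg)
    phi = math.radians(phi_deg)
    # relative saturated thickness h/z required to trigger failure
    ratio = (rho_s / rho_w) * (1.0 - math.tan(theta) / math.tan(phi))
    if ratio >= 1.0:
        return float("inf")  # unconditionally stable: even full saturation cannot fail
    if ratio <= 0.0:
        return 0.0           # unconditionally unstable, even when dry
    return T * math.sin(theta) * ratio / a_over_b
```

Cells returning 0 or infinity correspond to the "unconditionally unstable/stable" classes mentioned in the abstract; intermediate values rank susceptibility.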
Doubly Robust and Efficient Estimation of Marginal Structural Models for the Hazard Function
Zheng, Wenjing; Petersen, Maya; van der Laan, Mark
2016-01-01
In social and health sciences, many research questions involve understanding the causal effect of a longitudinal treatment on mortality (or time-to-event outcomes in general). Often, treatment status may change in response to past covariates that are risk factors for mortality, and in turn, treatment status may also affect such subsequent covariates. In these situations, Marginal Structural Models (MSMs), introduced by Robins (1997), are well-established and widely used tools to account for time-varying confounding. In particular, an MSM can be used to specify the intervention-specific counterfactual hazard function, i.e., the hazard for the outcome of a subject in an ideal experiment where he/she was assigned to follow a given intervention on their treatment variables. The parameters of this hazard MSM are traditionally estimated using Inverse Probability Weighted estimation (IPTW; van der Laan and Petersen (2007), Robins et al. (2000b), Robins (1999), Robins et al. (2008)). This estimator is easy to implement and admits Wald-type confidence intervals. However, its consistency hinges on the correct specification of the treatment allocation probabilities, and the estimates are generally sensitive to large treatment weights (especially in the presence of strong confounding), which are difficult to stabilize for dynamic treatment regimes. In this paper, we present a pooled targeted maximum likelihood estimator (TMLE; van der Laan and Rubin (2006)) for the MSM for the hazard function under longitudinal dynamic treatment regimes. The proposed estimator is semiparametric efficient and doubly robust, and hence offers bias reduction and efficiency gain over the incumbent IPTW estimator. Moreover, the substitution principle rooted in the TMLE potentially mitigates the sensitivity to large treatment weights in IPTW. We compare the performance of the proposed estimator with the IPTW and a non-targeted substitution estimator in a simulation study. PMID:27227723
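The incumbent IPTW estimator can be sketched in the simplest possible setting: a point (non-longitudinal) binary treatment and a discrete-time hazard. The simulation, treatment model, and hazard values below are toy assumptions for illustration, not the paper's longitudinal TMLE:

```python
import random

random.seed(1)

def simulate(n=5000):
    """Toy data: confounder L, treatment A depending on L, discrete event time T."""
    data = []
    for _ in range(n):
        L = random.random() < 0.5          # binary risk factor
        A = random.random() < (0.7 if L else 0.3)  # confounded treatment
        h = 0.10 + 0.10 * L - 0.05 * A     # true discrete-time hazard
        T = 1
        while random.random() > h and T < 20:
            T += 1
        data.append((L, A, T))
    return data

def ipw_hazard(data, a, t):
    """IPW estimate of the counterfactual discrete hazard P(T = t | T >= t)
    had everyone received treatment level a."""
    pL = {}  # treatment model P(A = 1 | L), estimated nonparametrically
    for Lval in (False, True):
        sub = [A for (L, A, _) in data if L == Lval]
        pL[Lval] = sum(sub) / len(sub)
    num = den = 0.0
    for (L, A, T) in data:
        if A != a or T < t:                # keep subjects at risk with A = a
            continue
        w = 1.0 / (pL[L] if a else 1.0 - pL[L])  # inverse probability weight
        den += w
        num += w * (T == t)
    return num / den

data = simulate()
h_treated = ipw_hazard(data, True, 1)      # marginal hazard under treatment
h_control = ipw_hazard(data, False, 1)     # marginal hazard under control
```

For time-varying treatments the weights become cumulative products over time, which is exactly where the instability that motivates the paper's TMLE arises.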
Earthquake catalogs for the 2017 Central and Eastern U.S. short-term seismic hazard model
Mueller, Charles S.
2017-01-01
The U.S. Geological Survey (USGS) makes long-term seismic hazard forecasts that are used in building codes. The hazard models usually consider only natural seismicity; non-tectonic (man-made) earthquakes are excluded because they are transitory or too small. In the past decade, however, thousands of earthquakes related to underground fluid injection have occurred in the central and eastern U.S. (CEUS), and some have caused damage. In response, the USGS is now also making short-term forecasts that account for the hazard from these induced earthquakes. Seismicity statistics are analyzed to develop recurrence models, accounting for catalog completeness. In the USGS hazard modeling methodology, earthquakes are counted on a map grid, recurrence models are applied to estimate the rates of future earthquakes in each grid cell, and these rates are combined with maximum-magnitude models and ground-motion models to compute the hazard. The USGS published a forecast for the years 2016 and 2017. Here, we document the development of the seismicity catalogs for the 2017 CEUS short-term hazard model. A uniform earthquake catalog is assembled by combining and winnowing pre-existing source catalogs. The initial, final, and supporting earthquake catalogs are made available here.
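The recurrence-model step (counting catalog earthquakes above a completeness magnitude and fitting a Gutenberg-Richter law) can be sketched with the standard Aki maximum-likelihood b-value estimator; this is an assumption about the general methodology, not the USGS's exact implementation:

```python
import math

def gr_b_value(mags, m_min):
    """Aki (1965) maximum-likelihood b-value from a catalog complete above
    m_min. For magnitudes binned by width dm, subtract dm/2 from m_min
    (Utsu correction). Returns (b, number of events used)."""
    m = [x for x in mags if x >= m_min]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - m_min), len(m)

def expected_count(n_above_mmin, b, m_min, m):
    """Expected number of events with magnitude >= m in the same window,
    from the fitted Gutenberg-Richter relation."""
    return n_above_mmin * 10.0 ** (-b * (m - m_min))

# Synthetic check: magnitudes drawn (deterministically) from an exact b = 1 law
mags = [3.0 - math.log10((i + 0.5) / 10000) for i in range(10000)]
b_hat, n_above = gr_b_value(mags, 3.0)
```

In the hazard code these per-cell rates would then be combined with maximum-magnitude and ground-motion models, as the abstract describes.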
Large Scale Debris-flow Hazard Assessment : A Geotechnical Approach and Gis Modelling
NASA Astrophysics Data System (ADS)
Delmonaco, G.; Leoni, G.; Margottini, C.; Puglisi, C.; Spizzichino, D.
A deterministic approach has been developed for large-scale landslide hazard analysis carried out by ENEA, the Italian Agency for New Technologies, Energy and Environment, in the framework of TEMRAP (The European Multi-Hazard Risk Assessment Project), aimed at applying methodologies for the reduction of natural disasters. The territory of Versilia, and in particular the basin of the Vezza river (60 km2), was chosen as the test area of the project. The Vezza river basin was affected by over 250 shallow landslides (debris/earth flows), mainly involving the metamorphic geological formations outcropping in the area, triggered by the hydro-meteorological event of 19 June 1996. Many approaches and methodologies have been proposed in the scientific literature for assessing landslide hazard and risk, depending essentially on the scope of work, the availability of data, and the scale of representation. In recent decades, landslide hazard and risk analyses have been favoured by the development of GIS techniques, which have made it possible to generalise, synthesise, and model stability conditions at large-scale (>1:10,000) investigation. In this work, the main results derived from the application of a geotechnical model coupled with a hydrological model for debris-flow hazard assessment are reported. The deterministic analysis has been developed through the following steps: 1) elaboration of a landslide inventory map through aerial photo interpretation and direct field survey; 2) generation of a database and digital maps; 3) elaboration of a DTM and a slope angle map; 4) definition of a superficial soil thickness map; 5) litho-technical soil characterisation, through back-analysis on test slopes and laboratory test analysis; 6) inference of the influence of precipitation, for distinct return times, on ponding time and pore pressure generation; 7) implementation of a slope stability model (infinite slope model) and
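The infinite-slope model of step 7 admits a compact closed form. A sketch using the standard factor-of-safety expression (parameter names and values are illustrative, not the study's calibrated ones):

```python
import math

def infinite_slope_fs(c, phi_deg, gamma, gamma_w, z, m, beta_deg):
    """Factor of safety of the infinite-slope model.
    c: effective cohesion (kPa); phi: friction angle (deg); gamma: soil
    unit weight (kN/m^3); gamma_w: unit weight of water (kN/m^3);
    z: failure-plane depth (m); m: saturated fraction of z (0..1);
    beta: slope angle (deg). FS < 1 indicates predicted failure."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    u = m * z * gamma_w * math.cos(beta) ** 2           # pore pressure
    sigma_n = gamma * z * math.cos(beta) ** 2           # normal stress
    tau = gamma * z * math.sin(beta) * math.cos(beta)   # driving shear stress
    return (c + (sigma_n - u) * math.tan(phi)) / tau

# Dry versus fully saturated soil column on a 30-degree slope (hypothetical values)
fs_dry = infinite_slope_fs(0.0, 35.0, 18.0, 9.81, 2.0, 0.0, 30.0)
fs_wet = infinite_slope_fs(0.0, 35.0, 18.0, 9.81, 2.0, 1.0, 30.0)
```

For dry cohesionless soil the expression collapses to FS = tan(phi)/tan(beta), a convenient sanity check; rising pore pressure (step 6) lowers FS toward failure.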
Francq, Bernard G; Cartiaux, Olivier
2016-09-10
Resecting bone tumors requires good cutting accuracy to reduce the occurrence of local recurrence. This issue is considerably reduced with navigated technology. The estimation of extreme proportions is challenging, especially with small or moderate sample sizes. When no success is observed, the commonly used binomial proportion confidence interval is not suitable, while the rule of three provides a simple solution. Unfortunately, these approaches are unable to differentiate between different unobserved events. Different delta methods and bootstrap procedures are compared in univariate and linear mixed models with simulations and real data under an assumption of normality. The delta method on the z-score and the parametric bootstrap provide similar results, but the delta method requires the estimation of the covariance matrix of the estimates. In mixed models, the observed Fisher information matrix with unbounded variance components should be preferred. The parametric bootstrap, easier to apply, outperforms the delta method for larger sample sizes, but it may be time costly. Copyright © 2016 John Wiley & Sons, Ltd.
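The rule of three mentioned above can be checked against the exact (Clopper-Pearson) one-sided bound for the zero-event case; this sketch assumes a simple binomial setting:

```python
def rule_of_three_upper(n):
    """Approximate 95% upper confidence bound on the event probability p
    after n independent trials with zero observed events."""
    return 3.0 / n

def exact_upper_zero_events(n, conf=0.95):
    """Exact (Clopper-Pearson) one-sided upper bound for zero events:
    solve (1 - p)^n = 1 - conf for p."""
    return 1.0 - (1.0 - conf) ** (1.0 / n)

# With n = 100 accurate resections and no failures, both bounds are ~3%
approx = rule_of_three_upper(100)
exact = exact_upper_zero_events(100)
```

The 3/n rule comes from -ln(0.05) ≈ 3, so the two bounds agree closely for moderate n; neither, as the abstract notes, can distinguish between different kinds of unobserved events.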
Farajzadeh, Manuchehr; Egbal, Mahbobeh Nik
2007-08-15
In this study, the MEDALUS model, along with GIS mapping techniques, is used to assess desertification hazard for a province of Iran. After creating a desertification database including 20 parameters, the first step consisted of preparing maps of the four MEDALUS indices: climate, soil, vegetation, and land use. Since these parameters were mostly developed for the Mediterranean region, the next step added other indicators such as groundwater and wind erosion. All of the layers, weighted by the environmental conditions present in the area, were then combined (following the same MEDALUS framework) to prepare a desertification map. The comparison of the two maps based on the original and modified MEDALUS models indicates that the addition of more regionally-specific parameters allows for a more accurate representation of desertification processes across the Iyzad Khast plain. The major factors affecting desertification in the area are climate, wind erosion, poor land-quality management, vegetation degradation, and the salinization of soil and water resources.
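MEDALUS-style quality indices are conventionally combined as geometric means; a sketch under that assumption (the exact weighting applied to the Iyzad Khast plain is not reproduced here, and the added indicators would enter the same way):

```python
def quality_index(scores):
    """Geometric mean of indicator scores (each conventionally 1.0-2.0,
    1 = best, 2 = worst) making up one MEDALUS quality index."""
    prod = 1.0
    for s in scores:
        prod *= s
    return prod ** (1.0 / len(scores))

def esa_index(sqi, cqi, vqi, mqi):
    """Environmentally Sensitive Area index: geometric mean of the soil,
    climate, vegetation, and management quality indices."""
    return (sqi * cqi * vqi * mqi) ** 0.25

# Hypothetical pixel: moderately degraded soil and vegetation, harsh climate
sqi = quality_index([1.2, 1.6, 1.1])   # e.g. texture, depth, slope scores
esai = esa_index(sqi, 1.7, 1.5, 1.3)
```

The geometric mean makes the index sensitive to any single very poor factor, which is the rationale for extending it with regional indicators such as wind erosion and groundwater.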
Modelling of a spread of hazardous substances in a Floreon+ system
NASA Astrophysics Data System (ADS)
Ronovsky, Ales; Brzobohaty, Tomas; Kuchar, Stepan; Vojtek, David
2017-07-01
This paper is focused on a module for automated numerical modelling of the spread of hazardous substances, developed for the Floreon+ system on demand of the Fire Brigade of the Moravian-Silesian Region. The main purpose of the module is to provide more accurate predictions for smog situations, which are a frequent problem in the region. It can be operated by non-scientific users through the Floreon+ client and can be used as a short-term prediction model of the evolution of concentrations of dangerous substances (SO2, PMx) from stationary sources, such as heavy industry factories, local furnaces, or highways, or as a fast prediction of the spread of hazardous substances after a crash involving a mobile source of contamination (transport of dangerous substances) or a leakage at a local chemical factory. The process of automatic gathering of atmospheric data, the connection of the Floreon+ system with the HPC infrastructure necessary for computing such a model, and the model itself are described below.
Web-based Services for Earth Observing and Model Data in National Applications and Hazards
NASA Astrophysics Data System (ADS)
Kafatos, M.; Boybeyi, Z.; Cervone, G.; di, L.; Sun, D.; Yang, C.; Yang, R.
2005-12-01
The ever-growing large volumes of Earth system science data, collected by Earth observing platforms, in situ stations, and as model output data, are increasingly being used by discipline scientists and by wider classes of users. In particular, applications of Earth system science data to environmental and hazard areas, as well as other national applications, require tailored or specialized data, along with web-based tools and infrastructure. The latter are driven by applications and usage drivers which include ease of access, visualization of complex data, ease of producing value-added data, GIS and open source analysis usage, metadata, etc. Here we present different aspects of such web-based services and access, and discuss several applications in the hazards and environmental areas, including earthquake signatures and observations and model runs of hurricanes. Examples and lessons learned from the consortium Mid-Atlantic Geospatial Information Consortium will be presented. We discuss a NASA-funded, open source on-line data analysis system that is being applied to climate studies for the ESIP Federation. Since enhanced, this project and the next-generation Metadata Integrated Data Analysis System allow users not only to identify data but also to generate new data products on-the-fly. The functionalities extend from limited predefined functions to sophisticated functions described by general-purpose GrADS (Grid Analysis and Display System) commands. The Federation system also allows third-party data products to be combined with local data. Software components are available for converting the output from MIDAS (OPeNDAP) into OGC-compatible software. The ongoing Grid efforts at CEOSR and LAITS in the School of Computational Sciences (SCS) include enhancing the functions of Globus to provide support for a geospatial system so the system can share the computing power to handle problems with different peak access times and improve the stability and flexibility of a rapid
NASA Astrophysics Data System (ADS)
Ismail-Zadeh, A.; Sokolov, V. Y.
2013-12-01
Ground shaking due to recent catastrophic earthquakes is estimated to be significantly higher than that predicted by probabilistic seismic hazard analysis (PSHA). One reason is that extreme (large-magnitude and rare) seismic events are in most cases not accounted for in PSHA, due to lack of information and the unknown recurrence times of the extremes. We present a new approach to the assessment of regional seismic hazard, which incorporates observed (recorded and historic) seismicity and modeled extreme events. We apply this approach to PSHA of the Tibet-Himalayan region. The large-magnitude events simulated for several thousand years in models of lithospheric block-and-fault dynamics, consistent with the regional geophysical and geodetic data, are employed together with the observed earthquakes for the Monte-Carlo PSHA. Earthquake scenarios are generated stochastically to sample the magnitude and spatial distribution of seismicity (observed and modeled) as well as the distribution of ground motion for each seismic event. The peak ground acceleration (PGA) values (that is, ground shaking at a site), which are expected to be exceeded at least once in 50 years with a probability of 10%, are mapped and compared to those PGA values observed and predicted earlier. The results show that the PGA values predicted by our assessment fit the observed ground shaking due to the 2008 Wenchuan earthquake much better than those predicted by conventional PSHA. Our approach to seismic hazard assessment provides a better understanding of ground shaking due to possible large-magnitude events and could be useful for risk assessment, earthquake engineering purposes, and emergency planning.
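The Monte-Carlo PSHA loop described above (sample magnitudes and distances, apply a ground-motion relation with scatter, convert exceedance rates to 50-year probabilities) can be sketched as follows; the ground-motion relation, rates, and distance range are toy assumptions, not the regional models used in the study:

```python
import math
import random

random.seed(42)

def sample_magnitude(m_min, m_max, b):
    """Draw from a truncated Gutenberg-Richter distribution (inverse CDF)."""
    u = random.random()
    c = 1.0 - 10.0 ** (-b * (m_max - m_min))
    return m_min - math.log10(1.0 - u * c) / b

def pga_g(m, r_km, sigma_ln=0.6):
    """Toy ground-motion model (NOT a published GMPE): log-linear in
    magnitude and distance with lognormal scatter, returning PGA in g."""
    ln_pga = -4.0 + 1.0 * m - 1.3 * math.log(r_km + 10.0)
    return math.exp(ln_pga + random.gauss(0.0, sigma_ln))

def exceedance_prob(pga_threshold, annual_rate, years=50, n=50000):
    """Monte-Carlo probability that PGA exceeds the threshold at least once
    in `years`, for events occurring at the given annual rate."""
    hits = 0
    for _ in range(n):
        m = sample_magnitude(5.0, 8.0, b=1.0)
        r = random.uniform(10.0, 100.0)        # site-to-source distance
        if pga_g(m, r) > pga_threshold:
            hits += 1
    rate_exceed = annual_rate * hits / n       # annual exceedance rate
    return 1.0 - math.exp(-rate_exceed * years)

p01 = exceedance_prob(0.10, annual_rate=0.5)   # hazard curve point at 0.1 g
p03 = exceedance_prob(0.30, annual_rate=0.5)   # and at 0.3 g
```

The study's contribution is what feeds this loop: simulated extreme events from block-and-fault dynamics augment the observed catalog, raising the sampled rates of large magnitudes.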
Testing seismic hazard models with Be-10 exposure ages for precariously balanced rocks
NASA Astrophysics Data System (ADS)
Rood, D. H.; Anooshehpoor, R.; Balco, G.; Brune, J.; Brune, R.; Ludwig, L. Grant; Kendrick, K.; Purvance, M.; Saleeby, I.
2012-04-01
Currently, the only empirical tool available to test maximum earthquake ground motions spanning timescales of 10 ky-1 My is the use of fragile geologic features, including precariously balanced rocks (PBRs). The ages of PBRs together with their areal distribution and mechanical stability ("fragility") constrain probabilistic seismic hazard analysis (PSHA) over long timescales; pertinent applications include the USGS National Seismic Hazard Maps (NSHM) and tests for ground motion models (e.g., Cybershake). Until recently, age constraints for PBRs were limited to varnish microlamination (VML) dating techniques and sparse cosmogenic nuclide data; however, VML methods yield minimum limiting ages for individual rock surfaces, and the interpretations of cosmogenic nuclide data were ambiguous because they did not account for the exhumation history of the PBRs or the complex shielding of cosmic rays. We have recently published a robust method for the exposure dating of PBRs combining Be-10 profiles, a numerical model, and a three-dimensional model for each PBR constructed using photogrammetry (Balco et al., 2011, Quaternary Geochronology). Here, we use this method to calculate new exposure ages and fragilities for 6 PBRs in southern California (USA) near the San Andreas, San Jacinto, and Elsinore faults at the Lovejoy Buttes, Round Top, Pacifico, Beaumont South, Perris, and Benton Road sites (in addition to the recently published age of 18.7 +/- 2.8 ka for a PBR at the Grass Valley site). We combine our ages and fragilities for each PBR, and use these data to test the USGS 2008 NSHM PGA with 2% in 50 year probability, USGS 2008 PSHA deaggregations, and basic hazard curves from USGS 2002 NSHM data.
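The exposure ages above come from a Be-10 production-decay balance; ignoring erosion, shielding, and exhumation (which the published method does treat explicitly), the zero-order age equation can be sketched as:

```python
import math

# Be-10 decay constant (1/yr) from a half-life of ~1.387 Myr
LAMBDA_BE10 = math.log(2) / 1.387e6

def exposure_age(N, P, lam=LAMBDA_BE10):
    """Apparent exposure age (yr) from a Be-10 concentration N (atoms/g)
    and a local production rate P (atoms/g/yr), assuming no erosion and no
    inheritance, by inverting N = (P/lam) * (1 - exp(-lam * t))."""
    x = 1.0 - N * lam / P
    if x <= 0.0:
        raise ValueError("concentration at or above secular equilibrium")
    return -math.log(x) / lam
```

For ages much shorter than the half-life this reduces to t ≈ N/P; the full PBR method instead fits depth profiles and a 3-D shielding model, which is why simple surface ages alone were previously ambiguous.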
Testing seismic hazard models with Be-10 exposure ages for precariously balanced rocks
NASA Astrophysics Data System (ADS)
Rood, D. H.; Anooshehpoor, R.; Balco, G.; Biasi, G. P.; Brune, J. N.; Brune, R.; Grant Ludwig, L.; Kendrick, K. J.; Purvance, M.; Saleeby, I.
2012-12-01
Currently, the only empirical tool available to test maximum earthquake ground motions spanning timescales of 10 ky-1 My is the use of fragile geologic features, including precariously balanced rocks (PBRs). The ages of PBRs together with their areal distribution and mechanical stability ("fragility") constrain probabilistic seismic hazard analysis (PSHA) over long timescales; pertinent applications include the USGS National Seismic Hazard Maps (NSHM) and tests for ground motion models (e.g., Cybershake). Until recently, age constraints for PBRs were limited to varnish microlamination (VML) dating techniques and sparse cosmogenic nuclide data; however, VML methods yield minimum limiting ages for individual rock surfaces, and the interpretations of cosmogenic nuclide data were ambiguous because they did not account for the exhumation history of the PBRs or the complex shielding of cosmic rays. We have recently published a robust method for the exposure dating of PBRs combining Be-10 profiles, a numerical model, and a three-dimensional shape model for each PBR constructed using photogrammetry (Balco et al., 2011, Quaternary Geochronology). Here, we use our published method to calculate new exposure ages for PBRs at 6 sites in southern California near the San Andreas, San Jacinto, and Elsinore faults, including: Lovejoy Buttes (9 +/- 1 ka), Round Top (35 +/- 1 ka), Pacifico (19 +/- 1 ka, but with a poor fit to data), Beaumont South (17 +/- 2 ka), Perris (24 +/- 2 ka), and Benton Road (40 +/- 1 ka), in addition to the recently published age of 18.5 +/- 2.0 ka for a PBR at the Grass Valley site. We combine our ages and fragilities for each PBR, and use these data to test the USGS 2008 NSHM PGA with 2% in 50 year probability, USGS 2008 PSHA deaggregations, and basic hazard curves from USGS 2002 NSHM data.
Predicting the Survival Time for Bladder Cancer Using an Additive Hazards Model in Microarray Data
TAPAK, Leili; MAHJUB, Hossein; SADEGHIFAR, Majid; SAIDIJAM, Massoud; POOROLAJAL, Jalal
2016-01-01
Background: One substantial part of microarray studies is to predict patients’ survival based on their gene expression profile. Variable selection techniques are powerful tools to handle high dimensionality in the analysis of microarray data. However, these techniques have not been investigated in the competing risks setting. This study aimed to investigate the performance of four sparse variable selection methods in estimating the survival time. Methods: The data included 1381 gene expression measurements and clinical information from 301 patients with bladder cancer operated in the years 1987 to 2000 in hospitals in Denmark, Sweden, Spain, France, and England. Four methods, the least absolute shrinkage and selection operator, smoothly clipped absolute deviation, the smooth integration of counting and absolute deviation, and elastic net, were utilized for simultaneous variable selection and estimation under an additive hazards model. The criteria of area under the ROC curve, Brier score, and c-index were used to compare the methods. Results: The median follow-up time for all patients was 47 months. The elastic net approach was indicated to outperform the other methods. The elastic net had the lowest integrated Brier score (0.137±0.07) and the greatest median over-time AUC and c-index (0.803±0.06 and 0.779±0.13, respectively). Five out of 19 genes selected by the elastic net were significant (P<0.05) under an additive hazards model. It was indicated that the expression of RTN4, SON, IGF1R, and CDC20 decreases the survival time, while the expression of SMARCAD1 increases it. Conclusion: The elastic net had higher capability than the other methods for the prediction of survival time in patients with bladder cancer in the presence of competing risks, based on the additive hazards model. PMID:27114989
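One of the comparison criteria above, the c-index, can be computed directly. A sketch of Harrell's concordance index for right-censored data (toy inputs, not the bladder-cancer dataset):

```python
def concordance_index(times, events, risk_scores):
    """Harrell's c-index: the fraction of usable pairs in which the subject
    with the shorter observed survival time has the higher risk score.
    events[i] is 1 if subject i's event was observed, 0 if censored."""
    concordant = tied = usable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # a pair is usable only if i's event was observed before j's time
            if events[i] == 1 and times[i] < times[j]:
                usable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    tied += 1
    return (concordant + 0.5 * tied) / usable
```

A value of 1.0 means the model ranks every usable pair correctly, 0.5 is no better than chance; under an additive hazards model the fitted linear predictor serves as the risk score.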
TRENT2D WG: a smart web infrastructure for debris-flow modelling and hazard assessment
NASA Astrophysics Data System (ADS)
Zorzi, Nadia; Rosatti, Giorgio; Zugliani, Daniel; Rizzi, Alessandro; Piffer, Stefano
2016-04-01
Mountain regions are naturally exposed to geomorphic flows, which involve large amounts of sediments and induce significant morphological modifications. The physical complexity of this class of phenomena represents a challenging issue for modelling, leading to elaborate theoretical frameworks and sophisticated numerical techniques. In general, geomorphic-flow models have proved to be valid tools in hazard assessment and management. However, model complexity seems to represent one of the main obstacles to the diffusion of advanced modelling tools among practitioners and stakeholders, although the EU Flood Directive (2007/60/EC) requires risk management and assessment to be based on "best practices and best available technologies". Furthermore, several cutting-edge models are not particularly user-friendly, and multiple stand-alone programs are needed to pre- and post-process modelling data. For all these reasons, users often resort to quicker and rougher approaches, leading possibly to unreliable results. Therefore, some effort seems to be necessary to overcome these drawbacks, with the purpose of supporting and encouraging a widespread diffusion of the most reliable, although sophisticated, modelling tools. With this aim, this work presents TRENT2D WG, a new smart modelling solution for the state-of-the-art model TRENT2D (Armanini et al., 2009; Rosatti and Begnudelli, 2013), which simulates debris flows and hyperconcentrated flows adopting a two-phase description over a mobile bed. TRENT2D WG is a web infrastructure joining the advantages offered by the SaaS (Software as a Service) software-delivery model and by WebGIS technology, hosting a complete and user-friendly working environment for modelling. In order to develop TRENT2D WG, the model TRENT2D was converted into a service and exposed on a cloud server, transferring computational burdens from the user hardware to a high-performing server and reducing computational time. Then, the system was equipped with an
Proportional counter radiation camera
Borkowski, C.J.; Kopp, M.K.
1974-01-15
A gas-filled proportional counter camera that images photon emitting sources is described. A two-dimensional, position-sensitive proportional multiwire counter is provided as the detector. The counter consists of a high-voltage anode screen sandwiched between orthogonally disposed planar arrays of multiple parallel-strung, resistively coupled cathode wires. Two terminals from each of the cathode arrays are connected to separate timing circuitry to obtain separate X and Y coordinate signal values from pulse shape measurements to define the position of an event within the counter arrays, which may be recorded by various means for data display. The counter is further provided with a linear drift field which effectively enlarges the active gas volume of the counter and constrains the recoil electrons produced from ionizing radiation entering the counter to drift perpendicularly toward the planar detection arrays. A collimator is interposed between a subject to be imaged and the counter to transmit only the radiation from the subject which has a perpendicular trajectory with respect to the planar cathode arrays of the detector. (Official Gazette)
Marginal regression approach for additive hazards models with clustered current status data.
Su, Pei-Fang; Chi, Yunchan
2014-01-15
Current status data arise naturally from tumorigenicity experiments, epidemiology studies, biomedicine, econometrics, and demography and sociology studies. Moreover, clustered current status data may occur with animals from the same litter in tumorigenicity experiments or with subjects from the same family in epidemiology studies. Because the only information extracted from current status data is whether the survival times are before or after the monitoring or censoring times, the nonparametric maximum likelihood estimator of the survival function converges at a rate of n^(1/3) to a complicated limiting distribution. Hence, semiparametric regression models such as the additive hazards model have been extended for independent current status data to derive test statistics, whose distributions converge at a rate of n^(1/2), for testing the regression parameters. However, a straightforward application of these statistical methods to clustered current status data is not appropriate because intracluster correlation needs to be taken into account. Therefore, this paper proposes two estimating functions for estimating the parameters in the additive hazards model for clustered current status data. The comparative results from simulation studies are presented, and the application of the proposed estimating functions to one real data set is illustrated.
Atmospheric Electrical Modeling in Support of the NASA F-106 Storm Hazards Project
NASA Technical Reports Server (NTRS)
Helsdon, John H., Jr.
1988-01-01
A recently developed storm electrification model (SEM) is used to investigate the operating environment of the F-106 airplane during the NASA Storm Hazards Project. The model is 2-D, time dependent, and uses a bulk-water microphysical parameterization scheme. Electric charges and fields are included, and the model is fully coupled dynamically, microphysically, and electrically. One flight showed that a high electric field developed at the aircraft's operating altitude (28 kft) and that a strong electric field would also be found below 20 kft; however, this low-altitude, high-field region was associated with the presence of small hail, posing a hazard to the aircraft. An operational procedure to increase the frequency of low-altitude lightning strikes was suggested. To further the understanding of lightning within the cloud environment, a parameterization of the lightning process was included in the SEM. It accounted for the initiation, propagation, termination, and charge redistribution associated with an intracloud discharge. Finally, a randomized lightning propagation scheme was developed, and the effects of cloud particles on the initiation of lightning were investigated.
Measurements and models for hazardous chemical and mixed wastes. 1998 annual progress report
Holcomb, C.; Watts, L.; Outcalt, S.L.; Louie, B.; Mullins, M.E.; Rogers, T.N.
1998-06-01
Aqueous waste of various chemical compositions constitutes a significant fraction of the total waste produced by industry in the US. A large quantity of the waste generated by the US chemical process industry is waste water. In addition, the majority of the waste inventory at DoE sites previously used for nuclear weapons production is aqueous waste. Large quantities of additional aqueous waste are expected to be generated during the clean-up of those sites. In order to effectively treat, safely handle, and properly dispose of these wastes, accurate and comprehensive knowledge of basic thermophysical property information is paramount. This knowledge will lead to large savings by aiding in the design and optimization of treatment and disposal processes. The main objectives of this project are to: (1) develop and validate models that accurately predict the phase equilibria and thermodynamic properties of hazardous aqueous systems necessary for the safe handling and successful design of separation and treatment processes for hazardous chemical and mixed wastes; and (2) accurately measure the phase equilibria and thermodynamic properties of a representative system (water + acetone + isopropyl alcohol + sodium nitrate) over the applicable ranges of temperature, pressure, and composition to provide the pure-component, binary, ternary, and quaternary experimental data required for model development. As of May 1998, nine months into the first year of a three-year project, the authors have made significant progress in database development, have begun testing the models, and have been performance-testing the apparatus on the pure components.
Dietterich, Hannah; Lev, Einat; Chen, Jiangzhi; Richardson, Jacob A.; Cashman, Katharine V.
2017-01-01
Numerical simulations of lava flow emplacement are valuable for assessing lava flow hazards, forecasting active flows, designing flow mitigation measures, interpreting past eruptions, and understanding the controls on lava flow behavior. Existing lava flow models vary in simplifying assumptions, physics, dimensionality, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess existing models and guide the development of new codes, we conduct a benchmarking study of computational fluid dynamics (CFD) models for lava flow emplacement, including VolcFlow, OpenFOAM, FLOW-3D, COMSOL, and MOLASSES. We model viscous, cooling, and solidifying flows over horizontal planes, sloping surfaces, and into topographic obstacles. We compare model results to physical observations made during well-controlled analogue and molten basalt experiments, and to analytical theory when available. Overall, the models accurately simulate viscous flow with some variability in flow thickness where flows intersect obstacles. OpenFOAM, COMSOL, and FLOW-3D can each reproduce experimental measurements of cooling viscous flows, and OpenFOAM and FLOW-3D simulations with temperature-dependent rheology match results from molten basalt experiments. We assess the goodness-of-fit of the simulation results and the computational cost. Our results guide the selection of numerical simulation codes for different applications, including inferring emplacement conditions of past lava flows, modeling the temporal evolution of ongoing flows during eruption, and probabilistic assessment of lava flow hazard prior to eruption. Finally, we outline potential experiments and desired key observational data from future flows that would extend existing benchmarking data sets.
Schunior, A.; Zengel, A.E.; Mullenix, P.J.; Tarbell, N.J.; Howes, A.; Tassinari, M.S.
1990-10-15
Many long term survivors of childhood acute lymphoblastic leukemia have short stature, as well as craniofacial and dental abnormalities, as side effects of central nervous system prophylactic therapy. An animal model is presented to assess these adverse effects on growth. Cranial irradiation (1000 cGy) with and without prednisolone (18 mg/kg i.p.) and methotrexate (2 mg/kg i.p.) was administered to 17- and 18-day-old Sprague-Dawley male and female rats. Animals were weighed 3 times/week. Final body weight and body length were measured at 150 days of age. Femur length and craniofacial dimensions were measured directly from the bones, using calipers. For all exposed groups there was a permanent suppression of weight gain with no catch-up growth or normal adolescent growth spurt. Body length was reduced for all treated groups, as were the ratios of body weight to body length and cranial length to body length. Animals subjected to cranial irradiation exhibited microcephaly, whereas those who received a combination of radiation and chemotherapy demonstrated altered craniofacial proportions in addition to microcephaly. Changes in growth patterns and skeletal proportions exhibited sexually dimorphic characteristics. The results indicate that cranial irradiation is a major factor in the growth failure in exposed rats, but chemotherapeutic agents contribute significantly to the outcome of growth and craniofacial dimensions.
The microstrip proportional counter
NASA Technical Reports Server (NTRS)
Ramsey, B. D.
1992-01-01
Microstrip detectors in which the usual discrete anode and cathode wires are replaced by conducting strips on an insulating or partially insulating substrate are fabricated using integrated-circuit photolithographic techniques, and hence offer very high spatial accuracy and uniformity, together with the capability of producing extremely fine electrode structures. Microstrip proportional counters have been reported with energy resolutions of better than 11 percent FWHM at 5.9 keV. They have been fabricated with anode strips down to 2 microns and on a variety of substrate materials, including thin films which can be molded to different shapes. This review examines the development of the microstrip detector, with emphasis on the qualities which make this detector particularly interesting for use in astronomy.
Gated strip proportional detector
Morris, Christopher L.; Idzorek, George C.; Atencio, Leroy G.
1987-01-01
A gated strip proportional detector includes a gas tight chamber which encloses a solid ground plane, a wire anode plane, a wire gating plane, and a multiconductor cathode plane. The anode plane amplifies the amount of charge deposited in the chamber by a factor of up to 10^6. The gating plane allows only charge within a narrow strip to reach the cathode. The cathode plane collects the charge allowed to pass through the gating plane on a set of conductors perpendicular to the open-gated region. By scanning the open-gated region across the chamber and reading out the charge collected on the cathode conductors after a suitable integration time for each location of the gate, a two-dimensional image of the intensity of the ionizing radiation incident on the detector can be made.
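As a rough illustration of the scanning readout described above (hypothetical function names, not the patent's electronics), the two-dimensional image is assembled one gate position at a time:

```python
# Sketch: scan a gated strip across the chamber; at each gate position, read
# the integrated charge on every cathode conductor; rows stacked over gate
# positions form the 2D intensity image.

def acquire_image(read_cathode, n_gate_positions, n_conductors):
    """read_cathode(gate) -> list of integrated charges, one per conductor."""
    image = []
    for gate in range(n_gate_positions):   # open one narrow strip at a time
        row = read_cathode(gate)           # charge collected per conductor
        assert len(row) == n_conductors
        image.append(row)                  # gate positions x conductors
    return image

# Toy detector response: a point source over gate 2, conductor 1.
def fake_read(gate):
    return [100 if (gate, c) == (2, 1) else 1 for c in range(4)]

img = acquire_image(fake_read, n_gate_positions=5, n_conductors=4)
```

The gate index plays the role of one image axis and the conductor index the other, which is why the gating plane and cathode conductors are arranged perpendicular to each other.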
Gated strip proportional detector
Morris, C.L.; Idzorek, G.C.; Atencio, L.G.
1985-02-19
A gated strip proportional detector includes a gas tight chamber which encloses a solid ground plane, a wire anode plane, a wire gating plane, and a multiconductor cathode plane. The anode plane amplifies the amount of charge deposited in the chamber by a factor of up to 10^6. The gating plane allows only charge within a narrow strip to reach the cathode. The cathode plane collects the charge allowed to pass through the gating plane on a set of conductors perpendicular to the open-gated region. By scanning the open-gated region across the chamber and reading out the charge collected on the cathode conductors after a suitable integration time for each location of the gate, a two-dimensional image of the intensity of the ionizing radiation incident on the detector can be made.
NASA Astrophysics Data System (ADS)
Styron, Richard; Pagani, Marco; Garcia, Julio
2017-04-01
The region encompassing Central America and the Caribbean is tectonically complex, defined by the Caribbean plate's interactions with the North American, South American and Cocos plates. Though active deformation over much of the region has received at least cursory investigation over the past 50 years, the area is chronically understudied and lacks a modern, synoptic characterization. Regardless, the level of risk in the region - as dramatically demonstrated by the 2010 Haiti earthquake - remains high because of high-vulnerability buildings and dense urban areas home to over 100 million people, who are concentrated near plate boundaries and other major structures. As part of a broader program to study seismic hazard worldwide, the Global Earthquake Model Foundation is currently working to quantify seismic hazard in the region. To this end, we are compiling a database of active faults throughout the region that will be integrated into hazard models, as was recently done for South America. Our initial compilation hosts about 180 fault traces in the region. The faults show a wide range of characteristics, reflecting the diverse styles of plate-boundary and plate-margin deformation observed. Regional deformation ranges from highly localized faulting along well-defined strike-slip faults to broad zones of distributed normal or thrust faulting, and from readily observable yet slowly slipping structures to inferred faults with geodetically measured slip rates >10 mm/yr but essentially no geomorphic expression. Furthermore, primary structures such as the Motagua-Polochic Fault Zone (the strike-slip plate boundary between the North American and Caribbean plates in Guatemala) display strong along-strike slip rate gradients, and many other structures are undersea for most or all of their length. A thorough assessment of seismic hazard in the region will require the integration of a range of datasets and techniques and a comprehensive characterization of epistemic uncertainties driving
Pal, Parimal; Das, Pallabi; Chakrabortty, Sankha; Thakura, Ritwik
2016-11-01
Dynamic modelling and simulation of an integrated nanofiltration-forward osmosis system was carried out, along with an economic evaluation, to pave the way for scale-up of such a system for treating hazardous pharmaceutical wastes. Operated in a closed loop, the system not only protects surface water from the onslaught of hazardous industrial wastewater but also saves on the cost of fresh water by making wastewater recyclable at an affordable price. The success of the dynamic model in capturing the relevant transport phenomena is well reflected in a high overall correlation coefficient (R² > 0.98), low relative error (<0.1) and high Willmott d-index (>0.95). The system could remove more than 97.5% chemical oxygen demand (COD) from real pharmaceutical wastewater with an initial COD value as high as 3500 mg/L, while ensuring operation of the forward osmosis loop at a reasonably high flux of 56-58 L per square meter per hour.
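For reference, the goodness-of-fit measures cited in the abstract can be computed as follows; this is a generic sketch with made-up observed/predicted values, not the paper's data.

```python
# Coefficient of determination (R^2) and Willmott's index of agreement (d),
# two standard measures of how well simulated values track observations.

def r_squared(obs, pred):
    m = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - m) ** 2 for o in obs)
    return 1 - ss_res / ss_tot

def willmott_d(obs, pred):
    m = sum(obs) / len(obs)                      # mean of the observations
    num = sum((o - p) ** 2 for o, p in zip(obs, pred))
    den = sum((abs(p - m) + abs(o - m)) ** 2 for o, p in zip(obs, pred))
    return 1 - num / den                         # 1 = perfect agreement

# Invented COD values (mg/L): observed vs. model-predicted.
obs  = [3400.0, 2100.0, 900.0, 250.0, 90.0]
pred = [3380.0, 2150.0, 870.0, 260.0, 95.0]
```

Both indices approach 1 for a well-calibrated model, which is the sense in which the abstract's thresholds (R² > 0.98, d > 0.95) indicate success.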
NASA Astrophysics Data System (ADS)
Carreau, J.; Naveau, P.; Neppel, L.
2017-05-01
The French Mediterranean is subject to intense precipitation events occurring mostly in autumn. These can potentially cause flash floods, the main natural danger in the area. The distribution of these events follows specific spatial patterns, i.e., some sites are more likely to be affected than others. The peaks-over-threshold approach consists in modeling extremes, such as heavy precipitation, by the generalized Pareto (GP) distribution. The shape parameter of the GP controls the probability of extreme events and can be related to the hazard level of a given site. When interpolating across a region, the shape parameter should reproduce the observed spatial patterns of the probability of heavy precipitation. However, the shape parameter estimators have high uncertainty which might hide the underlying spatial variability. As a compromise, we choose to let the shape parameter vary in a moderate fashion. More precisely, we assume that the region of interest can be partitioned into subregions with constant hazard level. We formalize the model as a conditional mixture of GP distributions. We develop a two-step inference strategy based on probability weighted moments and put forward a cross-validation procedure to select the number of subregions. A synthetic data study reveals that the inference strategy is consistent and not very sensitive to the selected number of subregions. An application on daily precipitation data from the French Mediterranean shows that the conditional mixture of GPs outperforms two interpolation approaches (with constant or smoothly varying shape parameter).
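A minimal sketch of the probability-weighted-moments (PWM) step for the GP distribution mentioned above (Hosking & Wallis parameterization, in which shape κ = 0 reduces to the exponential; the excesses below are toy values, not the French data):

```python
# PWM estimation for the generalized Pareto distribution fitted to
# threshold excesses, the core of the peaks-over-threshold approach.

def gpd_pwm(excesses):
    """Return PWM estimates (kappa, alpha): shape and scale in the
    Hosking & Wallis parameterization (note scipy's shape c = -kappa)."""
    x = sorted(excesses)                 # ascending order statistics
    n = len(x)
    a0 = sum(x) / n                      # a_s = E[X (1-F)^s]; sample a_0
    a1 = sum(xi * (n - i) / (n - 1)      # sample a_1 via plotting positions
             for i, xi in enumerate(x, start=1)) / n
    kappa = a0 / (a0 - 2 * a1) - 2
    alpha = 2 * a0 * a1 / (a0 - 2 * a1)
    return kappa, alpha

# Daily precipitation amounts (mm) above a chosen threshold (invented data).
excesses = [2.1, 5.3, 0.7, 12.4, 3.3, 8.9, 1.5, 22.0, 4.2, 6.6]
kappa, alpha = gpd_pwm(excesses)
```

In the mixture model of the abstract, each subregion would carry its own κ, so that spatial variation in hazard is expressed through a small number of constant-shape zones rather than a noisy, pointwise-estimated shape field.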
A "mental models" approach to the communication of subsurface hydrology and hazards
NASA Astrophysics Data System (ADS)
Gibson, Hazel; Stewart, Iain S.; Pahl, Sabine; Stokes, Alison
2016-05-01
Communicating information about geological and hydrological hazards relies on appropriately worded communications targeted at the needs of the audience. But what are these needs, and how does the geoscientist discern them? This paper adopts a psychological "mental models" approach to assess the public perception of the geological subsurface, presenting the results of attitudinal studies and surveys in three communities in the south-west of England. The findings reveal important preconceptions and misconceptions regarding the impact of hydrological systems and hazards on the geological subsurface, notably in terms of the persistent conceptualisation of underground rivers and the inferred relations between flooding and human activity. The study demonstrates how such mental models can provide geoscientists with empirical, detailed and generalised data of perceptions surrounding an issue, as well reveal unexpected outliers in perception that they may not have considered relevant, but which nevertheless may locally influence communication. Using this approach, geoscientists can develop information messages that more directly engage local concerns and create open engagement pathways based on dialogue, which in turn allow both geoscience "experts" and local "non-experts" to come together and understand each other more effectively.
Perspectives of widely scalable exposure models for multi-hazard global risk assessment
NASA Astrophysics Data System (ADS)
Pittore, Massimiliano; Haas, Michael; Wieland, Marc
2017-04-01
Less than 5% of Earth's surface is urbanized, and the planet currently hosts around 7.5 billion people, with these figures constantly changing as increasingly rapid urbanization takes place. A significant percentage of this population, often in economically developing countries, is exposed to different natural hazards, which further raises the expected economic and social consequences. Global initiatives such as GAR 15 advocate a wide-scale, possibly global, perspective on the assessment of risk arising from natural hazards, as a way to increase the risk awareness of decision-makers and stakeholders, and to better harmonize large-scale prevention and mitigation actions. Realizing, and even more importantly maintaining, a widely scalable exposure model suited to the assessment of different natural risks would allow large-scale quantitative risk and loss assessment in a more efficient and reliable way. Considering its complexity and extent, such a task is undoubtedly challenging, spanning multiple disciplines and operational contexts. On the other hand, with a careful design and an efficient, scalable implementation, such an endeavour would be well within reach and would contribute to significantly improving our understanding of the mechanisms lying behind what we call natural catastrophes. In this contribution we review existing relevant applications, discuss how to tackle the most critical issues, and outline a road map for the implementation of globally scoped exposure models.
Combining Machine Learning and Mesoscale Modeling for Atmospheric Releases Hazard Assessment
NASA Astrophysics Data System (ADS)
Cervone, G.; Franzese, P.; Ezber, Y.; Boybeyi, Z.
2007-12-01
In applications such as homeland security and hazard response, it is necessary to know in real time which areas are most at risk from a potentially harmful atmospheric pollutant. Using high-resolution remote sensing measurements and atmospheric mesoscale numerical models, it is possible to detect and study the transport and dispersion (T&D) of particles with great accuracy, and to determine the ground concentrations that might pose a threat to people and property. Satellite observations from different sensors must be fused together to compensate for different spatial, temporal and spectral resolutions and data availability. Such observations are used to initialize and validate atmospheric mesoscale models, which can provide accurate estimates of ground concentrations. These numerical models are, however, usually slow due to the complex nature of the computations, and do not provide real-time answers. We define probability maps of risk by running several atmospheric mesoscale and T&D simulations spanning the climatological input conditions of an entire year, observed using high-resolution remote sensing instruments. Such maps provide an immediate assessment of the risk areas associated with a given source location. If a release does occur, the computed risk maps can be used for first assessment and rapid response. We analyze the output of the mesoscale model runs using machine learning algorithms to find characteristic patterns that relate potential risk areas to atmospheric parameters observable with remote sensing instruments and ground measurements. Therefore, when a release occurs, it is possible to give a quick hazard assessment without running a time-consuming model, by comparing the current atmospheric conditions with those associated with each identified risk area. The offline learning provides knowledge that can later be used to protect people and property.
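The rapid-response step described above amounts to matching current conditions against the precomputed library of simulated cases; a toy nearest-neighbor sketch (feature vectors and map identifiers are invented):

```python
# Match current atmospheric conditions to the closest precomputed
# climatological case and return its stored risk map, instead of running
# the (slow) mesoscale model at release time.

def nearest_case(current, library):
    """library: list of (conditions_vector, risk_map_id); Euclidean match."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(library, key=lambda case: dist2(case[0], current))[1]

# Toy features: (wind speed m/s, wind direction deg/100, stability class).
library = [
    ((2.0, 1.8, 4.0), "risk_map_A"),
    ((8.0, 2.7, 2.0), "risk_map_B"),
    ((5.0, 0.4, 3.0), "risk_map_C"),
]
best = nearest_case((7.5, 2.5, 2.0), library)
```

In practice the offline learning step would replace this raw distance with patterns learned from the model runs, but the lookup structure is the same: observed conditions in, precomputed risk map out.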
NASA Astrophysics Data System (ADS)
Pradhan, Biswajeet; Lee, Saro; Shattri, Mansor
This paper deals with landslide hazard analysis and cross-application using Geographic Information System (GIS) and remote sensing data for Cameron Highland, Penang Island and Selangor in Malaysia. The aim of this study was to cross-apply and verify a spatial probabilistic model for landslide hazard analysis. Landslide locations were identified in the study areas from interpretation of aerial photographs and field surveys. Topographical/geological data and satellite images were collected and processed using GIS and image-processing tools. Ten landslide-inducing parameters were considered for the landslide hazard analysis: topographic slope, aspect, curvature and distance from drainage, all derived from the topographic database; geology and distance from lineament, derived from the geologic database; land use from Landsat satellite images; soil from the soil database; precipitation amount, derived from the rainfall database; and the vegetation index value from SPOT satellite images. These factors were analyzed using an artificial neural network model to generate the landslide hazard map. Each factor's weight was determined by the back-propagation training method, the landslide hazard indices were calculated using the trained weights, and the landslide hazard map was generated using GIS tools. Landslide hazard maps were drawn for the three areas using the artificial neural network model trained not only on the data for each area but also using the parameter weights calculated from each of the other two areas (nine maps in all), as a cross-check of the validity of the method. For verification, the results of the analyses were compared, in each study area, with actual landslide locations. The verification showed sufficient agreement between the predicted hazard maps and the existing data on landslide areas.
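Once the factor weights are trained, the final mapping step reduces to a weighted combination of normalized factor layers at each grid cell; a schematic sketch with invented weights and layers (the study derived its weights by back-propagation):

```python
# Weighted-sum hazard index over gridded factor layers: each cell's index
# is the sum of trained weight times normalized factor value.

def hazard_index_map(factor_grids, weights):
    """factor_grids: dict name -> 2D list; returns a 2D hazard index grid."""
    names = list(weights)
    rows = len(factor_grids[names[0]])
    cols = len(factor_grids[names[0]][0])
    return [[sum(weights[k] * factor_grids[k][r][c] for k in names)
             for c in range(cols)] for r in range(rows)]

# Two toy factor layers, normalized to [0, 1].
factors = {
    "slope":    [[0.9, 0.2], [0.4, 0.1]],
    "rainfall": [[0.8, 0.5], [0.3, 0.2]],
}
weights = {"slope": 0.7, "rainfall": 0.3}   # stand-ins for trained weights
index = hazard_index_map(factors, weights)
```

Cross-application as described in the abstract then means reusing the `weights` trained in one area on the `factors` of another and checking the resulting map against observed landslide locations.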
A computationally efficient 2D hydraulic approach for global flood hazard modeling
NASA Astrophysics Data System (ADS)
Begnudelli, L.; Kaheil, Y.; Sanders, B. F.
2014-12-01
We present a physically-based flood hazard model that incorporates two main components: a hydrologic model and a hydraulic model. For hydrology we use TOPNET, a more comprehensive version of the original TOPMODEL. To simulate flood propagation, we use a 2D Godunov-type finite-volume shallow water model. Physically-based global flood hazard simulation poses enormous computational challenges stemming from the increasingly fine resolution of available topographic data, which represents the key input. Parallel computing helps to distribute the computational cost, but the computationally intensive hydraulic model must be made far faster and more agile for global-scale feasibility. Here we present a novel technique for hydraulic modeling whereby the computational grid is much coarser (e.g., 5-50 times) than the available topographic data, but the coarse grid retains the storage and conveyance (cross-sectional area) of the fine-resolution data. This allows the 2D hydraulic model to be run on extremely large domains (e.g., thousands of km2) with a single computational processor, and opens the door to global coverage with parallel computing. The model also downscales the coarse-grid results onto the high-resolution topographic data to produce fine-scale predictions of flood depths and velocities. The model achieves computational speeds typical of very coarse grids while achieving an accuracy expected of a much finer resolution. In addition, the model has potential for assimilation of remotely sensed water elevations, to define boundary conditions based on water levels or river discharges, and to improve model results. The model is applied to two river basins: the Susquehanna River in Pennsylvania, and the Ogeechee River in Georgia. The two rivers represent different scales and span a wide range of topographic characteristics. Comparing spatial resolutions ranging from 30 m to 500 m in both river basins, the new technique reduced simulation runtime by at least 25-fold.
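The key idea of retaining fine-resolution storage on a coarse grid can be sketched as a stage-storage relation built from the fine DEM cells inside each coarse cell (illustrative numbers, not the paper's implementation):

```python
# One coarse cell keeps the storage of the fine DEM it covers by summing
# water volume over the fine elevations, instead of using a single
# averaged elevation for the whole coarse cell.

def stage_storage(fine_elevations, cell_area, stage):
    """Water volume stored in one coarse cell at a given water level."""
    return sum(max(0.0, stage - z) * cell_area for z in fine_elevations)

fine_z = [1.0, 1.5, 2.0, 4.0]   # fine DEM elevations in one coarse cell (m)
area = 25.0                     # fine cell area (m^2), e.g. 5 m resolution
vol = stage_storage(fine_z, area, stage=2.5)

# For comparison, a flat-averaged coarse cell (mean z = 2.125 m) would give
# (2.5 - 2.125) * 100 = 37.5 m^3, understating the true storage.
```

Tabulating this volume-versus-stage curve per coarse cell lets the hydraulic solver run on the coarse grid while conserving the storage of the fine topography, which is what makes the 5-50x coarsening viable.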
Corrales, Jone; Kristofco, Lauren A; Steele, W Baylor; Saari, Gavin N; Kostal, Jakub; Williams, E Spencer; Mills, Margaret; Gallagher, Evan P; Kavanagh, Terrance J; Simcox, Nancy; Shen, Longzhu Q; Melnikov, Fjodor; Zimmerman, Julie B; Voutchkova-Kostal, Adelina M; Anastas, Paul T; Brooks, Bryan W
2016-11-03
Sustainable molecular design of less hazardous chemicals presents a potentially transformative approach to protect public health and the environment. Relationships between molecular descriptors and toxicity thresholds previously identified the octanol-water distribution coefficient, log D, and the HOMO-LUMO energy gap, ΔE, as two useful properties in the identification of reduced aquatic toxicity. To determine whether these two property-based guidelines are applicable to sublethal oxidative stress (OS) responses, two common aquatic in vivo models, the fathead minnow (Pimephales promelas) and zebrafish (Danio rerio), were employed to examine traditional biochemical biomarkers (lipid peroxidation, DNA damage, and total glutathione) and antioxidant gene activation following exposure to eight structurally diverse industrial chemicals (bisphenol A, cumene hydroperoxide, dinoseb, hydroquinone, indene, perfluorooctanoic acid, R-(-)-carvone, and tert-butyl hydroperoxide). Bisphenol A, cumene hydroperoxide, dinoseb, and hydroquinone were consistent inducers of OS. Glutathione was the most consistently affected biomarker, suggesting its utility as a sensitivity response to support the design of less hazardous chemicals. Antioxidant gene expression (changes in nrf2, gclc, gst, and sod) was most significantly (p < 0.05) altered by R-(-)-carvone, cumene hydroperoxide, and bisphenol A. Results from the present study indicate that metabolism of parent chemicals and the role of their metabolites in molecular initiating events should be considered during the design of less hazardous chemicals. Current empirical and computational findings identify the need for future derivation of sustainable molecular design guidelines for electrophilic reactive chemicals (e.g., SN2 nucleophilic substitution and Michael addition reactivity) to reduce OS related adverse outcomes in vivo.
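A schematic of the two property-based guidelines (log D and ΔE) used as a screening filter; the cutoff values below are placeholders for illustration, not the published guideline thresholds:

```python
# Screen candidate chemicals by two molecular descriptors: octanol-water
# distribution coefficient (log D) and HOMO-LUMO energy gap (delta_e, eV).
# Lower log D and a larger gap are associated with reduced aquatic toxicity.

def passes_guidelines(log_d, delta_e, log_d_max=1.7, delta_e_min=6.0):
    """True if the candidate falls on the safer side of both cutoffs.
    Cutoffs here are placeholders, not the derived guideline values."""
    return log_d < log_d_max and delta_e > delta_e_min

candidates = {
    "chem_A": (0.5, 7.2),   # (log D, HOMO-LUMO gap) -- invented values
    "chem_B": (3.8, 5.1),
}
screened = {name: passes_guidelines(*props) for name, props in candidates.items()}
```

The abstract's point is that such lethality-derived guidelines do not automatically cover sublethal oxidative-stress endpoints or reactive metabolites, so additional rules (e.g. for electrophilic reactivity) would be layered on top of a filter like this.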
Socio-economic vulnerability to natural hazards - proposal for an indicator-based model
NASA Astrophysics Data System (ADS)
Eidsvig, U.; McLean, A.; Vangelsten, B. V.; Kalsnes, B.; Ciurean, R. L.; Argyroudis, S.; Winter, M.; Corominas, J.; Mavrouli, O. C.; Fotopoulou, S.; Pitilakis, K.; Baills, A.; Malet, J. P.
2012-04-01
Vulnerability assessment, with respect to natural hazards, is a complex process that must consider multiple dimensions of vulnerability, including both physical and social factors. Physical vulnerability refers to conditions of physical assets, and may be modeled by the intensity and magnitude of the hazard, the degree of physical protection provided by the natural and built environment, and the physical robustness of the exposed elements. Social vulnerability refers to the underlying factors leading to the inability of people, organizations, and societies to withstand impacts from natural hazards. Social vulnerability models can be used in combination with physical vulnerability models to estimate both direct losses, i.e. losses that occur during and immediately after the impact, and indirect losses, i.e. long-term effects of the event. Direct impacts of a landslide typically include casualties and damage to buildings and infrastructure, while indirect losses may include, for example, business closures or limitations in public services. The direct losses are often assessed using physical vulnerability indicators (e.g. construction material, height of buildings), while indirect losses are mainly assessed using social indicators (e.g. economic resources, demographic conditions). Within the EC-FP7 SafeLand research project, an indicator-based method was proposed to assess relative socio-economic vulnerability to landslides. The indicators represent the underlying factors which influence a community's ability to prepare for, deal with, and recover from the damage associated with landslides. The proposed model includes indicators representing demographic, economic and social characteristics as well as indicators representing the degree of preparedness and recovery capacity. Although the model focuses primarily on the indirect losses, it could easily be extended to include more physical indicators which account for the direct losses. Each indicator is individually
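A minimal sketch of an indicator-based relative vulnerability score of the kind described above: each indicator is normalized to [0, 1] and combined with weights (indicator names, weights and bounds are invented; all indicators are oriented so that higher means more vulnerable):

```python
# Weighted aggregation of normalized socio-economic indicators into a
# single relative vulnerability score.

def vulnerability_score(indicators, weights, bounds):
    """indicators: raw values; bounds: (min, max) per indicator for
    min-max normalization; weights should sum to 1 for a [0, 1] score."""
    score = 0.0
    for name, value in indicators.items():
        lo, hi = bounds[name]
        norm = (value - lo) / (hi - lo)   # 0 = least, 1 = most vulnerable
        score += weights[name] * norm
    return score

indicators = {"pct_elderly": 18.0, "economic_dependence": 0.6, "recovery_gap": 0.3}
weights    = {"pct_elderly": 0.4,  "economic_dependence": 0.3, "recovery_gap": 0.3}
bounds     = {"pct_elderly": (0.0, 30.0),
              "economic_dependence": (0.0, 1.0),
              "recovery_gap": (0.0, 1.0)}
score = vulnerability_score(indicators, weights, bounds)
```

Because the score is relative, it is meaningful for ranking communities against each other rather than as an absolute loss estimate, which matches the "relative socio-economic vulnerability" framing of the abstract.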
Suzette Payne
2006-04-01
This report summarizes how the effects of the sedimentary interbedded basalt stratigraphy were modeled in the probabilistic seismic hazard analysis (PSHA) of the Idaho National Laboratory (INL). Drill holes indicate the bedrock beneath INL facilities is composed of about 1.1 km of alternating layers of basalt rock and loosely consolidated sediments. Alternating layers of hard rock and “soft” loose sediments tend to attenuate seismic energy greater than uniform rock due to scattering and damping. The INL PSHA incorporated the effects of the sedimentary interbedded basalt stratigraphy by developing site-specific shear (S) wave velocity profiles. The profiles were used in the PSHA to model the near-surface site response by developing site-specific stochastic attenuation relationships.
Suzette Payne
2007-08-01
This report summarizes how the effects of the sedimentary interbedded basalt stratigraphy were modeled in the probabilistic seismic hazard analysis (PSHA) of the Idaho National Laboratory (INL). Drill holes indicate the bedrock beneath INL facilities is composed of about 1.1 km of alternating layers of basalt rock and loosely consolidated sediments. Alternating layers of hard rock and “soft” loose sediments tend to attenuate seismic energy greater than uniform rock due to scattering and damping. The INL PSHA incorporated the effects of the sedimentary interbedded basalt stratigraphy by developing site-specific shear (S) wave velocity profiles. The profiles were used in the PSHA to model the near-surface site response by developing site-specific stochastic attenuation relationships.
Numerical Proportion Representation: A Neurocomputational Account.
Chen, Qi; Verguts, Tom
2017-01-01
Proportion representation is an emerging subdomain in numerical cognition. However, its nature and its correlation with simple number representation remain elusive, especially at the theoretical level. To fill this gap, we propose a gain-field model of proportion representation to shed light on the neural and computational basis of proportion representation. The model is based on two well-supported neuroscientific findings. The first, gain modulation, is a general mechanism for information integration in the brain; the second relevant finding is how simple quantity is neurally represented. Based on these principles, the model accounts for recent relevant proportion representation data at both behavioral and neural levels. The model further addresses two key computational problems for the cognitive processing of proportions: invariance and generalization. Finally, the model provides pointers for future empirical testing.
NASA Astrophysics Data System (ADS)
Grasso, S.; Maugeri, M.
rigorous complex methods of analysis or qualitative procedures. A semi-quantitative procedure based on the definition of a geotechnical hazard index has been applied for the zonation of the seismic geotechnical hazard of the city of Catania. In particular, this procedure has been applied to define the influence of the geotechnical properties of soil in a central area of the city, where some historical buildings of great importance are sited. An investigation was also performed based on the inspection of more than one hundred historically important ecclesiastical buildings located in the city. Then, in order to identify the amplification effects due to site conditions, a geotechnical survey form was prepared to allow a semi-quantitative evaluation of the seismic geotechnical hazard for all these historical buildings. In addition, to evaluate the time-history response of the foundation soil, a 1-D dynamic soil model was employed for all these buildings, considering the nonlinearity of soil behaviour. Using a GIS, a map of the seismic geotechnical hazard, a map of the liquefaction hazard and a preliminary map of the seismic hazard for the city of Catania have been obtained. The results show that the high-hazard zones are mainly clayey sites
Ao, Di; Song, Rong; Gao, Jin-Wu
2016-06-22
Although the merits of electromyography (EMG)-based control of powered assistive systems have been demonstrated, the factors that affect the performance of EMG-based human-robot cooperation have received little attention. This study investigates whether a more physiologically appropriate model could improve the performance of human-robot cooperative control for an ankle power-assist exoskeleton robot. To achieve this goal, an EMG-driven Hill-type neuromusculoskeletal model (HNM) and a linear proportional model (LPM) were developed and calibrated through maximum isometric voluntary dorsiflexion (MIVD). Both control models could estimate the real-time ankle joint torque, but the HNM is more accurate and can account for changes in joint angle and muscle dynamics. Eight healthy volunteers were recruited to wear the ankle exoskeleton robot and complete a series of sinusoidal tracking tasks in the vertical plane. With various levels of assistance based on the two calibrated models, the subjects were instructed to track the target displayed on the screen as accurately as possible by performing ankle dorsiflexion and plantarflexion. Two measurements, the root mean square error (RMSE) and root mean square jerk (RMSJ), were derived from the assistive torque and kinematic signals to characterize movement performance, while the amplitudes of the recorded EMG signals from the tibialis anterior (TA) and the gastrocnemius (GAS) were obtained to reflect muscular effort. The results demonstrated that muscular effort and the smoothness of tracking movements decreased with an increase in the assistance ratio. Compared with the LPM, subjects made lower physical efforts and generated smoother movements when using the HNM, which implies that a more physiologically appropriate model can enable more natural, human-like human-robot cooperation and has potential value for improving human-exoskeleton interaction in future applications.
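The two movement-performance measures, RMSE and RMSJ, can be computed from sampled trajectories as below (synthetic signals; a hedged sketch, not the study's processing pipeline):

```python
import math

def rmse(target, actual):
    """Root mean square tracking error between target and actual position."""
    return math.sqrt(sum((t - a) ** 2 for t, a in zip(target, actual)) / len(target))

def rms_jerk(position, dt):
    """Root mean square jerk: position differenced three times (finite
    differences), a common smoothness measure -- lower means smoother."""
    d = position
    for _ in range(3):                   # position -> velocity -> accel -> jerk
        d = [(b - a) / dt for a, b in zip(d, d[1:])]
    return math.sqrt(sum(j * j for j in d) / len(d))

# Synthetic 0.5 Hz sinusoidal tracking task sampled at 100 Hz.
dt = 0.01
t = [i * dt for i in range(200)]
target = [math.sin(2 * math.pi * 0.5 * ti) for ti in t]
actual = [x + 0.01 for x in target]      # constant tracking offset
err = rmse(target, actual)
```

With these definitions, the abstract's finding reads as: higher assistance lowered both EMG amplitude (effort) and RMSJ (i.e., smoother motion), with the HNM outperforming the LPM on both.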
New approaches to modelling global patterns of landslide hazard and risk
NASA Astrophysics Data System (ADS)
Parker, Robert; Petley, David; Rosser, Nicholas; Oven, Katie; Densmore, Alexander
2010-05-01
Landslides are one of the most destructive geological processes, a major cause of loss of life and economic damage, and there is evidence that their impact is increasing with time. Most landslides are triggered by intense and/or prolonged rainfall or by seismic activity. Recent examples have highlighted the damage potential of multi-landslide events. For instance, the 12 May 2008 Wenchuan Earthquake (Sichuan, China) resulted in over 80,000 fatalities, with direct losses to buildings and infrastructure of over US$150 billion. Over 20,000 of those fatalities, and much of the economic loss sustained in this event, were caused by the direct impact of landslides. Similarly, nearly all of the more than 600 fatalities associated with the passage of Typhoon Morakot across Taiwan were caused by landslides. In recent years, a number of global initiatives have attempted to assess the spatial distribution of landslide hazard and risk on a regional or global basis. To date, however, the results have been somewhat unsatisfactory, failing to properly account for the real distribution of losses and notably limited by the completeness of the impact inventories upon which these models are based. This paper has two key aims. First, we use data from the Durham University fatal landslide database to demonstrate that existing global-scale models do not effectively evaluate global landslide mortality risk. The fatal landslide database includes over 2,000 individual fatal landslide events over the period from September 2002 to the present, and gives insight into the potential underrepresentation of landslide impacts worldwide. Second, based upon this analysis, we develop a new first-order spatial model for the distribution of fatal landslides on a global basis, using freely available global datasets. The resulting model, which for the first time properly accounts for the distribution of landslide hazard associated with seismically-triggered events in addition to
NASA Astrophysics Data System (ADS)
Álvarez-Gómez, J. A.; Aniel-Quiroga, Í.; Gutiérrez-Gutiérrez, O. Q.; Larreynaga, J.; González, M.; Castro, M.; Gavidia, F.; Aguirre-Ayerbe, I.; González-Riancho, P.; Carreño, E.
2013-11-01
El Salvador is the smallest and most densely populated country in Central America; its coast has an approximate length of 320 km, 29 municipalities and more than 700 000 inhabitants. In El Salvador there were 15 recorded tsunamis between 1859 and 2012, 3 of them causing damage and resulting in hundreds of victims. Hazard assessment is commonly based on propagation numerical models for earthquake-generated tsunamis and can be approached through both probabilistic and deterministic methods. A deterministic approximation has been applied in this study as it provides essential information for coastal planning and management. The objective of the research was twofold: on the one hand the characterization of the threat over the entire coast of El Salvador, and on the other the computation of flooding maps for the three main localities of the Salvadorian coast. For the latter we developed high-resolution flooding models. For the former, due to the extension of the coastal area, we computed maximum elevation maps, and from the elevation in the near shore we computed an estimation of the run-up and the flooded area using empirical relations. We have considered local sources located in the Middle America Trench, characterized seismotectonically, and distant sources in the rest of the Pacific Basin, using historical and recent earthquakes and tsunamis. We used a hybrid finite-differences/finite-volumes numerical model in this work, based on the linear and non-linear shallow water equations, to simulate a total of 24 earthquake-generated tsunami scenarios. Our results show that on the western Salvadorian coast, run-up values higher than 5 m are common, while in the eastern area, approximately from La Libertad to the Gulf of Fonseca, the run-up values are lower. The areas most exposed to flooding are the lowlands in the Lempa River delta and the Barra de Santiago Western Plains. The results of the empirical approximation used for the whole country are similar to the results
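The abstract does not name the empirical run-up relation it used; a commonly cited example of the kind of relation involved is Synolakis' solitary-wave run-up law, sketched here purely for illustration.

```python
import math

def solitary_runup(H, d, beach_slope_deg):
    """One possible empirical run-up relation (assumed here for
    illustration; the paper does not specify which one it used):
    Synolakis' (1987) solitary-wave run-up law,

        R / d = 2.831 * sqrt(cot(beta)) * (H / d)**(5/4),

    where H is the near-shore wave height, d the water depth at which
    H is measured, and beta the beach slope angle.
    """
    beta = math.radians(beach_slope_deg)
    return d * 2.831 * math.sqrt(1.0 / math.tan(beta)) * (H / d) ** 1.25
```

Whatever relation is used, the key point in the paper's workflow is the same: the near-shore maximum elevation from the propagation model feeds a closed-form estimate of run-up, avoiding the cost of high-resolution inundation modelling along the whole coast.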
Issues in testing the new national seismic hazard model for Italy
NASA Astrophysics Data System (ADS)
Stein, S.; Peresan, A.; Kossobokov, V. G.; Brooks, E. M.; Spencer, B. D.
2016-12-01
It is important to bear in mind that we know little about how well earthquake hazard maps describe the shaking that will actually occur in the future; we have no agreed way of assessing how well a map performed in the past and, thus, whether one map performs better than another. Moreover, we should not forget that different maps can be useful for different end users, who may have different cost-benefit strategies. Thus, regardless of the specific tests we choose to use, the adopted testing approach should have several key features. We should assess map performance using all the available instrumental, paleoseismology, and historical intensity data; instrumental data alone span a period much too short to capture the largest earthquakes, and thus the strongest shaking, expected from most faults. We should investigate what causes systematic misfit, if any, between the longest record we have (historical intensity data available for the Italian territory from 217 B.C. to 2002 A.D.) and a given hazard map. We should compare how seismic hazard maps have developed over time: how do the most recent maps for Italy compare to earlier ones? It is important to understand local divergences, which show how the models have evolved toward the most recent one; the temporal succession of maps matters because we have to learn from previous errors. We should use the many different tests that have been proposed. All are worth trying, because different metrics of performance show different aspects of how a hazard map performs and can be used. We should compare other maps to the ones we are testing. Maps can be made using a wide variety of assumptions, which will lead to different predicted shaking, and it is possible that maps derived by other approaches may perform better. Although current Italian codes are based on probabilistic maps, it is important from both a scientific and societal perspective to look at all options, including deterministic, scenario-based ones. Comparing what works
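One simple performance metric of the kind discussed (a hedged sketch, not the authors' exact implementation) compares the fraction of sites where observed shaking exceeded the mapped value with the fraction the map's return period implies. All numeric values below are invented for illustration.

```python
import numpy as np

# Observed maximum PGA over the observation window vs. the mapped
# design value at five hypothetical sites (values invented).
observed_max = np.array([0.12, 0.30, 0.08, 0.45, 0.20])  # g
mapped_value = np.array([0.25, 0.25, 0.15, 0.30, 0.35])  # g

# Fraction of sites where the map was exceeded...
obs_fraction = np.mean(observed_max > mapped_value)

# ...vs. the fraction expected for a 10%-in-50-years map observed for
# 50 years, assuming Poissonian exceedances with a 475-year return period.
expected_fraction = 1.0 - np.exp(-50.0 / 475.0)  # about 0.10

misfit = abs(obs_fraction - expected_fraction)
```

A large misfit in either direction indicates a map that is systematically over- or under-predicting shaking, which is one of the "different metrics" the abstract argues should all be tried.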
Development of models to inform a national Daily Landslide Hazard Assessment for Great Britain
NASA Astrophysics Data System (ADS)
Dijkstra, Tom A.; Reeves, Helen J.; Dashwood, Claire; Pennington, Catherine; Freeborough, Katy; Mackay, Jonathan D.; Uhlemann, Sebastian S.; Chambers, Jonathan E.; Wilkinson, Paul B.
2015-04-01
were combined with records of observed landslide events to establish which antecedent effective precipitation (AEP) signatures of different duration could be used as a pragmatic proxy for the occurrence of landslides. It was established that 1-, 7- and 90-day AEP provided the most significant correlations, and these were used to calculate the probability of at least one landslide occurring. The method was then extended over the period 2006 to 2014 and the results evaluated against observed occurrences. It is recognised that AEP is a relatively poor proxy for simulating effective stress conditions along potential slip surfaces. However, the temporal pattern of landslide probability compares well with the observed occurrences and provides a potential benefit to assist with the DLHA. Further work is continuing to fine-tune the model for landslide type, better spatial resolution of effective precipitation input, and cross-reference to models that capture changes in water balance and conditions along slip surfaces. The latter is facilitated by intensive research at several field laboratories, such as the Hollin Hill site in Yorkshire, England. At this site, a decade of activity has generated a broad range of research and a wealth of data. This paper reports on one example of recent work: the characterisation of near-surface hydrology using infiltration experiments in which hydrological pathways are captured, among others, by electrical resistivity tomography. This research, which has further developed our understanding of soil moisture movement in a heterogeneous landslide complex, has highlighted the importance of establishing detailed ground models to enable determination of landslide potential at high resolution. In turn, the knowledge gained through this research is used to enhance the expertise for the daily landslide hazard assessments at a national scale.
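The abstract does not state the functional form used to turn the AEP signatures into a probability; a common choice for "probability of at least one event" from a set of precipitation predictors is a logistic model, sketched here with entirely hypothetical coefficients (not the values behind the national assessment).

```python
import math

def landslide_probability(aep1, aep7, aep90,
                          coef=(-4.0, 0.05, 0.02, 0.005)):
    """Illustrative logistic model: probability of at least one landslide
    given 1-, 7- and 90-day antecedent effective precipitation (mm).

    The intercept and coefficients in `coef` are hypothetical
    placeholders chosen only to make the sketch runnable.
    """
    b0, b1, b7, b90 = coef
    z = b0 + b1 * aep1 + b7 * aep7 + b90 * aep90
    return 1.0 / (1.0 + math.exp(-z))
```

With such a model, dry conditions map to a small but non-zero background probability, and the probability rises monotonically as any of the AEP signatures increases.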
A First Comparison of Multiple Probability Hazard Outputs from Three Global Flood Models
NASA Astrophysics Data System (ADS)
Trigg, M. A.; Bates, P. D.; Fewtrell, T. J.; Yamazaki, D.; Pappenberger, F.; Winsemius, H.
2014-12-01
With research advances in algorithms, remote sensing data sets and computing power, global flood models are now a practical reality. A number of different research models are currently available or in development, and as these models mature and output becomes available for use, there is great interest in how they compare and how useful they may be at different scales. At the kick-off meeting of the Global Flood Partnership (GFP) in March 2014, the need to compare these new global flood models was identified as a research priority, both for developers of the models and for users of the output. The GFP is an informal network of scientists and practitioners from public, private and international organisations providing or using global flood monitoring, modelling and forecasting (http://portal.gdacs.org/Global-Flood-Partnership). On behalf of the GFP, the Willis Research Network is undertaking this comparison research, and the work presented here is the result of the first phase of this comparison for three models: CaMa-Flood, GLOFRIS and ECMWF. The comparison analysis is undertaken for the entire African continent, identified by GFP members as the best location to facilitate data sharing by model teams and where there was the most interest from potential users of the model outputs. Initial analysis results include flooded area for a range of hazard return periods (25, 50, 100, 250, 500 and 1000 years), which is also compared against catchment sizes and climatic zones. Results will be discussed in the context of the different model structures and input data used, while also addressing scale issues and practicalities of use. Finally, plans for the validation of the models against microwave and optical remote sensing data will be outlined.
Model uncertainties of the 2002 update of California seismic hazard maps
Cao, T.; Petersen, M.D.; Frankel, A.D.
2005-01-01
In this article we present and explore the source and ground-motion model uncertainty and parametric sensitivity for the 2002 update of the California probabilistic seismic hazard maps. Our approach is to implement a Monte Carlo simulation that allows for independent sampling from fault to fault in each simulation. The source-distance dependent characteristics of the uncertainty maps of seismic hazard are explained by the fundamental uncertainty patterns from four basic test cases, in which the uncertainties from one-fault and two-fault systems are studied in detail. The California coefficient of variation (COV, the ratio of the standard deviation to the mean) map for peak ground acceleration (10% probability of exceedance in 50 years) shows lower values (0.1-0.15) along the San Andreas fault system and other class A faults than along class B faults (0.2-0.3). High COV values (0.4-0.6) are found around the Garlock, Anacapa-Dume, and Palos Verdes faults in southern California and around the Maacama fault and Cascadia subduction zone in northern California.
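The Monte Carlo idea described above can be sketched in a few lines: sample model parameters independently fault by fault, compute a hazard value for each realization, and form the coefficient of variation. The two "faults", their parameter distributions, and the toy hazard combination below are invented for illustration, not taken from the 2002 California model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sim = 10_000

# Independent fault-by-fault sampling: a class-A-like fault with a
# tightly constrained slip rate, and a class-B-like fault with a
# loosely constrained one (distributions invented).
slip_rate_a = rng.normal(30.0, 3.0, n_sim)   # mm/yr
slip_rate_b = rng.normal(5.0, 2.0, n_sim)    # mm/yr

# Toy stand-in for the hazard calculation at one site.
hazard = 0.01 * slip_rate_a + 0.02 * slip_rate_b

# Coefficient of variation = std / mean, as in the COV maps.
cov = hazard.std() / hazard.mean()
```

Because the class-A fault dominates the mean while being tightly constrained, the resulting COV is low, which mirrors the pattern the article reports along the San Andreas system.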
Paukatong, K V; Kunawasen, S
2001-01-01
Nham is a traditional Thai fermented pork sausage. The major ingredients of Nham are ground pork meat and shredded pork rind. Nham has been reported to be contaminated with Salmonella spp., Staphylococcus aureus, and Listeria monocytogenes, and it is therefore a potential cause of foodborne disease for consumers. A Hazard Analysis and Critical Control Points (HACCP) generic model has been developed for the Nham process. Nham processing plants were observed and a generic flow diagram of Nham processes was constructed. Hazard analysis was then conducted. In addition to the microbial hazards from the pathogens previously found in Nham, sodium nitrite and metal were identified as the chemical and physical hazards in this product, respectively. Four steps in the Nham process have been identified as critical control points: the weighing of the nitrite compound, stuffing, fermentation, and labeling. The chemical hazard of nitrite must be controlled during the weighing step; the critical limit of nitrite levels in the Nham mixture has been set at 100-200 ppm, a level high enough to control Clostridium botulinum without causing a chemical hazard to the consumer. The physical hazard from metal clips can be prevented by visual inspection of every Nham product during stuffing. The microbiological hazard in Nham can be reduced in the fermentation process, for which the critical limit of the pH of Nham was set at lower than 4.6. Finally, since this product is not cooked during processing, educating the consumer by providing information on the label, such as "safe if cooked before consumption", could be an alternative way to prevent the microbiological hazards of this product.
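The two numeric critical limits in the abstract (nitrite at 100-200 ppm in the mix, fermentation pH below 4.6) translate directly into a monitoring check; only the function name and interface below are invented.

```python
def nham_ccp_check(nitrite_ppm: float, ph: float) -> bool:
    """Return True when both numeric critical limits from the Nham
    HACCP model are satisfied: nitrite in the mixture between
    100 and 200 ppm, and fermentation pH below 4.6."""
    nitrite_ok = 100.0 <= nitrite_ppm <= 200.0
    ph_ok = ph < 4.6
    return nitrite_ok and ph_ok
```

A batch failing either limit would trigger the corrective action defined for that critical control point rather than being released.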
NASA Astrophysics Data System (ADS)
Chen, Lixia; van Westen, Cees J.; Hussin, Haydar; Ciurean, Roxana L.; Turkington, Thea; Chavarro-Rincon, Diana; Shrestha, Dhruba P.
2016-11-01
Extreme rainfall events are the main triggers of hydro-meteorological hazards in mountainous areas, where development is often constrained by the limited space suitable for construction. In these areas, hazard and risk assessments are fundamental for risk mitigation, especially for preventive planning, risk communication and emergency preparedness. Multi-hazard risk assessment in mountainous areas at local and regional scales remains a major challenge because of the lack of data related to past events and causal factors, and the interactions between different types of hazards. The lack of data leads to a high level of uncertainty in the application of quantitative methods for hazard and risk assessment. Therefore, a systematic approach is required to combine these quantitative methods with expert-based assumptions and decisions. In this study, a quantitative multi-hazard risk assessment was carried out in the Fella River valley, an area prone to debris flows and floods in the north-eastern Italian Alps. The main steps include data collection and development of inventory maps, definition of hazard scenarios, hazard assessment in terms of temporal and spatial probability calculation and intensity modelling, elements-at-risk mapping, estimation of asset values and numbers of people, physical vulnerability assessment, the generation of risk curves and annual risk calculation. To compare the risk for each type of hazard, risk curves were generated for debris flows, river floods and flash floods. Uncertainties were expressed as minimum, average and maximum values of temporal and spatial probability, replacement costs of assets, population numbers, and physical vulnerability, resulting in minimum, average and maximum risk curves. To validate this approach, a back analysis was conducted using the extreme hydro-meteorological event that occurred in August 2003 in the Fella River valley. The results show a good performance when compared to the historical damage reports.
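The last step in the chain described above, going from a risk curve to an annual risk number, is commonly done by integrating the loss-exceedance curve. All probabilities and loss values below are invented for illustration, not Fella River valley results.

```python
import numpy as np

# A toy risk curve: annual exceedance probability vs. loss, ordered
# from frequent/small to rare/large events (values invented).
exceed_prob = np.array([0.1, 0.02, 0.01, 0.002])   # 1/yr
loss = np.array([0.5e6, 5e6, 12e6, 40e6])          # euros

# Average annual loss = area under the exceedance curve
# (trapezoidal rule over the loss axis).
aal = float(np.sum(0.5 * (exceed_prob[:-1] + exceed_prob[1:])
                   * np.diff(loss)))
```

Doing this once each for the minimum, average and maximum risk curves yields the range of annual risk that reflects the propagated uncertainties.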
NASA Astrophysics Data System (ADS)
Turchaninova, A.
2012-04-01
The estimation of extreme avalanche runout distances, flow velocities, impact pressures and volumes is an essential part of snow engineering in mountain regions of Russia, where it underpins avalanche hazard assessment and mapping. Russian guidelines accept the application of different avalanche models as well as different approaches for the estimation of model input parameters, so different teams of engineers in Russia apply various dynamics and statistical models in engineering practice. This gives avalanche practitioners and experts more freedom, but it causes considerable uncertainty given the serious limitations of avalanche models. We discuss these problems by presenting the results of applying several well-known and widely used statistical models (developed in Russia) and avalanche dynamics models to several avalanche test sites in the Khibini Mountains (the Kola Peninsula) and the Caucasus. The most accurate and well-documented data on powder and wet, large rare and small frequent snow avalanche events has been collected from the 1960s to today in the Khibini Mountains by the Avalanche Safety Center of "Apatit". These data were digitized and are available for use and analysis, and a detailed digital avalanche database (GIS) was created for the first time. It contains contours of observed avalanches (ESRI shapes, more than 50 years of observations), DEMs, remote sensing data, descriptions of snow pits, photos, etc. The Russian avalanche data are thus a unique source of information for understanding avalanche flow rheology and for the future development and calibration of avalanche dynamics models. The GIS database was used to analyse model input parameters and to calibrate and verify avalanche models. Regarding extreme dynamic parameters, the outputs of different models can differ significantly, which is unacceptable for engineering purposes in the absence of well-defined guidelines in Russia. The frequency curves for the runout distance
Impact of a refined airborne LiDAR stochastic model for natural hazard applications
NASA Astrophysics Data System (ADS)
Glennie, C. L.; Bolkas, D.; Fotopoulos, G.
2016-12-01
Airborne Light Detection and Ranging (LiDAR) is often employed to derive multi-temporal Digital Elevation Models (DEMs), which are used to estimate vertical displacement resulting from natural hazards such as landslides, rockfalls and erosion. Vertical displacements are estimated by computing the difference between two DEMs separated by a specified time period and applying a threshold to remove the inherent noise. Reliable information about the accuracy of the DEMs is therefore essential. The assessment of airborne LiDAR errors is typically based on (i) independent ground control points or (ii) forward error propagation utilizing the LiDAR geo-referencing equation. The latter approach depends on the stochastic model of the LiDAR measurements and provides the user with point-by-point accuracy estimates. In this study, a refined stochastic model is obtained through variance component estimation (VCE) for a dataset in Houston, Texas. Results show that the initial stochastic information was optimistic by 35% for both horizontal coordinates and ellipsoidal heights. To assess the impact of a refined stochastic model, surface displacement simulations are evaluated. The simulations include scenarios with topographic slopes that vary from 10° to 60° and vertical displacements of ±1 to ±5 m. Results highlight the cases where a reliable stochastic model is important. A refined stochastic model can be used in practical applications to determine appropriate noise thresholds for vertical displacement, improve quantitative analysis, and enhance relevant decision-making.
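The thresholding step described above can be sketched directly: difference the two DEMs, propagate the per-DEM height uncertainty into the difference, and keep only cells exceeding a confidence multiple of that uncertainty. The DEM values and the 0.15 m standard deviations below are invented; in practice the latter would come from the (refined) stochastic model via forward error propagation.

```python
import numpy as np

# Two toy 2x2 DEMs from different epochs (heights in metres, invented).
dem_t1 = np.array([[100.0, 101.0],
                   [102.0, 103.0]])
dem_t2 = np.array([[100.1, 100.9],
                   [101.0, 101.5]])

sigma1 = sigma2 = 0.15            # per-DEM height std dev (m), assumed

diff = dem_t2 - dem_t1
sigma_diff = np.hypot(sigma1, sigma2)   # error of a difference of two DEMs
k = 1.96                                # 95% confidence multiplier

# Cells whose change exceeds the noise threshold are treated as
# real vertical displacement; the rest are discarded as noise.
significant = np.abs(diff) > k * sigma_diff
```

An optimistic stochastic model shrinks `sigma_diff` and therefore flags noise as displacement, which is exactly the failure mode the refined VCE-based model is meant to prevent.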
Trimming a hazard logic tree with a new model-order-reduction technique
Porter, Keith; Field, Ned; Milner, Kevin R
2017-01-01
The size of the logic tree within the Uniform California Earthquake Rupture Forecast Version 3, Time-Dependent (UCERF3-TD) model can challenge risk analyses of large portfolios. An insurer or catastrophe risk modeler concerned with losses to a California portfolio might have to evaluate a portfolio 57,600 times to estimate risk in light of the hazard possibility space. Which branches of the logic tree matter most, and which can one ignore? We employed two model-order-reduction techniques to simplify the model. We sought a subset of parameters that must vary, and the specific fixed values for the remaining parameters, to produce approximately the same loss distribution as the original model. The techniques are (1) a tornado-diagram approach we employed previously for UCERF2, and (2) an apparently novel probabilistic sensitivity approach that seems better suited to functions of nominal random variables. The new approach produces a reduced-order model with only 60 of the original 57,600 leaves. One can use the results to reduce computational effort in loss analyses by orders of magnitude.
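The tornado-diagram technique mentioned as approach (1) can be sketched simply: hold every logic-tree choice at a baseline, swing one variable at a time across its branches, and rank variables by the resulting swing in portfolio loss. The loss function and branch values below are invented placeholders, not UCERF3-TD quantities.

```python
def portfolio_loss(params):
    # Stand-in for a full portfolio loss computation over one
    # logic-tree leaf (coefficients invented for illustration).
    return (params["m_max"] * 2.0
            + params["rate_model"] * 0.5
            + params["gmpe"] * 0.1)

baseline = {"m_max": 1.0, "rate_model": 1.0, "gmpe": 1.0}
branches = {"m_max": [0.8, 1.2],
            "rate_model": [0.5, 1.5],
            "gmpe": [0.7, 1.3]}

# Swing each variable alone across its branches; record the loss swing.
swings = {}
for name, values in branches.items():
    losses = [portfolio_loss({**baseline, name: v}) for v in values]
    swings[name] = max(losses) - min(losses)

# Variables with small swings are candidates for fixing at one value.
ranked = sorted(swings, key=swings.get, reverse=True)
```

Fixing the low-swing variables at representative values while letting only the high-swing ones vary is what collapses the tree from tens of thousands of leaves to a small reduced-order model.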
NASA Astrophysics Data System (ADS)
Zolfaghari, Mohammad R.
2009-07-01
Recent achievements in computer and information technology have provided the necessary tools to extend the application of probabilistic seismic hazard mapping from its traditional engineering use to many other applications, such as risk mitigation, disaster management, post-disaster recovery planning, catastrophe loss estimation and risk management. Due to incomplete knowledge of the factors controlling seismic hazards, there are always uncertainties associated with all steps involved in developing and using seismic hazard models. While some of these uncertainties can be controlled by more accurate and reliable input data, many of the data and assumptions used in seismic hazard studies remain highly uncertain and contribute to the uncertainty of the final results. In this paper a new methodology for the assessment of seismic hazard is described. The proposed approach provides a practical means of better capturing spatial variations of seismological and tectonic characteristics, which allows better treatment of their uncertainties. In the proposed approach, GIS raster-based data models are used to represent geographical features in a cell-based system. The cell-based source model proposed in this paper provides a framework for implementing many geographically referenced seismotectonic factors in seismic hazard modelling, such as seismic source boundaries, rupture geometry, seismic activity rate, focal depth and the choice of attenuation functions. The proposed methodology improves several aspects of the standard analytical tools currently used for the assessment and mapping of regional seismic hazard, makes the best use of recent advancements in both computer software and hardware, and is well structured for implementation with conventional GIS tools.
Percival, Matthew W; Zisser, Howard; Jovanovic, Lois; Doyle, Francis J
2008-07-01
Using currently available technology, it is possible to apply modern control theory to produce a closed-loop artificial beta cell. Novel use of established control techniques would improve glycemic control, thereby reducing the complications of diabetes. Two popular controller structures, proportional-integral-derivative (PID) and model predictive control (MPC), are compared first in a theoretical sense and then in two applications. The Bergman model is transformed for use in a PID equivalent model-based controller. The internal model control (IMC) structure, which makes explicit use of the model, is compared with the PID controller structure in the transfer function domain. An MPC controller is then developed as an optimization problem with restrictions on its tuning parameters and is shown to be equivalent to an IMC controller. The controllers are tuned for equivalent performance and evaluated in a simulation study as a closed-loop controller and in an advisory mode scenario on retrospective clinical data. Theoretical development shows conditions under which PID and MPC controllers produce equivalent output via IMC. The simulation study showed that the single tuning parameter for the equivalent controllers relates directly to the closed-loop speed of response and robustness, an important result considering system uncertainty. The risk metric allowed easy identification of instances of inadequate control. Results of the advisory mode simulation showed that suitable tuning produces consistently appropriate delivery recommendations. The conditions under which PID and MPC are equivalent have been derived. The MPC framework is more suitable given the extensions necessary for a fully closed-loop artificial beta cell, such as consideration of controller constraints. Formulation of the control problem in risk space is attractive, as it explicitly addresses the asymmetry of the problem; this is done easily with MPC.
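The PID-MPC equivalence described above runs through the internal model control (IMC) structure; the standard IMC algebra (a textbook sketch, not the paper's exact derivation) is:

```latex
C(s) = \frac{Q(s)}{1 - \tilde{G}(s)\,Q(s)}, \qquad
Q(s) = \tilde{G}_{-}^{-1}(s)\, f(s), \qquad
f(s) = \frac{1}{(\lambda s + 1)^{n}}
```

Here $\tilde{G}$ is the process model, $\tilde{G}_{-}$ its invertible (minimum-phase) part, $Q$ the IMC parameter, and $f$ a low-pass filter. For a first-order model $\tilde{G}(s) = K/(\tau s + 1)$, the equivalent feedback controller $C(s)$ collapses to a PI controller with gain $\tau/(K\lambda)$ and integral time $\tau$, so the single filter parameter $\lambda$ directly sets the closed-loop speed of response and robustness, consistent with the single tuning parameter discussed in the abstract.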
Abbes, Ilham Ben; Richard, Pierre-Yves; Lefebvre, Marie-Anne; Guilhem, Isabelle; Poirier, Jean-Yves
2013-01-01
Background: Most closed-loop insulin delivery systems rely on model-based controllers to control the blood glucose (BG) level. Simple models of glucose metabolism, which allow easy design of the control law, are limited in their parametric identification from raw data, so new control models and controllers based on them are needed. Methods: A proportional-integral-derivative controller with a double phase lead was proposed. Its design was based on a linearization of a new nonlinear control model of the glucose-insulin system in type 1 diabetes mellitus (T1DM) patients, validated with the University of Virginia/Padova T1DM metabolic simulator. A 36 h scenario, including six unannounced meals, was tested in nine virtual adults. A previous trial database was used to compare the performance of our controller with previously published results. The scenario was repeated 25 times for each adult in order to take continuous glucose monitoring noise into account. The primary outcome was the time BG levels were in the target range (70-180 mg/dl). Results: Blood glucose values were in the target range for 77% of the time, and below 50 mg/dl and above 250 mg/dl for 0.8% and 0.3% of the time, respectively. The low blood glucose index and high blood glucose index were 1.65 and 3.33, respectively. Conclusion: The linear controller presented, based on the linearization of a new, easily identifiable nonlinear model, achieves good glucose control with low exposure to hypoglycemia and hyperglycemia. PMID:23759403
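The low and high blood glucose indices reported above are, in the standard formulation due to Kovatchev and colleagues (assumed here to be the one behind the 1.65 / 3.33 values), averages of a symmetrized BG risk function:

```python
import math

def bg_risk_indices(bg_values):
    """Low/high blood glucose indices (LBGI/HBGI) from a BG trace in
    mg/dl, using the standard Kovatchev symmetrization of the BG scale
    (assumed, not stated in the abstract)."""
    rl, rh = [], []
    for bg in bg_values:
        # Symmetrizing transform: f = 0 near ~112 mg/dl, negative in
        # hypoglycemia, positive in hyperglycemia.
        f = 1.509 * (math.log(bg) ** 1.084 - 5.381)
        risk = 10.0 * f * f
        rl.append(risk if f < 0 else 0.0)
        rh.append(risk if f > 0 else 0.0)
    n = len(bg_values)
    return sum(rl) / n, sum(rh) / n
```

Formulating the control problem in this risk space, as the paper's MPC discussion notes, is attractive precisely because the transform makes the clinically asymmetric danger of hypo- vs. hyperglycemia symmetric.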
Uncertainty quantification in satellite-driven modeling to forecast lava flow hazards
NASA Astrophysics Data System (ADS)
Ganci, Gaetana; Bilotta, Giuseppe; Cappello, Annalisa; Herault, Alexis; Zago, Vito; Del Negro, Ciro
2016-04-01
Over the last few decades, satellite-based remote sensing and data processing techniques have proved well suited to complement field observations, providing timely event detection for effusive volcanic events as well as extraction of parameters that allow lava flow tracking. In parallel, physics-based models for lava flow simulation have improved enormously and are now capable of fast, accurate simulations, which are increasingly driven by, or validated with, satellite-derived parameters such as lava flow discharge rates. Together, these capabilities represent a prompt strategy with immediate applications to the real-time monitoring and hazard assessment of effusive eruptions, but two key issues still need to be addressed to improve its effectiveness: (i) the provision of source term parameters and their uncertainties, and (ii) how uncertainties in source terms propagate into the model outputs. We address these topics here by considering uncertainties in satellite-derived products obtained by the HOTSAT thermal monitoring system (e.g. hotspot pixels, radiant heat flux, effusion rate) and evaluating how these uncertainties affect lava flow hazard scenarios by inputting them into the MAGFLOW physics-based model for lava flow simulations. Particular attention is given to topography and cloud effects on satellite-derived products, as well as to the frequency of their acquisition (GEO vs. LEO). We also investigate how the DEM resolution impacts the final scenarios from both the numerical and physical points of view. To evaluate these effects, three well-documented eruptions that occurred at Mt Etna are taken into account: a short-lived paroxysmal event, the 11-13 January 2011 lava fountain; a long-lasting eruption, the 2008-2009 eruption; and a short effusive event, the 14-24 July 2006 eruption.
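The satellite-derived effusion rate mentioned above is commonly obtained from the radiant heat flux via a thermal-proxy relation of the Harris type; the conversion below is a hedged sketch of that approach, and all parameter values are typical textbook assumptions rather than HOTSAT's Etna-specific calibration.

```python
def tadr_from_heat_flux(q_rad_w,
                        density=2600.0,      # lava density, kg/m^3 (assumed)
                        c_p=1150.0,          # specific heat, J/(kg K) (assumed)
                        delta_t=250.0,       # cooling range, K (assumed)
                        crystal_frac=0.45,   # crystallized mass fraction (assumed)
                        latent_heat=2.9e5):  # latent heat, J/kg (assumed)
    """Time-averaged discharge rate (m^3/s) from radiant heat flux (W):
    TADR = Q_rad / (rho * (c_p * dT + phi * c_L))."""
    return q_rad_w / (density * (c_p * delta_t + crystal_frac * latent_heat))
```

Because every parameter in the denominator carries uncertainty, the propagated spread of TADR values is exactly the kind of source-term uncertainty whose effect on MAGFLOW hazard scenarios the study evaluates.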
Lava Flow Hazard Modeling during the 2014-2015 Fogo eruption, Cape Verde
NASA Astrophysics Data System (ADS)
Del Negro, C.; Cappello, A.; Ganci, G.; Calvari, S.; Perez, N. M.; Hernandez Perez, P. A.; Victoria, S. S.; Cabral, J.
2015-12-01
Satellite remote sensing techniques and lava flow forecasting models have been combined to allow an ensemble response during effusive crises at poorly monitored volcanoes. Here, we use the HOTSAT volcano hot spot detection system, which works with satellite thermal infrared data, and the MAGFLOW lava flow emplacement model, which considers the way in which effusion rate changes during an eruption, to forecast lava flow hazards during the 2014-2015 Fogo eruption. In many ways this was one of the major effusive eruption crises of recent years, since the lava flows actually invaded populated areas. HOTSAT is used to promptly analyze MODIS and SEVIRI data to output hot spot location, lava thermal flux, and effusion rate estimates. We use this output to drive MAGFLOW simulations of lava flow paths and to continuously update the flow simulations. Satellite-derived TADR estimates can be obtained in real time, and lava flow simulations of several days of eruption can be calculated in a few minutes, making such a combined approach of paramount importance for providing timely forecasts of the areas that a lava flow could possibly inundate. In addition, such forecasting scenarios can be continuously updated in response to changes in the eruptive activity as detected by satellite imagery. We also show how Landsat-8 OLI and EO-1 ALI images complement the field observations for tracking the flow front position through time, and add considerable data on lava flow advancement to validate the results of the numerical simulations. Our results thus demonstrate how the combination of satellite remote sensing and lava flow modeling can be effectively used during eruptive crises to produce realistic lava flow hazard scenarios and to assist local authorities in making decisions during a volcanic eruption.
NASA Astrophysics Data System (ADS)
Hayes, P.; Trigg, J. L.; Stauffer, D.; Hunter, G.; McQueen, J.
2006-05-01
Consequence assessment (CA) operations are those processes that attempt to mitigate negative impacts of incidents involving hazardous materials such as chemical, biological, radiological, nuclear, and high explosive (CBRNE) agents, facilities, weapons, or transportation. Incident types range from accidental spillage of chemicals at/en route to/from a manufacturing plant, to the deliberate use of radiological or chemical material as a weapon in a crowded city. The impacts of these incidents are highly variable, from little or no impact to catastrophic loss of life and property. Local and regional scale atmospheric conditions strongly influence atmospheric transport and dispersion processes in the boundary layer, and the extent and scope of the spread of dangerous materials in the lower levels of the atmosphere. Therefore, CA personnel charged with managing the consequences of CBRNE incidents must have detailed knowledge of current and future weather conditions to accurately model potential effects. A meteorology team was established at the U.S. Defense Threat Reduction Agency (DTRA) to provide weather support to CA personnel operating DTRA's CA tools, such as the Hazard Prediction and Assessment Capability (HPAC) tool. The meteorology team performs three main functions: 1) regular provision of meteorological data for use by personnel using HPAC, 2) determination of the best performing medium-range model forecast for the 12 - 48 hour timeframe and 3) provision of real-time help-desk support to users regarding acquisition and use of weather in HPAC CA applications. The normal meteorology team operations were expanded during a recent modeling project which took place during the 2006 Winter Olympic Games. The meteorology team took advantage of special weather observation datasets available in the domain of the Winter Olympic venues and undertook a project to improve weather modeling at high resolution. The varied and complex terrain provided a special challenge to the
NASA Astrophysics Data System (ADS)
Mergili, Martin; Schneider, Demian; Andres, Norina; Worni, Raphael; Gruber, Fabian; Schneider, Jean F.
2010-05-01
Lake outburst floods can evolve from complex process chains, such as avalanches of rock or ice that produce flood waves in a lake, which may overtop and eventually breach glacial, morainic, landslide, or artificial dams. Rising lake levels can lead to progressive incision and destabilization of a dam, to enhanced ground water flow (piping), or even to hydrostatic failure of ice dams, which can cause sudden outflow of accumulated water. These events often have a highly destructive potential because a large amount of water is released in a short time, with a high capacity to erode loose debris, leading to a powerful debris flow with a long travel distance. The best-known example of a lake outburst flood is the Vajont event (Northern Italy, 1963), where a landslide rushed into an artificial lake which spilled over and caused a flood leading to almost 2000 fatalities. Hazards from the failure of landslide dams are often (not always) fairly manageable: most breaches occur in the first few days or weeks after the landslide event, and the rapid construction of a spillway - though problematic - has resolved some hazardous situations (e.g. in the case of the Hattian landslide in 2005 in Pakistan). Older dams, like the Usoi dam (Lake Sarez) in Tajikistan, are usually fairly stable, though landslides into the lakes may create flood waves that overtop and eventually weaken the dams. The analysis and mitigation of glacial lake outburst flood (GLOF) hazard remains a challenge. A number of GLOFs resulting in fatalities and severe damage have occurred during the previous decades, particularly in the Himalayas and in the mountains of Central Asia (Pamir, Tien Shan). The source area is usually far away from the area of impact and events occur at very long intervals or as singularities, so that the population at risk is usually not prepared. Even though potentially hazardous lakes can be identified relatively easily with remote sensing and field work, modeling and predicting of GLOFs (and also
Petersen, Mark D.; Zeng, Yuehua; Haller, Kathleen M.; McCaffrey, Robert; Hammond, William C.; Bird, Peter; Moschetti, Morgan; Shen, Zhengkang; Bormann, Jayne; Thatcher, Wayne
2014-01-01
The 2014 National Seismic Hazard Maps for the conterminous United States incorporate additional uncertainty in the fault slip-rate parameters that control earthquake-activity rates, beyond what was applied in previous versions of the hazard maps. This additional uncertainty is accounted for by new geodesy- and geology-based slip-rate models for the Western United States. Models that were considered include an updated geologic model based on expert opinion and four combined inversion models informed by both geologic and geodetic input. The two block models considered indicate significantly higher slip rates than the expert-opinion model and the two fault-based combined inversion models. For the hazard maps, we apply 20 percent weight, with equal weighting, for the two fault-based models. Off-fault geodetic-based models were not considered in this version of the maps. Resulting changes to the hazard maps are generally less than 0.05 g (acceleration of gravity). Future research will improve the maps and interpret differences between the new models.
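The multi-model weighting described above amounts to a logic-tree combination of alternative slip-rate estimates. A minimal sketch of such a weighted average follows; the model names, rates, and the uniform weights are hypothetical placeholders, not the values used in the 2014 maps.

```python
def weighted_slip_rate(model_rates, weights):
    """Combine alternative slip-rate estimates (mm/yr) for one fault
    using logic-tree branch weights that must sum to 1.
    All inputs here are illustrative, not published values."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(model_rates[m] * weights[m] for m in model_rates)

# Hypothetical example: one geologic model, two block models,
# and two fault-based inversion models, weighted equally.
rates = {"geologic": 1.2, "block_a": 2.0, "block_b": 1.8,
         "inversion_a": 1.1, "inversion_b": 1.3}
weights = {m: 0.2 for m in rates}
combined = weighted_slip_rate(rates, weights)
```

The combined rate then feeds the earthquake-activity-rate calculation in place of any single model's estimate.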
Financial Distress Prediction Using Discrete-time Hazard Model and Rating Transition Matrix Approach
NASA Astrophysics Data System (ADS)
Tsai, Bi-Huei; Chang, Chih-Huei
2009-08-01
Previous studies used a constant cut-off indicator to distinguish distressed firms from non-distressed ones in one-stage prediction models. However, the distress cut-off indicator must shift with economic prosperity rather than remain fixed over time. This study focuses on Taiwanese listed firms and develops financial distress prediction models based on a two-stage method. First, this study employs firm-specific financial ratios and market factors to measure the probability of financial distress based on discrete-time hazard models. Second, this paper focuses on macroeconomic factors and applies the rating transition matrix approach to determine the distress cut-off indicator. The prediction models are developed using a training sample from 1987 to 2004, and their levels of accuracy are compared on a test sample from 2005 to 2007. As for the one-stage prediction model, the model incorporating macroeconomic factors does not perform better than that without macroeconomic factors. This suggests that accuracy is not improved for one-stage models that pool the firm-specific and macroeconomic factors together. As for the two-stage models, the negative credit cycle index implies worse economic conditions during the test period, so the distress cut-off point is adjusted upward based on this negative credit cycle index. After the two-stage models employ the adjusted cut-off point to discriminate distressed firms from non-distressed ones, their misclassification error becomes lower than that of the one-stage models. The two-stage models presented in this paper have incremental usefulness in predicting financial distress.
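A discrete-time hazard model of the kind used in the first stage can be sketched as a logit on firm-specific covariates, with the second stage shifting the classification cut-off instead of holding it constant. The coefficients and cut-off values below are illustrative placeholders, not estimates from the Taiwanese sample.

```python
import math

def discrete_hazard(x, beta, beta0=0.0):
    """Discrete-time hazard: probability that a firm becomes distressed
    in period t, given survival to t, under a logit link.
    Coefficients beta/beta0 are illustrative, not fitted values."""
    z = beta0 + sum(b * xi for b, xi in zip(beta, x))
    return 1.0 / (1.0 + math.exp(-z))

def classify(hazard_prob, cutoff):
    """Second-stage idea: flag a firm as distressed when its hazard
    exceeds a cut-off that can be shifted with the credit-cycle index
    rather than held fixed over time."""
    return hazard_prob >= cutoff
```

In a downturn the cut-off would be raised (as the paper does via the rating transition matrix), so the same hazard probability may be classified differently in different phases of the cycle.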
Numerical modelling for real-time forecasting of marine oil pollution and hazard assessment
NASA Astrophysics Data System (ADS)
De Dominicis, Michela; Pinardi, Nadia; Bruciaferri, Diego; Liubartseva, Svitlana
2015-04-01
(MEDESS4MS) system, an integrated operational multi-model oil spill prediction service that can be used by different users to run simulations of oil spills at sea, even in real time, through a web portal. The MEDESS4MS system gathers different oil spill modelling systems and data from meteorological and ocean forecasting systems, as well as operational information on response equipment, together with environmental and socio-economic sensitivity maps. MEDSLIK-II has also been used to provide an assessment of the hazard stemming from operational oil ship discharges in the Southern Adriatic and Northern Ionian (SANI) Seas. Operational pollution from ships constitutes a movable hazard whose magnitude changes dynamically as a result of a number of external parameters varying in space and time (temperature, wind, sea currents). Simulations of oil releases have been performed with realistic oceanographic currents, and the results show that the oil pollution hazard distribution has an inherent spatial and temporal variability related to the specific flow field variability.
Probabilistic forecasts of debris-flow hazard at the regional scale with a combination of models.
NASA Astrophysics Data System (ADS)
Malet, Jean-Philippe; Remaître, Alexandre
2015-04-01
Debris flows are one of the many active slope-forming processes in the French Alps, where rugged and steep slopes mantled by various slope deposits offer a great potential for triggering hazardous events. A quantitative assessment of debris-flow hazard requires the estimation, in a probabilistic framework, of the spatial probability of occurrence of source areas, the spatial probability of runout areas, the temporal frequency of events, and their intensity. The main objective of this research is to propose a pipeline for the estimation of these quantities at the regional scale using a chain of debris-flow models. The work uses the experimental site of the Barcelonnette Basin (South French Alps), where 26 active torrents have produced more than 150 debris-flow events since 1850, to develop and validate the methodology. First, a susceptibility assessment is performed to identify the debris-flow-prone source areas. The most frequently used approach is the combination of environmental factors with GIS procedures and statistical techniques, with or without the integration of detailed event inventories. Based on a 5 m DEM and its derivatives, and on information on slope lithology, engineering soils, and landcover, the possible source areas are identified with a statistical logistic regression model. The performance of the statistical model is evaluated against the observed distribution of debris-flow events recorded after 1850 in the study area. The source areas in the three most active torrents (Riou-Bourdoux, Faucon, Sanières) are well identified by the model. Results are less convincing for three other active torrents (Bourget, La Valette and Riou-Chanal); this could be related to the type of debris-flow triggering mechanism, as the model seems to better spot the open-slope debris-flow source areas (e.g. scree slopes) but appears to be less efficient for the identification of landslide-induced debris flows. Second, a susceptibility assessment is performed to estimate the possible runout distance
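A common way to evaluate a susceptibility model against an event inventory, as done here with the post-1850 record, is a rank-based AUC that measures how well the model scores event cells above non-event cells. The sketch below is a generic implementation for illustration; the abstract does not state which validation statistic the authors actually used.

```python
def auc(scores, labels):
    """Rank-based AUC for validating susceptibility scores against an
    inventory (label 1 = cell produced a debris flow, 0 = it did not).
    Equivalent to the Mann-Whitney U statistic scaled to [0, 1];
    ties receive half credit."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC near 1 would correspond to the well-identified torrents (Riou-Bourdoux, Faucon, Sanières), while values closer to 0.5 would reflect the weaker results reported for landslide-induced debris flows.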
NASA Technical Reports Server (NTRS)
Butler, David R.; Walsh, Stephen J.; Brown, Daniel G.
1991-01-01
Methods are described for using Landsat Thematic Mapper digital data and digital elevation models for the display of natural hazard sites in a mountainous region of northwestern Montana, USA. Hazard zones can be easily identified on the three-dimensional images. Proximity of facilities such as highways and building locations to hazard sites can also be easily displayed. A temporal sequence of Landsat TM (or similar) satellite data sets could also be used to display landscape changes associated with dynamic natural hazard processes.
Modeling the combustion behavior of hazardous waste in a rotary kiln incinerator.
Yang, Yongxiang; Pijnenborg, Marc J A; Reuter, Markus A; Verwoerd, Joep
2005-01-01
Hazardous wastes have complex physical forms and chemical compositions and are normally incinerated in rotary kilns for safe disposal and energy recovery. In the rotary kiln, the multifeed stream and wide variation of thermal, physical, and chemical properties of the wastes cause the incineration system to be highly heterogeneous, with severe temperature fluctuations and unsteady combustion chemistry. Incomplete combustion is often the consequence, and the process is difficult to control. In this article, modeling of the waste combustion using computational fluid dynamics (CFD) is described. Through CFD simulation, gas flow and mixing, turbulent combustion, and heat transfer inside the incinerator were predicted and visualized. As the first step, the waste in its various forms was modeled as a hydrocarbon-based virtual fuel mixture. The combustion of the simplified waste was then simulated with a seven-gas combustion model within a CFD framework. Comparison was made with a previous global three-gas combustion model, from which no chemical behavior can be derived. The distribution of temperature and chemical species was investigated. The waste combustion model was validated with temperature measurements. Various operating conditions and their influence on incineration performance were then simulated. Through this research, a better process understanding and potential optimization of the design were attained.
NASA Astrophysics Data System (ADS)
Peresan, A.; Panza, G. F.; Sabadini, R.; Barzaghi, R.; Amodio, A.; Bianco, G.
2009-12-01
A new approach to seismic hazard assessment is illustrated that, based on the available knowledge of the physical properties of the Earth structure and of seismic sources, as well as on geophysical forward modeling, allows for a time-dependent definition of the seismic input. According to the proposed approach, a fully formalized system integrating Earth Observation data and new advanced methods in seismological and geophysical data analysis is currently under development in the framework of the Pilot Project SISMA, funded by the Italian Space Agency (ASI). The synergistic use of geodetic Earth Observation (EO) data and Geophysical Forward Modeling (GFM) deformation maps at the national scale complements the space- and time-dependent information provided by real-time monitoring of seismic flow (performed by means of the earthquake prediction algorithms CN and M8S), so as to permit the identification and routine updating of alerted areas. At the small spatial scale (tens of km) of the seismogenic nodes identified by pattern recognition analysis, both GNSS (Global Navigation Satellite System) and SAR (Synthetic Aperture Radar) techniques, coupled with expressly developed models for inter-seismic phases, allow us to retrieve the deformation style and stress evolution within the seismogenic areas. The displacement fields obtained from EO data provide the input for the geophysical modeling, which indicates whether a specific fault is in a "critical state". The scenarios of expected ground motion associated with the alerted areas are then defined by means of full-waveform modeling, based on the possibility of computing synthetic seismograms by the modal summation technique. In this way a set of deterministic scenarios of ground motion, which refers to the time interval when a strong event is likely to occur within the alerted area, can be defined at both national and local scales. The considered integrated approach opens new routes in understanding the
Risk assessment framework of fate and transport models applied to hazardous waste sites
Hwang, S.T.
1993-06-01
Risk assessment is an increasingly important part of the decision-making process in the cleanup of hazardous waste sites. Despite guidelines from regulatory agencies and considerable research efforts to reduce uncertainties in risk assessments, many issues remain unanswered. This paper presents new research results pertaining to fate and transport models, which will be useful in estimating exposure concentrations and will help reduce uncertainties in risk assessment. These developments include approaches for estimating (1) the degree of emissions and concentration levels of volatile pollutants during the use of contaminated water, (2) the absorption of organic chemicals in the soil matrix through the skin, and (3) steady-state, near-field contaminant concentrations in the aquifer within a waste boundary.
NASA Astrophysics Data System (ADS)
Patra, A. K.; Connor, C.; Webley, P.; Jones, M.; Charbonnier, S. J.; Connor, L.; Gallo, S.; Bursik, M. I.; Valentine, G.; Hughes, C. G.; Aghakhani, H.; Renschler, C. S.; Kosar, T.
2014-12-01
We report here on an effort to improve the sustainability, robustness, and usability of the core modeling and simulation tools housed in the collaboratory VHub.org and used in the study of complex volcanic behavior. In particular, we focus on tools that support large-scale mass flows (TITAN2D), ash deposition/transport and dispersal (Tephra2 and PUFF), and lava flows (Lava2). These tools have become very popular in the community, especially due to the availability of an online usage modality. The redevelopment of the tools to take advantage of new hardware and software advances was a primary thrust of the effort. However, as we started work we reoriented the effort to also take advantage of significant new opportunities for supporting the complex workflows and use of distributed data resources that will enable effective and efficient hazard analysis.
Modelling the impacts of coastal hazards on land-use development
NASA Astrophysics Data System (ADS)
Ramirez, J.; Vafeidis, A. T.
2009-04-01
Approximately 10% of the world's population live in close proximity to the coast and are potentially susceptible to tropical or extra-tropical storm-surge events. These events will be exacerbated by projected sea-level rise (SLR) in the 21st century. Accelerated SLR is one of the more certain impacts of global warming and can have major effects on humans and ecosystems. Of particular vulnerability are densely populated coastal urban centres containing globally important commercial resources, with assets in the billions of USD. Moreover, the rates of growth of coastal populations, which are reported to be faster than the global mean, are leading to increased human exposure to coastal hazards. Consequently, potential impacts of coastal hazards can be significant in the future and will depend on various factors, but actual impacts can be considerably reduced by appropriate human decisions on coastal land-use management. At the regional scale, it is therefore necessary to identify which coastal areas are vulnerable to these events and to explore potential long-term responses reflected in land usage. Land-use change modelling is a technique which has been extensively used in recent years for studying the processes and mechanisms that govern the evolution of land use, and which can potentially provide valuable information related to the future coastal development of regions that are vulnerable to physical forcings. Although studies have utilized land-use classification maps to determine the impact of sea-level rise, few have used land-use projections to make these assessments, and none have considered the adaptive behaviour of coastal dwellers exposed to hazards. In this study a land-use change model based on artificial neural networks (ANN) was employed for predicting coastal urban and agricultural development. The model uses as inputs a series of spatial layers, which include information on population distribution, transportation networks, existing urban centres, and
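An ANN land-use change model of the kind described maps spatial predictor layers for each cell to a probability of conversion (e.g. to urban use). A minimal forward pass of such a network might look like the following; the weights are untrained placeholders, not the model's actual parameters, and the real model would be trained on historical land-use transitions.

```python
import math

def mlp_forward(x, w1, b1, w2, b2):
    """One hidden layer with tanh activation and a sigmoid output,
    interpreted as the probability that a cell converts to urban use.
    x holds per-cell predictors (e.g. population density, distance to
    roads, distance to existing urban centres); weights are placeholders."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(w1, b1)]
    z = sum(w * h for w, h in zip(w2, hidden)) + b2
    return 1.0 / (1.0 + math.exp(-z))
```

In a land-use simulation, this probability would be computed for every cell each time step, with conversion applied to the highest-probability cells subject to projected demand.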
Coupling Radar Rainfall Estimation and Hydrological Modelling For Flash-flood Hazard Mitigation
NASA Astrophysics Data System (ADS)
Borga, M.; Creutin, J. D.
Flood risk mitigation is accomplished through managing either or both the hazard and vulnerability. Flood hazard may be reduced through structural measures which alter the frequency of flood levels in the area. The vulnerability of a community to flood loss can be mitigated through changing or regulating land use and through flood warning and effective emergency response. When dealing with flash-flood hazard, it is generally accepted that the most effective way (and in many instances the only affordable one in a sustainable perspective) to mitigate the risk is by reducing the vulnerability of the involved communities, in particular by implementing flood warning systems and community self-help programs. However, both the inherent characteristics of the atmospheric and hydrologic processes involved in flash-flooding and the changing societal needs provide a tremendous challenge to traditional flood forecasting and warning concepts. In fact, the targets of these systems are traditionally localised, like urbanised sectors or hydraulic structures. Given the small spatial scale that characterises flash floods and the development of dispersed urbanisation, transportation, green tourism and water sports, human lives and property are exposed to flash flood risk in a scattered manner. This must be taken into consideration in flash flood warning strategies, and the investigated region should be considered as a whole, with every section of the drainage network a potential target for hydrological warnings. Radar technology offers the potential to provide information describing rain intensities almost continuously in time and space. Recent research results indicate that coupling radar information to distributed hydrologic modelling can provide hydrologic forecasts at all potentially flooded points of a region. Nevertheless, very few flood warning services use radar data more than on a qualitative basis. After a short review of current understanding in this area, two
NASA Astrophysics Data System (ADS)
Chiang, S. H.; Chang, K. T.; Chen, Y. C.; Chen, C. F.
2014-12-01
The study proposes an integrated landslide-runout model, iLIR-w (Integrated Landslide Initiation prediction and landslide Runout simulation at Watershed level), to assess landslide hazard during typhoons. For rainfall-induced landslides, many landslide models have focused on the prediction of landslide locations, but few have incorporated the prediction of landslide timing and landslide runouts in a single modeling framework. iLIR-w combines an integrated landslide model for predicting shallow landslides with a watershed-scale runout simulation to simulate the coupled processes related to landslide hazard. The study developed the model in a watershed in southern Taiwan, using landslide inventories prepared after eight historical typhoon events (2001-2008). The study then tested iLIR-w by incorporating typhoon rainfall forecasts from the Taiwan Cooperative Precipitation Ensemble Forecast Experiment (TAPEX) to issue landslide hazard early warnings 6 h, 12 h, 24 h, and 48 h before the arrival of Typhoon Morakot, which seriously damaged southern Taiwan in 2009. The model performs reasonably well in predicting landslide locations, timing, and runouts. It is therefore expected to be useful for landslide hazard prevention and can be applied to other watersheds with similar environments, provided that reliable model parameters are available.
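Early-warning schemes for rainfall-induced shallow landslides often compare forecast rainfall against a power-law intensity-duration threshold, I = a * D^b. The sketch below shows that generic criterion only for illustration; it is not the triggering model embedded in iLIR-w, and the a, b values are arbitrary, not calibrated for the Taiwanese watershed.

```python
def exceeds_id_threshold(intensity_mm_h, duration_h, a=10.0, b=-0.5):
    """Check a forecast rainfall pulse against a power-law
    intensity-duration threshold I = a * D**b (mm/h vs hours).
    Parameters a and b are illustrative placeholders; operational
    thresholds are fitted to local landslide-triggering rainfall records."""
    return intensity_mm_h >= a * duration_h ** b
```

Run against each forecast lead time (6, 12, 24, 48 h), such a check would flag the time windows in which landslide initiation becomes plausible, after which a runout simulation could map the exposed areas.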
NASA Astrophysics Data System (ADS)
Allen, S. K.; Schneider, D.; Owens, I. F.
2009-03-01
Flood and mass movements originating from glacial environments are particularly devastating in populated mountain regions of the world, but in the remote Mount Cook region of New Zealand's Southern Alps minimal attention has been given to these processes. Glacial environments are characterized by high mass turnover and combined with changing climatic conditions, potential problems and process interactions can evolve rapidly. Remote sensing based terrain mapping, geographic information systems and flow path modelling are integrated here to explore the extent of ice avalanche, debris flow and lake flood hazard potential in the Mount Cook region. Numerous proglacial lakes have formed during recent decades, but well vegetated, low gradient outlet areas suggest catastrophic dam failure and flooding is unlikely. However, potential impacts from incoming mass movements of ice, debris or rock could lead to dam overtopping, particularly where lakes are forming directly beneath steep slopes. Physically based numerical modeling with RAMMS was introduced for