King, Michael; Marston, Louise; Švab, Igor; Maaroos, Heidi-Ingrid; Geerlings, Mirjam I.; Xavier, Miguel; Benjamin, Vicente; Torres-Gonzalez, Francisco; Bellon-Saameno, Juan Angel; Rotar, Danica; Aluoja, Anu; Saldivia, Sandra; Correa, Bernardo; Nazareth, Irwin
2011-01-01
Background Little is known about the risk of progression to hazardous alcohol use in people currently drinking at safe limits. We aimed to develop a prediction model (predictAL) for the development of hazardous drinking in safe drinkers. Methods A prospective cohort study of adult general practice attendees in six European countries and Chile followed up over 6 months. We recruited 10,045 attendees between April 2003 and February 2005. 6193 European and 2462 Chilean attendees recorded AUDIT scores below 8 in men and 5 in women at recruitment and were used in modelling risk. 38 risk factors were measured to construct a risk model for the development of hazardous drinking using stepwise logistic regression. The model was corrected for overfitting and tested in an external population. The main outcome was hazardous drinking defined by an AUDIT score ≥8 in men and ≥5 in women. Results 69.0% of attendees were recruited, of whom 89.5% participated again after six months. The risk factors in the final predictAL model were sex, age, country, baseline AUDIT score, panic syndrome and lifetime alcohol problem. The predictAL model's average c-index across all six European countries was 0.839 (95% CI 0.805, 0.873). The Hedges' g effect size for the difference in log odds of predicted probability between safe drinkers in Europe who subsequently developed hazardous alcohol use and those who did not was 1.38 (95% CI 1.25, 1.51). External validation of the algorithm in Chilean safe drinkers resulted in a c-index of 0.781 (95% CI 0.717, 0.846) and Hedges' g of 0.68 (95% CI 0.57, 0.78). Conclusions The predictAL risk model for development of hazardous consumption in safe drinkers compares favourably with risk algorithms for disorders in other medical settings and can be a useful first step in prevention of alcohol misuse. PMID:21853028
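The statistics reported above (the c-index and Hedges' g on the log odds of predicted probability) can be illustrated with a short sketch; this is not the authors' code, and the data, predictors, and model below are hypothetical stand-ins.

```python
# Illustrative sketch (not the authors' code): a logistic risk model scored by
# the c-index (ROC AUC) and by Hedges' g on the log odds of predicted probability.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def hedges_g(a, b):
    """Bias-corrected standardized mean difference between two groups."""
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * np.var(a, ddof=1) + (nb - 1) * np.var(b, ddof=1))
                        / (na + nb - 2))
    d = (np.mean(a) - np.mean(b)) / pooled_sd          # Cohen's d
    return d * (1 - 3 / (4 * (na + nb) - 9))           # small-sample correction

# X: hypothetical baseline risk factors (sex, age, AUDIT score, ...);
# y: 1 if hazardous drinking at follow-up, 0 otherwise.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 1, 500) > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
p = model.predict_proba(X)[:, 1]
log_odds = np.log(p / (1 - p))

print("c-index:", roc_auc_score(y, p))                 # discrimination
print("Hedges' g:", hedges_g(log_odds[y == 1], log_odds[y == 0]))
```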
Tracking Temporal Hazard in the Human Electroencephalogram Using a Forward Encoding Model
2018-01-01
Abstract Human observers automatically extract temporal contingencies from the environment and predict the onset of future events. Temporal predictions are modeled by the hazard function, which describes the instantaneous probability for an event to occur given it has not occurred yet. Here, we tackle the question of whether and how the human brain tracks continuous temporal hazard on a moment-to-moment basis, and how flexibly it adjusts to strictly implicit variations in the hazard function. We applied an encoding-model approach to human electroencephalographic data recorded during a pitch-discrimination task, in which we implicitly manipulated temporal predictability of the target tones by varying the interval between cue and target tone (i.e. the foreperiod). Critically, temporal predictability either was driven solely by the passage of time (resulting in a monotonic hazard function) or was modulated to increase at intermediate foreperiods (resulting in a modulated hazard function with a peak at the intermediate foreperiod). Forward-encoding models trained to predict the recorded EEG signal from different temporal hazard functions were able to distinguish between experimental conditions, showing that implicit variations of temporal hazard bear tractable signatures in the human electroencephalogram. Notably, this tracking signal was reconstructed best from the supplementary motor area, underlining this area’s link to cognitive processing of time. Our results underline the relevance of temporal hazard to cognitive processing and show that the predictive accuracy of the encoding-model approach can be utilized to track abstract time-resolved stimuli. PMID:29740594
Asano, Junichi; Hirakawa, Akihiro
2017-01-01
The Cox proportional hazards cure model is a survival model incorporating a cure rate with the assumption that the population contains both uncured and cured individuals. It contains a logistic regression for the cure rate, and a Cox regression to estimate the hazard for uncured patients. A single predictive model for both the cure and hazard can be developed by using a cure model that simultaneously predicts the cure rate and hazards for uncured patients; however, model selection is a challenge because of the lack of a measure for quantifying the predictive accuracy of a cure model. Recently, we developed an area under the receiver operating characteristic curve (AUC) for determining the cure rate in a cure model (Asano et al., 2014), but the hazards measure for uncured patients was not resolved. In this article, we propose novel C-statistics that are weighted by the patients' cure status (i.e., cured, uncured, or censored cases) for the cure model. The operating characteristics of the proposed C-statistics and their confidence interval were examined by simulation analyses. We also illustrate methods for predictive model selection and for further interpretation of variables using the proposed AUCs and C-statistics via application to breast cancer data.
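A minimal sketch of the model structure described here, assuming an exponential baseline hazard for the latency part (the article's estimator and data are not reproduced): the population survival is a mixture of a logistic cure probability and a proportional-hazards survival for uncured patients.

```python
# Minimal sketch of a mixture cure model:
# S_pop(t | x, z) = pi(z) + (1 - pi(z)) * S_u(t | x)
import numpy as np

def cure_probability(z, gamma):
    """Logistic incidence part: probability of being cured given covariates z."""
    return 1.0 / (1.0 + np.exp(-(z @ gamma)))

def uncured_survival(t, x, beta, lam=0.1):
    """Proportional-hazards latency part with an assumed exponential baseline hazard."""
    return np.exp(-lam * t * np.exp(x @ beta))

def population_survival(t, x, z, beta, gamma):
    pi = cure_probability(z, gamma)
    return pi + (1.0 - pi) * uncured_survival(t, x, beta)

# Hypothetical coefficients and one patient's covariates
beta = np.array([0.4, -0.2])      # effect on the hazard of the uncured
gamma = np.array([1.0, 0.5])      # effect on the cure probability
x = z = np.array([0.3, 1.2])
print(population_survival(t=24.0, x=x, z=z, beta=beta, gamma=gamma))
```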
NASA Technical Reports Server (NTRS)
Kaplan, Michael L.; Lin, Yuh-Lang
2005-01-01
The purpose of the research was to develop and test improved hazard algorithms that could result in the development of sensors that are better able to anticipate potentially severe atmospheric turbulence, which affects aircraft safety. The research focused on employing numerical simulation models to develop improved algorithms for the prediction of aviation turbulence. This involved producing both research simulations and real-time simulations of environments predisposed to moderate and severe aviation turbulence. The research resulted in the following fundamental advancements toward the aforementioned goal: 1) very high resolution simulations of turbulent environments indicated how predictive hazard indices could be improved, resulting in a candidate hazard index that indicated the potential for improvement over existing operational indices, 2) a real-time turbulence hazard numerical modeling system was improved by correcting deficiencies in its simulation of moist convection and 3) the same real-time predictive system was tested by running the code twice daily and the hazard prediction indices updated and improved. Additionally, a simple validation study was undertaken to determine how well a real-time hazard predictive index performed when compared to commercial pilot observations of aviation turbulence. Simple statistical analyses were performed in this validation study, indicating potential skill in employing the hazard prediction index to predict regions of varying intensities of aviation turbulence. Data sets from a research numerical model were provided to NASA for use in a large eddy simulation numerical model. A NASA contractor report and several refereed journal articles were prepared and submitted for publication during the course of this research.
Spatial prediction of landslide hazard using discriminant analysis and GIS
Peter V. Gorsevski; Paul Gessler; Randy B. Foltz
2000-01-01
Environmental attributes relevant for spatial prediction of landslides triggered by rain and snowmelt events were derived from a digital elevation model (DEM). Those data, in conjunction with statistics and a geographic information system (GIS), provided a detailed basis for spatial prediction of landslide hazard. The spatial prediction of landslide hazard in this paper is...
Time prediction of failure a type of lamps by using general composite hazard rate model
NASA Astrophysics Data System (ADS)
Riaman; Lesmana, E.; Subartini, B.; Supian, S.
2018-03-01
This paper discusses the basic survival model estimates used to obtain the average predicted value of lamp failure time. The estimate is for a parametric model, the General Composite Hazard Rate Model. The random time variable model used as the basis is the exponential distribution model, which has a constant hazard function. In this case, we discuss an example of survival model estimation for a composite hazard function, using an exponential model as its basis. The model is estimated by estimating its parameters through the construction of the survival function and the empirical cumulative function. The model obtained is then used to predict the average failure time for this type of lamp. The data are grouped into several intervals, the average failure value is computed for each interval, and the average failure time of the model is then calculated on each interval; the p-value obtained from the test result is 0.3296.
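A hedged sketch of the simplest component described above: an exponential (constant-hazard) failure-time model fitted to grouped lamp-failure data and used to predict the mean time to failure. The interval data and parameter values below are made up.

```python
# Exponential failure-time model fitted to grouped (interval) data; values are hypothetical.
import numpy as np

# Interval midpoints (hours) and observed failure counts per interval
midpoints = np.array([250.0, 750.0, 1250.0, 1750.0, 2250.0])
counts = np.array([40, 25, 17, 11, 7])

# MLE of the constant hazard rate for exponential data: lambda = n / sum(t)
lam = counts.sum() / np.sum(counts * midpoints)
mean_failure_time = 1.0 / lam                  # E[T] for an exponential model

def survival(t):
    """S(t) = exp(-lam * t); 1 - S(t) can be compared with the empirical cumulative function."""
    return np.exp(-lam * t)

print(f"hazard rate = {lam:.5f} per hour, mean failure time = {mean_failure_time:.0f} h")
print("S(1000 h) =", survival(1000.0), " F(1000 h) =", 1.0 - survival(1000.0))
```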
Agent-based simulation for human-induced hazard analysis.
Bulleit, William M; Drewek, Matthew W
2011-02-01
Terrorism could be treated as a hazard for design purposes. For instance, the terrorist hazard could be analyzed in a manner similar to the way that seismic hazard is handled. No matter how terrorism is dealt with in the design of systems, predictions of the frequency and magnitude of the hazard will be required. And, if the human-induced hazard is to be designed for in a manner analogous to natural hazards, then the predictions should be probabilistic in nature. The model described in this article is a prototype model that used agent-based modeling (ABM) to analyze terrorist attacks. The basic approach in this article of using ABM to model human-induced hazards has been preliminarily validated in the sense that the attack magnitudes seem to be power-law distributed and attacks occur mostly in regions where high levels of wealth pass through, such as transit routes and markets. The model developed in this study indicates that ABM is a viable approach to modeling socioeconomic-based infrastructure systems for engineering design to deal with human-induced hazards. © 2010 Society for Risk Analysis.
Modelling multi-hazard hurricane damages on an urbanized coast with a Bayesian Network approach
van Verseveld, H.C.W.; Van Dongeren, A. R.; Plant, Nathaniel G.; Jäger, W.S.; den Heijer, C.
2015-01-01
Hurricane flood impacts to residential buildings in coastal zones are caused by a number of hazards, such as inundation, overflow currents, erosion, and wave attack. However, traditional hurricane damage models typically make use of stage-damage functions, where the stage is related to flooding depth only. Moreover, these models are deterministic and do not consider the large amount of uncertainty associated with both the processes themselves and with the predictions. This uncertainty becomes increasingly important when multiple hazards (flooding, wave attack, erosion, etc.) are considered simultaneously. This paper focuses on establishing relationships between observed damage and multiple hazard indicators in order to make better probabilistic predictions. The concept consists of (1) determining Local Hazard Indicators (LHIs) from a hindcasted storm with use of a nearshore morphodynamic model, XBeach, and (2) coupling these LHIs and building characteristics to the observed damages. We chose a Bayesian Network approach in order to make this coupling and used the LHIs ‘Inundation depth’, ‘Flow velocity’, ‘Wave attack’, and ‘Scour depth’ to represent flooding, current, wave impacts, and erosion related hazards. The coupled hazard model was tested against four thousand damage observations from a case site at the Rockaway Peninsula, NY, that was impacted by Hurricane Sandy in late October, 2012. The model was able to accurately distinguish ‘Minor damage’ from all other outcomes 95% of the time and could distinguish areas that were affected by the storm, but not severely damaged, 68% of the time. For the most heavily damaged buildings (‘Major Damage’ and ‘Destroyed’), projections of the expected damage underestimated the observed damage. The model demonstrated that including multiple hazards doubled the prediction skill, with Log-Likelihood Ratio test (a measure of improved accuracy and reduction in uncertainty) scores between 0.02 and 0.17 when only one hazard is considered and a score of 0.37 when multiple hazards are considered simultaneously. The LHIs with the most predictive skill were ‘Inundation depth’ and ‘Wave attack’. The Bayesian Network approach has several advantages over the market-standard stage-damage functions: the predictive capacity of multiple indicators can be combined; probabilistic predictions can be obtained, which include uncertainty; and quantitative as well as descriptive information can be used simultaneously.
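The Log-Likelihood Ratio skill reported above can be illustrated with a small sketch; the paper's exact definition may differ, and the damage classes and predicted probabilities below are hypothetical.

```python
# Log-likelihood-ratio skill of probabilistic damage predictions versus a
# base-rate (climatological) forecast; data are hypothetical.
import numpy as np

def llr_skill(probs, outcomes):
    """Mean log-likelihood gain per observation over the base-rate forecast."""
    probs = np.asarray(probs)          # shape (n_obs, n_classes)
    outcomes = np.asarray(outcomes)    # integer class labels, shape (n_obs,)
    base_rates = np.bincount(outcomes, minlength=probs.shape[1]) / len(outcomes)
    ll_model = np.log(probs[np.arange(len(outcomes)), outcomes])
    ll_base = np.log(base_rates[outcomes])
    return np.mean(ll_model - ll_base)

# Hypothetical predictions for classes 0='Minor', 1='Major', 2='Destroyed'
probs = [[0.8, 0.15, 0.05], [0.2, 0.5, 0.3], [0.1, 0.3, 0.6], [0.7, 0.2, 0.1]]
outcomes = [0, 1, 2, 0]
print("LLR skill:", llr_skill(probs, outcomes))
```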
Savonitto, Stefano; Morici, Nuccia; Nozza, Anna; Cosentino, Francesco; Perrone Filardi, Pasquale; Murena, Ernesto; Morocutti, Giorgio; Ferri, Marco; Cavallini, Claudio; Eijkemans, Marinus Jc; Stähli, Barbara E; Schrieks, Ilse C; Toyama, Tadashi; Lambers Heerspink, H J; Malmberg, Klas; Schwartz, Gregory G; Lincoff, A Michael; Ryden, Lars; Tardif, Jean Claude; Grobbee, Diederick E
2018-01-01
To define the predictors of long-term mortality in patients with type 2 diabetes mellitus and recent acute coronary syndrome. A total of 7226 patients from a randomized trial, testing the effect on cardiovascular outcomes of the dual peroxisome proliferator-activated receptor agonist aleglitazar in patients with type 2 diabetes mellitus and recent acute coronary syndrome (AleCardio trial), were analysed. Median follow-up was 2 years. The independent mortality predictors were defined using Cox regression analysis. The predictive information provided by each variable was calculated as percent of total chi-square of the model. All-cause mortality was 4.0%, with cardiovascular death accounting for 73% of mortality. The mortality prediction model included N-terminal pro-B-type natriuretic peptide (adjusted hazard ratio = 1.68; 95% confidence interval = 1.51-1.88; 27% of prediction), lack of coronary revascularization (hazard ratio = 2.28; 95% confidence interval = 1.77-2.93; 18% of prediction), age (hazard ratio = 1.04; 95% confidence interval = 1.02-1.05; 15% of prediction), heart rate (hazard ratio = 1.02; 95% confidence interval = 1.01-1.03; 10% of prediction), glycated haemoglobin (hazard ratio = 1.11; 95% confidence interval = 1.03-1.19; 8% of prediction), haemoglobin (hazard ratio = 1.01; 95% confidence interval = 1.00-1.02; 8% of prediction), prior coronary artery bypass (hazard ratio = 1.61; 95% confidence interval = 1.11-2.32; 7% of prediction) and prior myocardial infarction (hazard ratio = 1.40; 95% confidence interval = 1.05-1.87; 6% of prediction). In patients with type 2 diabetes mellitus and recent acute coronary syndrome, mortality prediction is largely dominated by markers of cardiac, rather than metabolic, dysfunction.
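The idea of expressing each predictor's contribution as a percent of the model chi-square can be sketched as follows; this is not the trial's code, lifelines is assumed to be installed, its bundled Rossi recidivism dataset is used as a stand-in, and the Wald chi-square is used here where the authors may have used a likelihood-ratio partition.

```python
# Rank Cox-model predictors by their share of the total (Wald) chi-square.
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()                               # stand-in survival dataset
cph = CoxPHFitter()
cph.fit(df, duration_col="week", event_col="arrest")

wald_chi2 = (cph.params_ / cph.standard_errors_) ** 2
share = 100 * wald_chi2 / wald_chi2.sum()       # percent of total chi-square
print(share.sort_values(ascending=False).round(1))
print("hazard ratios:", cph.hazard_ratios_.round(2).to_dict())
```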
NASA Astrophysics Data System (ADS)
Fan, Linfeng; Lehmann, Peter; McArdell, Brian; Or, Dani
2017-03-01
Debris flows and landslides induced by heavy rainfall represent a ubiquitous and destructive natural hazard in steep mountainous regions. For debris flows initiated by shallow landslides, the prediction of the resulting pathways and associated hazard is often hindered by uncertainty in determining initiation locations, volumes and mechanical state of the mobilized debris (and by model parameterization). We propose a framework for linking a simplified physically-based debris flow runout model with a novel Landslide Hydro-mechanical Triggering (LHT) model to obtain a coupled landslide-debris flow susceptibility and hazard assessment. We first compared the simplified debris flow model of Perla (1980) with a state-of-the-art continuum-based model (RAMMS) and with an empirical model of Rickenmann (1999) at the catchment scale. The results indicate that predicted runout distances by the Perla model are in reasonable agreement with inventory measurements and with the other models. Predictions of localized shallow landslides by the LHT model provide information on the water content of the released mass. To incorporate the effects of water content and flow viscosity as provided by LHT on debris flow runout, we adapted the Perla model. The proposed integral link between landslide triggering susceptibility quantified by LHT and subsequent debris flow runout hazard calculation using the adapted Perla model provides a spatially and temporally resolved framework for real-time hazard assessment at the catchment scale or along critical infrastructure (roads, railroad lines).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harper, Bryan; Thomas, Dennis G.; Chikkagoudar, Satish
2015-06-04
The integration of rapid assays, large data sets, informatics and modeling can overcome current barriers in understanding nanomaterial structure-toxicity relationships by providing a weight-of-the-evidence mechanism to generate hazard rankings for nanomaterials. Here we present the use of a rapid, low-cost assay to perform screening-level toxicity evaluations of nanomaterials in vivo. Calculated EZ Metric scores, a combined measure of morbidity and mortality, were established at realistic exposure levels and used to develop a predictive model of nanomaterial toxicity. Hazard ranking and clustering analysis of 68 diverse nanomaterials revealed distinct patterns of toxicity related to both core composition and outermost surface chemistry of nanomaterials. The resulting clusters guided the development of a predictive model of gold nanoparticle toxicity to embryonic zebrafish. In addition, our findings suggest that risk assessments based on the size and core composition of nanomaterials alone may be wholly inappropriate, especially when considering complex engineered nanomaterials. These findings reveal the need to expeditiously increase the availability of quantitative measures of nanomaterial hazard and broaden the sharing of that data and knowledge to support predictive modeling. In addition, research should continue to focus on methodologies for developing predictive models of nanomaterial hazard based on sub-lethal responses to low dose exposures.
Karin Riley; Matthew Thompson; Peter Webley; Kevin D. Hyde
2017-01-01
Modeling has been used to characterize and map natural hazards and hazard susceptibility for decades. Uncertainties are pervasive in natural hazards analysis, including a limited ability to predict where and when extreme events will occur, with what consequences, and driven by what contributing factors. Modeling efforts are challenged by the intrinsic...
NASA Astrophysics Data System (ADS)
Lane, E. M.; Gillibrand, P. A.; Wang, X.; Power, W.
2013-09-01
Regional source tsunamis pose a potentially devastating hazard to communities and infrastructure on the New Zealand coast, but major events are very uncommon. This dichotomy of infrequent but potentially devastating hazards makes realistic assessment of the risk challenging. Here, we describe a method to derive a probabilistic assessment of the hazard posed by regional source tsunamis with an "Average Recurrence Interval" of 2,500 years. The method is applied to the east Auckland region of New Zealand. From an assessment of potential regional tsunamigenic events over 100,000 years, the inundation of the Auckland region from the worst 100 events is modelled using a hydrodynamic model, and probabilistic inundation depths on a 2,500-year time scale were determined. Tidal effects on the potential inundation were included by coupling the predicted wave heights with the probability density function of tidal heights at the inundation site. Results show that the more exposed northern section of the east coast and outer islands in the Hauraki Gulf face the greatest hazard from regional tsunamis in the Auckland region. Incorporating tidal effects into predictions of inundation reduced the predicted hazard compared to modelling all the tsunamis arriving at high tide, giving a more accurate hazard assessment on the specified time scale. This study presents the first probabilistic analysis of dynamic modelling of tsunami inundation for the New Zealand coast and as such provides the most comprehensive assessment of tsunami inundation of the Auckland region from regional source tsunamis available to date.
Caraviello, D Z; Weigel, K A; Gianola, D
2004-05-01
Predicted transmitting abilities (PTA) of US Jersey sires for daughter longevity were calculated using a Weibull proportional hazards sire model and compared with predictions from a conventional linear animal model. Culling data from 268,008 Jersey cows with first calving from 1981 to 2000 were used. The proportional hazards model included time-dependent effects of herd-year-season contemporary group and parity by stage of lactation interaction, as well as time-independent effects of sire and age at first calving. Sire variances and parameters of the Weibull distribution were estimated, providing heritability estimates of 4.7% on the log scale and 18.0% on the original scale. The PTA of each sire was expressed as the expected risk of culling relative to daughters of an average sire. Risk ratios (RR) ranged from 0.7 to 1.3, indicating that the risk of culling for daughters of the best sires was 30% lower than for daughters of average sires and nearly 50% lower than for daughters of the poorest sires. Sire PTA from the proportional hazards model were compared with PTA from a linear model similar to that used for routine national genetic evaluation of length of productive life (PL) using cross-validation in independent samples of herds. Models were compared using logistic regression of daughters' stayability to second, third, fourth, or fifth lactation on their sires' PTA values, with alternative approaches for weighting the contribution of each sire. Models were also compared using logistic regression of daughters' stayability to 36, 48, 60, 72, and 84 mo of life. The proportional hazards model generally yielded more accurate predictions according to these criteria, but differences in predictive ability between methods were smaller when using a Kullback-Leibler distance than with other approaches. Results of this study suggest that survival analysis methodology may provide more accurate predictions of genetic merit for longevity than conventional linear models.
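The model-comparison step described here (logistic regression of daughters' stayability on sire PTA) can be sketched with simulated data; the PTA values, sample sizes, and scoring choice below are hypothetical.

```python
# Compare two sets of sire PTA by how well each predicts daughters' stayability,
# using logistic regression scored by cross-validated log loss (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 2000
pta_hazards = rng.normal(1.0, 0.15, n)               # risk-ratio PTA from a Weibull PH model
pta_linear = pta_hazards + rng.normal(0, 0.1, n)     # noisier linear-model PTA
# Simulated stayability to second lactation, driven by the hazards PTA
stay2 = (rng.random(n) < 1 / (1 + np.exp(3 * (pta_hazards - 1)))).astype(int)

for name, pta in [("proportional hazards PTA", pta_hazards),
                  ("linear model PTA", pta_linear)]:
    score = cross_val_score(LogisticRegression(), pta.reshape(-1, 1), stay2,
                            scoring="neg_log_loss", cv=5).mean()
    print(f"{name}: mean CV log loss = {-score:.4f}")
```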
NASA Astrophysics Data System (ADS)
Koga-Vicente, A.; Friedel, M. J.
2010-12-01
Every year thousands of people are affected by floods and landslide hazards caused by rainstorms. The problem is more serious in tropical developing countries because of the susceptibility as a result of the high amount of available energy to form storms, and the high vulnerability due to poor economic and social conditions. Predictive models of hazards are important tools to manage this kind of risk. In this study, a comparison of two different modeling approaches was made for predicting hydrometeorological hazards in 12 cities on the coast of São Paulo, Brazil, from 1994 to 2003. In the first approach, an empirical multiple linear regression (MLR) model was developed and used; the second approach used a type of unsupervised nonlinear artificial neural network called a self-organized map (SOM). By using twenty-three independent variables of susceptibility (precipitation, soil type, slope, elevation, and regional atmospheric system scale) and vulnerability (distribution and total population, income and educational characteristics, poverty intensity, human development index), binary hazard responses were obtained. Model performance by cross-validation indicated that the respective MLR and SOM model accuracy was about 67% and 80%. Prediction accuracy can be improved by the addition of information, but the SOM approach is preferred because of sparse data and highly nonlinear relations among the independent variables.
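A minimal sketch of the cross-validation used to score the MLR approach (the SOM comparison is omitted), with synthetic data standing in for the twenty-three susceptibility and vulnerability variables.

```python
# Cross-validated classification accuracy of a multiple linear regression
# thresholded to a binary hazard response; data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 23))                 # 23 independent variables
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 300) > 0).astype(int)

acc = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    pred = LinearRegression().fit(X[train], y[train]).predict(X[test])
    acc.append(np.mean((pred > 0.5).astype(int) == y[test]))
print("cross-validated accuracy:", np.mean(acc))
```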
The Framework of a Coastal Hazards Model - A Tool for Predicting the Impact of Severe Storms
Barnard, Patrick L.; O'Reilly, Bill; van Ormondt, Maarten; Elias, Edwin; Ruggiero, Peter; Erikson, Li H.; Hapke, Cheryl; Collins, Brian D.; Guza, Robert T.; Adams, Peter N.; Thomas, Julie
2009-01-01
The U.S. Geological Survey (USGS) Multi-Hazards Demonstration Project in Southern California (Jones and others, 2007) is a five-year project (FY2007-FY2011) integrating multiple USGS research activities with the needs of external partners, such as emergency managers and land-use planners, to produce products and information that can be used to create more disaster-resilient communities. The hazards being evaluated include earthquakes, landslides, floods, tsunamis, wildfires, and coastal hazards. For the Coastal Hazards Task of the Multi-Hazards Demonstration Project in Southern California, the USGS is leading the development of a modeling system for forecasting the impact of winter storms threatening the entire Southern California shoreline from Pt. Conception to the Mexican border. The modeling system, run in real-time or with prescribed scenarios, will incorporate atmospheric information (that is, wind and pressure fields) with a suite of state-of-the-art physical process models (that is, tide, surge, and wave) to enable detailed prediction of currents, wave height, wave runup, and total water levels. Additional research-grade predictions of coastal flooding, inundation, erosion, and cliff failure will also be performed. Initial model testing, performance evaluation, and product development will be focused on a severe winter-storm scenario developed in collaboration with the Winter Storm Working Group of the USGS Multi-Hazards Demonstration Project in Southern California. Additional offline model runs and products will include coastal-hazard hindcasts of selected historical winter storms, as well as additional severe winter-storm simulations based on statistical analyses of historical wave and water-level data. The coastal-hazards model design will also be appropriate for simulating the impact of storms under various sea level rise and climate-change scenarios. The operational capabilities of this modeling system are designed to provide emergency planners with the critical information they need to respond quickly and efficiently and to increase public safety and mitigate damage associated with powerful coastal storms. For instance, high resolution local models will predict detailed wave heights, breaking patterns, and current strengths for use in warning systems for harbor-mouth navigation and densely populated coastal regions where beach safety is threatened. The offline applications are intended to equip coastal managers with the information needed to manage and allocate their resources effectively to protect sections of coast that may be most vulnerable to future severe storms.
Autonomous Soil Assessment System: A Data-Driven Approach to Planetary Mobility Hazard Detection
NASA Astrophysics Data System (ADS)
Raimalwala, K.; Faragalli, M.; Reid, E.
2018-04-01
The Autonomous Soil Assessment System predicts mobility hazards for rovers. Its development and performance are presented, with focus on its data-driven models, machine learning algorithms, and real-time sensor data fusion for predictive analytics.
Dynamic wake prediction and visualization with uncertainty analysis
NASA Technical Reports Server (NTRS)
Holforty, Wendy L. (Inventor); Powell, J. David (Inventor)
2005-01-01
A dynamic wake avoidance system utilizes aircraft and atmospheric parameters readily available in flight to model and predict airborne wake vortices in real time. A novel combination of algorithms allows for a relatively simple yet robust wake model to be constructed based on information extracted from a broadcast. The system predicts the location and movement of the wake based on the nominal wake model and correspondingly performs an uncertainty analysis on the wake model to determine a wake hazard zone (no fly zone), which comprises a plurality of wake planes, each moving independently from another. The system selectively adjusts dimensions of each wake plane to minimize spatial and temporal uncertainty, thereby ensuring that the actual wake is within the wake hazard zone. The predicted wake hazard zone is communicated in real time directly to a user via a realistic visual representation. In an example, the wake hazard zone is visualized on a 3-D flight deck display to enable a pilot to visualize or see a neighboring aircraft as well as its wake. The system substantially enhances the pilot's situational awareness and allows for a further safe decrease in spacing, which could alleviate airport and airspace congestion.
Risk Management and Physical Modelling for Mountainous Natural Hazards
NASA Astrophysics Data System (ADS)
Lehning, Michael; Wilhelm, Christian
Population growth and climate change cause rapid changes in mountainous regions, resulting in increased risks of floods, avalanches, debris flows and other natural hazards. X-events are of particular concern, since attempts to protect against them result in exponentially growing costs. In this contribution, we suggest an integral risk management approach to dealing with natural hazards that occur in mountainous areas. Using the example of a mountain pass road, which can be protected from the danger of an avalanche by engineering (galleries) and/or organisational (road closure) measures, we show the advantage of an optimal combination of both versus the traditional approach, which is to rely solely on engineering structures. Organisational measures become especially important for X-events because engineering structures cannot be designed for those events. However, organisational measures need a reliable and objective forecast of the hazard. Therefore, we further suggest that such forecasts should be developed using physical numerical modelling. We present the status of current approaches to using physical modelling to predict snow cover stability for avalanche warnings and peak runoff from mountain catchments for flood warnings. While detailed physical models can already predict peak runoff reliably, they are only used to support avalanche warnings. With increased process knowledge and computer power, current developments should lead to an enhanced role for detailed physical models in natural mountain hazard prediction.
Kim, Cheol-Hee; Park, Jin-Ho; Park, Cheol-Jin; Na, Jin-Gyun
2004-03-01
The Chemical Accidents Response Information System (CARIS) was developed at the Center for Chemical Safety Management in South Korea in order to track and predict the dispersion of hazardous chemicals in the case of an accident or terrorist attack involving chemical companies. The main objective of CARIS is to facilitate an efficient emergency response to hazardous chemical accidents by rapidly providing key information in the decision-making process. In particular, the atmospheric modeling system implemented in CARIS, which is composed of a real-time numerical weather forecasting model and an air pollution dispersion model, can be used as a tool to forecast concentrations and to provide a wide range of assessments associated with various hazardous chemicals in real time. This article introduces the components of CARIS and describes its operational modeling system. Some examples of the operational modeling system and its use for emergency preparedness are presented and discussed. Finally, this article evaluates the current numerical weather prediction model for Korea.
Liaw, Horng-Jang; Wang, Tzu-Ai
2007-03-06
Flash point is one of the major quantities used to characterize the fire and explosion hazard of liquids. Herein, a liquid with dissolved salt, as used in salt-distillation processes for separating close-boiling or azeotropic systems, is considered. The addition of salts to a liquid may reduce its fire and explosion hazard. In this study, we have modified a previously proposed model for predicting the flash point of miscible mixtures to extend its application to solvent/salt mixtures. This modified model was verified by comparison with experimental data for organic solvent/salt and aqueous-organic solvent/salt mixtures to confirm its efficacy in predicting the flash points of these mixtures. The experimental results confirm markedly greater increases in liquid flash point with the addition of inorganic salts than with supplementation with equivalent quantities of water. Based on this evidence, it appears reasonable to suggest potential application for the model in assessment of the fire and explosion hazard of solvent/salt mixtures and, further, that addition of inorganic salts may prove useful for hazard reduction in flammable liquids.
Multivariate Models for Prediction of Human Skin Sensitization Hazard
Strickland, Judy; Zang, Qingda; Paris, Michael; Lehmann, David M.; Allen, David; Choksi, Neepa; Matheson, Joanna; Jacobs, Abigail; Casey, Warren; Kleinstreuer, Nicole
2016-01-01
One of ICCVAM’s top priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary to produce skin sensitization suggests that no single alternative method will replace the currently accepted animal tests. ICCVAM is evaluating an integrated approach to testing and assessment based on the adverse outcome pathway for skin sensitization that uses machine learning approaches to predict human skin sensitization hazard. We combined data from three in chemico or in vitro assays—the direct peptide reactivity assay (DPRA), human cell line activation test (h-CLAT), and KeratinoSens™ assay—six physicochemical properties, and an in silico read-across prediction of skin sensitization hazard into 12 variable groups. The variable groups were evaluated using two machine learning approaches, logistic regression (LR) and support vector machine (SVM), to predict human skin sensitization hazard. Models were trained on 72 substances and tested on an external set of 24 substances. The six models (three LR and three SVM) with the highest accuracy (92%) used: (1) DPRA, h-CLAT, and read-across; (2) DPRA, h-CLAT, read-across, and KeratinoSens; or (3) DPRA, h-CLAT, read-across, KeratinoSens, and log P. The models performed better at predicting human skin sensitization hazard than the murine local lymph node assay (accuracy = 88%), any of the alternative methods alone (accuracy = 63–79%), or test batteries combining data from the individual methods (accuracy = 75%). These results suggest that computational methods are promising tools to effectively identify potential human skin sensitizers without animal testing. PMID:27480324
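A short sketch of the two machine learning approaches named above, logistic regression and a support vector machine, applied to a 72-substance training set and a 24-substance external test set; the descriptor values below are synthetic stand-ins, not the ICCVAM data.

```python
# Logistic regression and SVM classifiers for binary skin sensitization hazard;
# features are synthetic stand-ins for DPRA, h-CLAT, KeratinoSens, read-across, log P, ...
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)
X_train, X_test = rng.normal(size=(72, 6)), rng.normal(size=(24, 6))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)

for name, clf in [("LR", LogisticRegression(max_iter=1000)),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    clf.fit(X_train, y_train)
    print(name, "accuracy on external test set:",
          accuracy_score(y_test, clf.predict(X_test)))
```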
NASA Astrophysics Data System (ADS)
Lehmann, Peter; von Ruette, Jonas; Fan, Linfeng; Or, Dani
2014-05-01
Rapid debris flows initiated by rainfall-induced shallow landslides present a highly destructive natural hazard in steep terrain. The impact and run-out paths of debris flows depend on the volume, composition and initiation zone of the released material, which are required for accurate debris flow predictions and hazard maps. For that purpose we couple the mechanistic 'Catchment-scale Hydro-mechanical Landslide Triggering (CHLT)' model, which computes the timing, location, and volume of landslides, with simple approaches to estimate debris flow runout distances. The runout models were tested using two landslide inventories obtained in the Swiss Alps following prolonged rainfall events. The predicted runout distances were in good agreement with observations, confirming the utility of such simple models for landscape-scale estimates. In the next step, debris flow paths were computed for landslides predicted with the CHLT model over a range of soil properties to explore their effect on runout distances. This combined approach offers a more complete spatial picture of shallow landslide and subsequent debris flow hazards. The additional information provided by the CHLT model concerning the location, shape, soil type and water content of the released mass may also be incorporated into more advanced runout models to improve the prediction of the impact of such abruptly released mass.
A Model for Generating Multi-hazard Scenarios
NASA Astrophysics Data System (ADS)
Lo Jacomo, A.; Han, D.; Champneys, A.
2017-12-01
Communities in mountain areas are often subject to risk from multiple hazards, such as earthquakes, landslides, and floods. Each hazard has its own different rate of onset, duration, and return period. Multiple hazards tend to complicate the combined risk due to their interactions. Prioritising interventions for minimising risk in this context is challenging. We developed a probabilistic multi-hazard model to help inform decision making in multi-hazard areas. The model is applied to a case study region in the Sichuan province in China, using information from satellite imagery and in-situ data. The model is not intended as a predictive model, but rather as a tool which takes stakeholder input and can be used to explore plausible hazard scenarios over time. By using a Monte Carlo framework and varying uncertain parameters for each of the hazards, the model can be used to explore the effect of different mitigation interventions aimed at reducing the disaster risk within an uncertain hazard context.
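A minimal sketch of a Monte Carlo multi-hazard scenario generator in the spirit described above; the hazard list, return periods, and rate uncertainty below are assumptions, not the authors' parameterization.

```python
# Monte Carlo multi-hazard scenarios: each hazard occurs as a Poisson process
# with its own return period, and uncertain rates are varied across runs.
import numpy as np

rng = np.random.default_rng(42)
hazards = {"earthquake": 100.0, "landslide": 20.0, "flood": 5.0}  # return periods, years
horizon = 50          # years simulated per scenario
n_runs = 10_000

counts = {h: np.zeros(n_runs, dtype=int) for h in hazards}
for i in range(n_runs):
    for hazard, T in hazards.items():
        rate = (1.0 / T) * rng.lognormal(mean=0.0, sigma=0.3)   # uncertain annual rate
        counts[hazard][i] = rng.poisson(rate * horizon)

for hazard in hazards:
    p_any = np.mean(counts[hazard] > 0)
    print(f"{hazard}: P(at least one event in {horizon} yr) = {p_any:.2f}")
```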
Rosin-Rammler Distributions in ANSYS Fluent
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunham, Ryan Q.
In Health Physics monitoring, particles need to be collected and tracked. One method is to predict the motion of potential health hazards with computer models. Particles released from various sources within a glove box can become a respirable health hazard if released into the area surrounding a glove box. The goal of modeling the aerosols in a glove box is to reduce the hazards associated with a leak in the glove box system. ANSYS Fluent provides a number of tools for modeling this type of environment. Particles can be released using injections into the flow path with turbulent properties. The models of particle tracks can then be used to predict paths and concentrations of particles within the flow. An attempt to understand and predict the handling of data by Fluent was made, and results iteratively tracked. Trends in data were studied to comprehend the final results. The purpose of the study was to allow a better understanding of the operation of Fluent for aerosol modeling for future application in many fields.
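The Rosin-Rammler size distribution referenced in the title can be sketched directly; the parameter values are arbitrary and Fluent's own injection setup is not reproduced.

```python
# Rosin-Rammler (Weibull) particle size distribution:
# mass fraction below diameter d is F(d) = 1 - exp(-(d / d_mean)^n).
import numpy as np

d_mean = 50e-6       # characteristic diameter, metres (assumed)
n = 3.5              # spread parameter (assumed)

def mass_fraction_below(d):
    """Cumulative Rosin-Rammler mass fraction below diameter d."""
    return 1.0 - np.exp(-((d / d_mean) ** n))

def sample_diameters(size, rng=np.random.default_rng(0)):
    """Inverse-transform sampling of particle diameters from the distribution."""
    u = rng.random(size)
    return d_mean * (-np.log(1.0 - u)) ** (1.0 / n)

print("fraction below 40 um:", mass_fraction_below(40e-6))
print("sampled diameters (um):", np.round(sample_diameters(5) * 1e6, 1))
```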
Modelling of a spread of hazardous substances in a Floreon+ system
NASA Astrophysics Data System (ADS)
Ronovsky, Ales; Brzobohaty, Tomas; Kuchar, Stepan; Vojtek, David
2017-07-01
This paper is focused on a module for automated numerical modelling of the spread of hazardous substances, developed for the Floreon+ system at the request of the Fire Brigade of the Moravian-Silesian Region. The main purpose of the module is to provide more accurate predictions for smog situations, which are a frequent problem in the region. It can be operated by non-scientific users through the Floreon+ client and can be used as a short-term prediction model of the evolution of concentrations of dangerous substances (SO2, PMx) from stable sources, such as heavy industry factories, local furnaces or highways, or as a fast prediction of the spread of hazardous substances in the case of a crash of a mobile source of contamination (transport of dangerous substances) or a leakage at a local chemical factory. The process of automatic gathering of atmospheric data, the connection of the Floreon+ system to the HPC infrastructure necessary for computing such a model, and the model itself are described below.
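For illustration only (the Floreon+ dispersion model is not reproduced here), a steady-state Gaussian plume gives a rough ground-level concentration estimate downwind of a stable point source; the source strength, stack height, and dispersion coefficients below are assumptions.

```python
# Steady-state Gaussian plume, ground-level concentration with full ground reflection.
import numpy as np

def gaussian_plume(Q, u, x, y, H, a=0.08, b=0.06):
    """Ground-level concentration (g/m^3) at downwind distance x, crosswind offset y.
    Q: emission rate (g/s), u: wind speed (m/s), H: effective stack height (m).
    sigma_y and sigma_z grow linearly with distance using assumed coefficients."""
    sigma_y, sigma_z = a * x, b * x
    return (Q / (2 * np.pi * u * sigma_y * sigma_z)
            * np.exp(-y**2 / (2 * sigma_y**2))
            * 2 * np.exp(-H**2 / (2 * sigma_z**2)))   # ground reflection term

# Hypothetical SO2 example: 100 g/s source, 3 m/s wind, 60 m stack, receptor 2 km downwind
print(gaussian_plume(Q=100.0, u=3.0, x=2000.0, y=0.0, H=60.0))
```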
How well should probabilistic seismic hazard maps work?
NASA Astrophysics Data System (ADS)
Vanneste, K.; Stein, S.; Camelbeeck, T.; Vleminckx, B.
2016-12-01
Recent large earthquakes that gave rise to shaking much stronger than shown in earthquake hazard maps have stimulated discussion about how well these maps forecast future shaking. These discussions have brought home the fact that although the maps are designed to achieve certain goals, we know little about how well they actually perform. As for any other forecast, this question involves verification and validation. Verification involves assessing how well the algorithm used to produce hazard maps implements the conceptual PSHA model ("have we built the model right?"). Validation asks how well the model forecasts the shaking that actually occurs ("have we built the right model?"). We explore the verification issue by simulating the shaking history of an area with an assumed distribution of earthquakes, frequency-magnitude relation, temporal occurrence model, and ground-motion prediction equation. We compare the "observed" shaking at many sites over time to that predicted by a hazard map generated for the same set of parameters. PSHA predicts that the fraction of sites at which shaking will exceed that mapped is p = 1 - exp(-t/T), where t is the duration of observations and T is the map's return period. This implies that shaking in large earthquakes is typically greater than shown on hazard maps, as has occurred in a number of cases. A large number of simulated earthquake histories yield distributions of shaking consistent with this forecast, with a scatter about this value that decreases as t/T increases. The median results are somewhat lower than predicted for small values of t/T and approach the predicted value for larger values of t/T. Hence, the algorithm appears to be internally consistent and can be regarded as verified for this set of simulations. Validation is more complicated because a real observed earthquake history can yield a fractional exceedance significantly higher or lower than that predicted while still being consistent with the hazard map in question. As a result, given that in the real world we have only a single sample, it is hard to assess whether a misfit between a map and observations arises by chance or reflects a biased map.
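The verification experiment described above can be reproduced in a few lines: exceedances of the mapped ground motion at each site are simulated as a Poisson process with return period T and compared with the predicted fraction p = 1 - exp(-t/T). The return period and observation window below are arbitrary.

```python
# Simulate exceedances of the mapped shaking level at many sites and compare
# the observed fraction of exceeded sites with the Poisson prediction.
import numpy as np

rng = np.random.default_rng(0)
T = 475.0             # map return period, years
t = 50.0              # observation window, years
n_sites = 100_000

exceedances = rng.poisson(t / T, size=n_sites)   # exceedance counts per site
observed_fraction = np.mean(exceedances > 0)
predicted_fraction = 1.0 - np.exp(-t / T)

print(f"observed: {observed_fraction:.4f}  predicted: {predicted_fraction:.4f}")
```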
NASA Astrophysics Data System (ADS)
Muhammad, Ario; Goda, Katsuichiro
2018-03-01
This study investigates the impact of model complexity in source characterization and digital elevation model (DEM) resolution on the accuracy of tsunami hazard assessment and fatality estimation through a case study in Padang, Indonesia. Two types of earthquake source models, i.e. complex and uniform slip models, are adopted by considering three resolutions of DEMs, i.e. 150 m, 50 m, and 10 m. For each of the three grid resolutions, 300 complex source models are generated using new statistical prediction models of earthquake source parameters developed from extensive finite-fault models of past subduction earthquakes, whilst 100 uniform slip models are constructed with variable fault geometry without slip heterogeneity. The results highlight that significant changes to tsunami hazard and fatality estimates are observed with regard to earthquake source complexity and grid resolution. Coarse resolution (i.e. 150 m) leads to inaccurate tsunami hazard prediction and fatality estimation, whilst 50-m and 10-m resolutions produce similar results. However, velocity and momentum flux are sensitive to the grid resolution and hence, at least 10-m grid resolution needs to be implemented when considering flow-based parameters for tsunami hazard and risk assessments. In addition, the results indicate that the tsunami hazard parameters and fatality number are more sensitive to the complexity of earthquake source characterization than the grid resolution. Thus, the uniform models are not recommended for probabilistic tsunami hazard and risk assessments. Finally, the findings confirm that uncertainties of tsunami hazard level and fatality in terms of depth, velocity and momentum flux can be captured and visualized through the complex source modeling approach. From tsunami risk management perspectives, this indeed creates big data, which are useful for making effective and robust decisions.
NASA Astrophysics Data System (ADS)
Kritikos, Theodosios; Robinson, Tom R.; Davies, Tim R. H.
2015-04-01
Currently, regional coseismic landslide hazard analyses require comprehensive historical landslide inventories as well as detailed geotechnical data. Consequently, such analyses have not been possible where these data are not available. A new approach is proposed herein to assess coseismic landslide hazard at regional scale for specific earthquake scenarios in areas without historical landslide inventories. The proposed model employs fuzzy logic and geographic information systems to establish relationships between causative factors and coseismic slope failures in regions with well-documented and substantially complete coseismic landslide inventories. These relationships are then utilized to estimate the relative probability of landslide occurrence in regions with neither historical landslide inventories nor detailed geotechnical data. Statistical analyses of inventories from the 1994 Northridge and 2008 Wenchuan earthquakes reveal that shaking intensity, topography, and distance from active faults and streams are the main controls on the spatial distribution of coseismic landslides. Average fuzzy memberships for each factor are developed and aggregated to model the relative coseismic landslide hazard for both earthquakes. The predictive capabilities of the models are assessed and show good-to-excellent model performance for both events. These memberships are then applied to the 1999 Chi-Chi earthquake, using only a digital elevation model, active fault map, and isoseismal data, replicating prediction of a future event in a region lacking historic inventories and/or geotechnical data. This similarly results in excellent model performance, demonstrating the model's predictive potential and confirming it can be meaningfully applied in regions where previous methods could not. For such regions, this method may enable a greater ability to analyze coseismic landslide hazard from specific earthquake scenarios, allowing for mitigation measures and emergency response plans to be better informed of earthquake-related hazards.
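A hedged sketch of combining per-factor fuzzy memberships for a grid cell; the paper's specific membership functions and aggregation operator are not reproduced, and the commonly used fuzzy gamma operator is shown instead with hypothetical values.

```python
# Fuzzy gamma aggregation of per-factor memberships for one grid cell.
import numpy as np

def fuzzy_gamma(memberships, gamma=0.9):
    """Combine factor memberships in [0, 1] with the fuzzy gamma operator."""
    m = np.asarray(memberships, dtype=float)
    algebraic_product = np.prod(m)
    algebraic_sum = 1.0 - np.prod(1.0 - m)
    return algebraic_sum**gamma * algebraic_product**(1.0 - gamma)

# Hypothetical memberships: shaking intensity, slope, distance to active fault,
# distance to stream
cell = [0.8, 0.6, 0.7, 0.4]
print("relative coseismic landslide hazard:", round(fuzzy_gamma(cell), 3))
```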
Choudhary, Gaurav; Jankowich, Matthew; Wu, Wen-Chih
2014-07-01
Although elevated pulmonary artery systolic pressure (PASP) is associated with heart failure (HF), whether PASP measurement can help predict future HF admissions is not known, especially in African Americans who are at increased risk for HF. We hypothesized that elevated PASP is associated with increased risk of HF admission and improves HF prediction in African American population. We conducted a longitudinal analysis using the Jackson Heart Study cohort (n=3125; 32.2% men) with baseline echocardiography-derived PASP and follow-up for HF admissions. Hazard ratio for HF admission was estimated using Cox proportional hazard model adjusted for variables in the Atherosclerosis Risk in Community (ARIC) HF prediction model. During a median follow-up of 3.46 years, 3.42% of the cohort was admitted for HF. Subjects with HF had a higher PASP (35.6±11.4 versus 27.6±6.9 mm Hg; P<0.001). The hazard of HF admission increased with higher baseline PASP (adjusted hazard ratio per 10 mm Hg increase in PASP: 2.03; 95% confidence interval, 1.67-2.48; adjusted hazard ratio for highest [≥33 mm Hg] versus lowest quartile [<24 mm Hg] of PASP: 2.69; 95% confidence interval, 1.43-5.06) and remained significant irrespective of history of HF or preserved/reduced ejection fraction. Addition of PASP to the ARIC model resulted in a significant improvement in model discrimination (area under the curve=0.82 before versus 0.84 after; P=0.03) and improved net reclassification index (11-15%) using PASP as a continuous or dichotomous (cutoff=33 mm Hg) variable. Elevated PASP predicts HF admissions in African Americans and may aid in early identification of at-risk subjects for aggressive risk factor modification. © 2014 American Heart Association, Inc.
Pelletier, J.D.; Mayer, L.; Pearthree, P.A.; House, P.K.; Demsey, K.A.; Klawon, J.K.; Vincent, K.R.
2005-01-01
Millions of people in the western United States live near the dynamic, distributary channel networks of alluvial fans where flood behavior is complex and poorly constrained. Here we test a new comprehensive approach to alluvial-fan flood hazard assessment that uses four complementary methods: two-dimensional raster-based hydraulic modeling, satellite-image change detection, field-based mapping of recent flood inundation, and surficial geologic mapping. Each of these methods provides spatial detail lacking in the standard method and each provides critical information for a comprehensive assessment. Our numerical model simultaneously solves the continuity equation and Manning's equation (Chow, 1959) using an implicit numerical method. It provides a robust numerical tool for predicting flood flows using the large, high-resolution Digital Elevation Models (DEMs) necessary to resolve the numerous small channels on the typical alluvial fan. Inundation extents and flow depths of historic floods can be reconstructed with the numerical model and validated against field- and satellite-based flood maps. A probabilistic flood hazard map can also be constructed by modeling multiple flood events with a range of specified discharges. This map can be used in conjunction with a surficial geologic map to further refine floodplain delineation on fans. To test the accuracy of the numerical model, we compared model predictions of flood inundation and flow depths against field- and satellite-based flood maps for two recent extreme events on the southern Tortolita and Harquahala piedmonts in Arizona. Model predictions match the field- and satellite-based maps closely. Probabilistic flood hazard maps based on the 10 yr, 100 yr, and maximum floods were also constructed for the study areas using stream gage records and paleoflood deposits. The resulting maps predict spatially complex flood hazards that strongly reflect small-scale topography and are consistent with surficial geology. In contrast, FEMA Flood Insurance Rate Maps (FIRMs) based on the FAN model predict uniformly high flood risk across the study areas without regard for small-scale topography and surficial geology. © 2005 Geological Society of America.
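The cell-level relationship solved by such raster models can be illustrated with Manning's equation, V = (1/n) R^(2/3) S^(1/2) in SI units; this is a simple sketch with assumed depth, slope, and roughness, not the authors' implicit solver.

```python
# Manning's equation for flow velocity and discharge in a single cell (SI units).
import math

def manning_velocity(depth_m, slope, n_roughness):
    """For wide, shallow flow the hydraulic radius R is approximately the depth."""
    return (1.0 / n_roughness) * depth_m ** (2.0 / 3.0) * math.sqrt(slope)

depth = 0.4            # m, flow depth in a cell (assumed)
slope = 0.02           # energy slope, dimensionless (assumed)
n = 0.035              # Manning roughness, typical gravelly alluvial surface (assumed)
width = 5.0            # m, cell width (assumed)

v = manning_velocity(depth, slope, n)
print(f"velocity = {v:.2f} m/s, unit discharge = {v * depth:.2f} m^2/s, "
      f"discharge = {v * depth * width:.2f} m^3/s")
```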
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Nathanael J. K.; Gearhart, Jared Lee; Jones, Dean A.
Currently, much of protection planning is conducted separately for each infrastructure and hazard. Limited funding requires a balance of expenditures between terrorism and natural hazards based on potential impacts. This report documents the results of a Laboratory Directed Research & Development (LDRD) project that created a modeling framework for investment planning in interdependent infrastructures focused on multiple hazards, including terrorism. To develop this framework, three modeling elements were integrated: natural hazards, terrorism, and interdependent infrastructures. For natural hazards, a methodology was created for specifying events consistent with regional hazards. For terrorism, we modeled the terrorists' actions based on assumptions regarding their knowledge, goals, and target identification strategy. For infrastructures, we focused on predicting post-event performance due to specific terrorist attacks and natural hazard events, tempered by appropriate infrastructure investments. We demonstrate the utility of this framework with various examples, including protection of electric power, roadway, and hospital networks.
Landslide hazard assessment: recent trends and techniques.
Pardeshi, Sudhakar D; Autade, Sumant E; Pardeshi, Suchitra S
2013-01-01
Landslide hazard assessment is an important step towards landslide hazard and risk management. There are several methods of Landslide Hazard Zonation (LHZ), viz. heuristic, semi-quantitative, quantitative, probabilistic and multi-criteria decision-making processes. However, no single method is universally accepted for effective assessment of landslide hazards. In recent years, several attempts have been made to apply different methods of LHZ and to compare results in order to find the best-suited model. This paper presents a review of research on landslide hazard mapping published in recent years. Advanced multivariate techniques have proved effective in spatial prediction of landslides with a high degree of accuracy. Physical process-based models also perform well in LHZ mapping, even in areas with a poor database. Multi-criteria decision-making approaches also play a significant role in determining the relative importance of landslide causative factors in the slope instability process. Remote Sensing and Geographical Information Systems (GIS) are powerful tools to assess landslide hazards and have been used extensively in landslide research over the last decade. Aerial photographs and high-resolution satellite data are useful in detection, mapping and monitoring of landslide processes. GIS-based LHZ models help not only to map and monitor landslides but also to predict future slope failures. Advances in geospatial technologies have opened the door to detailed and accurate assessment of landslide hazards.
NASA Astrophysics Data System (ADS)
Gochis, E. E.; Lechner, H. N.; Brill, K. A.; Lerner, G.; Ramos, E.
2014-12-01
Graduate students at Michigan Technological University developed the "Landslides!" activity to engage middle & high school students participating in summer engineering programs in a hands-on exploration of geologic engineering and STEM (Science, Technology, Engineering and Math) principles. The inquiry-based lesson plan is aligned to Next Generation Science Standards and is appropriate for 6th-12th grade classrooms. During the activity students focus on the factors contributing to landslide development and engineering practices used to mitigate slope stability hazards. Students begin by comparing different soil types and by developing predictions of how sediment type may contribute to differences in slope stability. Working in groups, students then build tabletop hill-slope models from the various materials in order to engage in evidence-based reasoning and test their predictions by adding groundwater until each group's modeled slope fails. Lastly, students elaborate on their understanding of landslides by designing 'engineering solutions' to mitigate the hazards observed in each model. Post-evaluations from students demonstrate that they enjoyed the hands-on nature of the activity and the application of engineering principles to mitigate a modeled natural hazard.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alves, Vinicius M.; Laboratory for Molecular Modeling, Division of Chemical Biology and Medicinal Chemistry, Eshelman School of Pharmacy, University of North Carolina, Chapel Hill, NC 27599; Muratov, Eugene
Repetitive exposure to a chemical agent can induce an immune reaction in inherently susceptible individuals that leads to skin sensitization. Although many chemicals have been reported as skin sensitizers, there have been very few rigorously validated QSAR models with defined applicability domains (AD) that were developed using a large group of chemically diverse compounds. In this study, we have aimed to compile, curate, and integrate the largest publicly available dataset related to chemically-induced skin sensitization, use this data to generate rigorously validated QSAR models for skin sensitization, and employ these models as a virtual screening tool for identifying putative sensitizers among environmental chemicals. We followed best practices for model building and validation implemented with our predictive QSAR workflow using the Random Forest modeling technique in combination with SiRMS and Dragon descriptors. The Correct Classification Rate (CCR) for QSAR models discriminating sensitizers from non-sensitizers was 71–88% when evaluated on several external validation sets, within a broad AD, with positive (for sensitizers) and negative (for non-sensitizers) predicted rates of 85% and 79% respectively. When compared to the skin sensitization module included in the OECD QSAR Toolbox as well as to the skin sensitization model in publicly available VEGA software, our models showed a significantly higher prediction accuracy for the same sets of external compounds as evaluated by Positive Predicted Rate, Negative Predicted Rate, and CCR. These models were applied to identify putative chemical hazards in the Scorecard database of possible skin or sense organ toxicants as primary candidates for experimental validation. - Highlights: • The largest publicly available skin sensitization dataset was compiled. • Predictive QSAR models were developed for skin sensitization. • The developed models have higher prediction accuracy than the OECD QSAR Toolbox. • Putative chemical hazards in the Scorecard database were found using our models.
Flash-point prediction for binary partially miscible mixtures of flammable solvents.
Liaw, Horng-Jang; Lu, Wen-Hung; Gerbaud, Vincent; Chen, Chan-Cheng
2008-05-30
Flash point is the most important variable used to characterize fire and explosion hazard of liquids. Herein, partially miscible mixtures are presented within the context of liquid-liquid extraction processes. This paper describes development of a model for predicting the flash point of binary partially miscible mixtures of flammable solvents. To confirm the predictive efficacy of the derived flash points, the model was verified by comparing the predicted values with the experimental data for the studied mixtures: methanol+octane; methanol+decane; acetone+decane; methanol+2,2,4-trimethylpentane; and, ethanol+tetradecane. Our results reveal that immiscibility in the two liquid phases should not be ignored in the prediction of flash point. Overall, the predictive results of this proposed model describe the experimental data well. Based on this evidence, therefore, it appears reasonable to suggest potential application for our model in assessment of fire and explosion hazards, and development of inherently safer designs for chemical processes containing binary partially miscible mixtures of flammable solvents.
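For orientation only, the sketch below solves a simpler, fully miscible, ideal-solution (Le Chatelier-type) flash-point criterion for a binary mixture by bisection; the published model additionally handles activity coefficients and liquid-liquid partial miscibility, and the Antoine constants and pure-component flash points used here are illustrative assumptions, not the paper's parameters.

```python
# Hedged sketch: ideal-solution flash-point criterion for a fully miscible binary
# mixture, sum_i x_i * Psat_i(T) / Psat_i(Tfp_i) = 1, solved by bisection.
# Antoine constants and flash points below are hypothetical placeholders.
import math  # noqa: F401  (kept for clarity; only ** and 10** are used)

def p_sat(antoine, t_c):
    """Antoine equation (log10 form), pressure in mmHg, temperature in deg C."""
    a, b, c = antoine
    return 10.0 ** (a - b / (c + t_c))

def flash_point_binary(x1, comp1, comp2, t_lo=-50.0, t_hi=200.0):
    """Temperature at which the ideal-solution flash-point criterion equals 1."""
    def criterion(t):
        term1 = x1 * p_sat(comp1["antoine"], t) / p_sat(comp1["antoine"], comp1["tfp"])
        term2 = (1 - x1) * p_sat(comp2["antoine"], t) / p_sat(comp2["antoine"], comp2["tfp"])
        return term1 + term2 - 1.0
    for _ in range(80):  # bisection on a monotonically increasing criterion
        t_mid = 0.5 * (t_lo + t_hi)
        if criterion(t_lo) * criterion(t_mid) <= 0:
            t_hi = t_mid
        else:
            t_lo = t_mid
    return 0.5 * (t_lo + t_hi)

# Hypothetical components (constants are illustrative only).
alcohol_like = {"antoine": (8.08, 1582.3, 239.7), "tfp": 11.0}
alkane_like = {"antoine": (6.92, 1355.1, 209.5), "tfp": 13.0}
print(flash_point_binary(0.5, alcohol_like, alkane_like))
```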
Statistical modeling of landslide hazard using GIS
Peter V. Gorsevski; Randy B. Foltz; Paul E. Gessler; Terrance W. Cundy
2001-01-01
A model for spatial prediction of landslide hazard was applied to a watershed affected by landslide events that occurred during the winter of 1995-96, following heavy rains, and snowmelt. Digital elevation data with 22.86 m x 22.86 m resolution was used for deriving topographic attributes used for modeling. The model is based on the combination of logistic regression...
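As a hedged illustration of the kind of logistic-regression susceptibility model described above, the following Python sketch fits per-cell landslide probabilities from DEM-derived attributes; the attribute names, synthetic data, and coefficients are assumptions for illustration, not the authors' dataset or fitted model.

```python
# Minimal sketch of a logistic-regression landslide susceptibility model; the
# topographic attributes and labels are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(0, 45, n),      # slope (degrees), derived from a DEM
    rng.normal(0, 1, n),        # plan curvature
    rng.uniform(2, 12, n),      # topographic wetness index
])
# Synthetic labels: steeper, wetter cells fail more often (illustrative only).
p_true = 1 / (1 + np.exp(-(-6 + 0.12 * X[:, 0] + 0.3 * X[:, 2])))
y = rng.binomial(1, p_true)

model = LogisticRegression(max_iter=1000).fit(X, y)
susceptibility = model.predict_proba(X)[:, 1]   # per-cell probability of landslide
print(model.coef_, susceptibility[:5])
```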
NASA Astrophysics Data System (ADS)
Eble, M. C.; uslu, B. U.; Wright, L.
2013-12-01
Synthetic tsunamis generated from source regions around the Pacific Basin are analyzed in terms of their relative impact on United States coastal locations. The region of tsunami origin is as important as the expected magnitude and the predicted inundation for understanding tsunami hazard. The NOAA Center for Tsunami Research has developed high-resolution tsunami models capable of predicting tsunami arrival time and amplitude of waves at each location. These models have been used to conduct tsunami hazard assessments evaluating maximum impact and tsunami inundation for use by local communities in education and evacuation map development. Hazard assessment studies conducted for Los Angeles, San Francisco, Crescent City, Hilo, and Apra Harbor are combined with results of tsunami forecast model development at each of seventy-five locations. A complete hazard assessment identifies every possible tsunami variation from a pre-computed propagation database. Study results indicate that the Eastern Aleutian Islands and Alaska are the most likely regions to produce the largest impact on the West Coast of the United States, while the East Philippines and Mariana trench regions impact Apra Harbor, Guam. Hawaii appears to be impacted equally from South America, Alaska and the Kuril Islands.
A re-evaluation of PETROTOX for predicting acute and chronic toxicity of petroleum substances.
Redman, Aaron D; Parkerton, Thomas F; Leon Paumen, Miriam; Butler, Josh D; Letinski, Daniel J; den Haan, Klass
2017-08-01
The PETROTOX model was developed to perform aquatic hazard assessment of petroleum substances based on substance composition. The model relies on the hydrocarbon block method, which is widely used for conducting petroleum substance risk assessments, providing further justification for evaluating model performance. Previous work described this model and provided a preliminary calibration and validation using acute toxicity data for a limited number of petroleum substances. The objective of the present study was to re-evaluate PETROTOX using expanded data covering both acute and chronic toxicity endpoints on invertebrates, algae, and fish for a wider range of petroleum substances. The results indicated that recalibration of 2 model parameters was required, namely, the algal critical target lipid body burden and the log octanol-water partition coefficient (KOW) limit, used to account for reduced bioavailability of hydrophobic constituents. Acute predictions from the updated model were compared with observed toxicity data and found to generally be within a factor of 3 for algae and invertebrates but overestimated fish toxicity. Chronic predictions were generally within a factor of 5 of empirical data. Furthermore, PETROTOX predicted acute and chronic hazard classifications that were consistent or conservative in 93% and 84% of comparisons, respectively. The PETROTOX model is considered suitable for the purpose of characterizing petroleum substance hazard in substance classification and risk assessments. Environ Toxicol Chem 2017;36:2245-2252. © 2017 SETAC.
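The hydrocarbon block method referenced above is often expressed through toxic units summed over constituent blocks; the short sketch below shows that generic summation only, not the PETROTOX implementation, and the block names, concentrations, and effect levels are hypothetical.

```python
# Generic toxic-unit illustration of the hydrocarbon block idea: each block
# contributes exposure concentration divided by its effect concentration, and the
# mixture is of concern when the sum approaches 1. Not the PETROTOX code; numbers
# are hypothetical.

blocks = {
    # block name: (dissolved concentration mg/L, predicted effect level mg/L)
    "C9-C12 aliphatic": (0.02, 0.8),
    "C9-C12 aromatic":  (0.05, 1.5),
    "C12-C16 aromatic": (0.01, 0.4),
}

toxic_units = sum(conc / effect for conc, effect in blocks.values())
print(f"Sum of toxic units: {toxic_units:.2f}")  # >= 1 would indicate acute hazard
```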
Lack of complete and appropriate human data requires prediction of the hazards for exposed human populations by extrapolation from available animal and in vitro data. Predictive models for the toxicity of chemicals can be constructed by linking kinetic and mode of action data uti...
Improvements on mapping soil liquefaction at a regional scale
NASA Astrophysics Data System (ADS)
Zhu, Jing
Earthquake-induced soil liquefaction is an important secondary hazard during earthquakes and can lead to significant damage to infrastructure. Mapping liquefaction hazard is important both in planning for earthquake events and in guiding relief efforts by positioning resources once the events have occurred. This dissertation addresses two aspects of liquefaction hazard mapping at a regional scale: 1) predictive liquefaction hazard mapping and 2) post-liquefaction cataloging. First, current predictive liquefaction hazard mapping relies on detailed geologic maps and geotechnical data, which are not always available in at-risk regions. This dissertation improves predictive liquefaction hazard mapping through the development and validation of geospatial liquefaction models (Chapters 2 and 3) that predict liquefaction extent and are appropriate for global application. The geospatial liquefaction models are developed using logistic regression from a liquefaction database consisting of data from 27 earthquake events in six countries. The model that performs best over the entire dataset includes peak ground velocity (PGV), VS30, distance to river, distance to coast, and precipitation. The model that performs best over the noncoastal dataset includes PGV, VS30, water table depth, distance to water body, and precipitation. Second, post-earthquake liquefaction cataloging historically relies on field investigation that is often limited by time and expense, and therefore results in limited and incomplete liquefaction inventories. This dissertation improves post-earthquake cataloging through the development and validation of a remote sensing-based method that can be quickly applied over a broad region after an earthquake and provide a detailed map of liquefaction surface effects (Chapter 4). Our method uses optical satellite images acquired before and after an earthquake event by the WorldView-2 satellite, with 2 m spatial resolution and eight spectral bands, and uses changes in spectral variables that are sensitive to surface moisture and soil characteristics, paired with a supervised classification.
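To show the form such a geospatial model takes in practice, the sketch below evaluates a logistic function of the named predictors (PGV, VS30, distance to river and coast, precipitation); the coefficients are hypothetical placeholders, not the dissertation's fitted model.

```python
# Hedged sketch of evaluating a geospatial liquefaction probability model of the
# general form described above; coefficients are hypothetical, not the fitted model.
import math

def liquefaction_probability(pgv_cm_s, vs30_m_s, dist_river_km, dist_coast_km, precip_mm):
    x = (
        4.0
        + 0.3 * math.log(pgv_cm_s)      # stronger shaking raises probability
        - 1.0 * math.log(vs30_m_s)      # stiffer ground lowers it
        - 0.05 * dist_river_km
        - 0.01 * dist_coast_km
        + 0.0005 * precip_mm
    )  # hypothetical coefficients
    return 1.0 / (1.0 + math.exp(-x))

print(liquefaction_probability(30.0, 250.0, 0.5, 20.0, 1200.0))
```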
Models of volcanic eruption hazards
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wohletz, K.H.
1992-01-01
Volcanic eruptions pose an ever present but poorly constrained hazard to life and property for geothermal installations in volcanic areas. Because eruptions occur sporadically and may limit field access, quantitative and systematic field studies of eruptions are difficult to complete. Circumventing this difficulty, laboratory models and numerical simulations are pivotal in building our understanding of eruptions. For example, the results of fuel-coolant interaction experiments show that magma-water interaction controls many eruption styles. Applying these results, increasing numbers of field studies now document and interpret the role of external water eruptions. Similarly, numerical simulations solve the fundamental physics of high-speed fluid flow and give quantitative predictions that elucidate the complexities of pyroclastic flows and surges. A primary goal of these models is to guide geologists in searching for critical field relationships and making their interpretations. Coupled with field work, modeling is beginning to allow more quantitative and predictive volcanic hazard assessments.
Models of volcanic eruption hazards
NASA Astrophysics Data System (ADS)
Wohletz, K. H.
Volcanic eruptions pose an ever present but poorly constrained hazard to life and property for geothermal installations in volcanic areas. Because eruptions occur sporadically and may limit field access, quantitative and systematic field studies of eruptions are difficult to complete. Circumventing this difficulty, laboratory models and numerical simulations are pivotal in building our understanding of eruptions. For example, the results of fuel-coolant interaction experiments show that magma-water interaction controls many eruption styles. Applying these results, increasing numbers of field studies now document and interpret the role of external water eruptions. Similarly, numerical simulations solve the fundamental physics of high-speed fluid flow and give quantitative predictions that elucidate the complexities of pyroclastic flows and surges. A primary goal of these models is to guide geologists in searching for critical field relationships and making their interpretations. Coupled with field work, modeling is beginning to allow more quantitative and predictive volcanic hazard assessments.
About Using Predictive Models and Tools To Assess Chemicals under TSCA
As part of EPA's effort to promote chemical safety, OPPT provides public access to predictive models and tools which can help inform the public on the hazards and risks of substances and improve chemical management decisions.
Multivariate models for prediction of human skin sensitization hazard.
Strickland, Judy; Zang, Qingda; Paris, Michael; Lehmann, David M; Allen, David; Choksi, Neepa; Matheson, Joanna; Jacobs, Abigail; Casey, Warren; Kleinstreuer, Nicole
2017-03-01
One of the Interagency Coordinating Committee on the Validation of Alternative Methods' (ICCVAM) top priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary to produce skin sensitization suggests that no single alternative method will replace the currently accepted animal tests. ICCVAM is evaluating an integrated approach to testing and assessment based on the adverse outcome pathway for skin sensitization that uses machine learning approaches to predict human skin sensitization hazard. We combined data from three in chemico or in vitro assays - the direct peptide reactivity assay (DPRA), human cell line activation test (h-CLAT) and KeratinoSens™ assay - six physicochemical properties and an in silico read-across prediction of skin sensitization hazard into 12 variable groups. The variable groups were evaluated using two machine learning approaches, logistic regression and support vector machine, to predict human skin sensitization hazard. Models were trained on 72 substances and tested on an external set of 24 substances. The six models (three logistic regression and three support vector machine) with the highest accuracy (92%) used: (1) DPRA, h-CLAT and read-across; (2) DPRA, h-CLAT, read-across and KeratinoSens; or (3) DPRA, h-CLAT, read-across, KeratinoSens and log P. The models performed better at predicting human skin sensitization hazard than the murine local lymph node assay (accuracy 88%), any of the alternative methods alone (accuracy 63-79%) or test batteries combining data from the individual methods (accuracy 75%). These results suggest that computational methods are promising tools to effectively identify potential human skin sensitizers without animal testing. Published 2016. This article has been contributed to by US Government employees and their work is in the public domain in the USA.
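As a hedged sketch of the kind of machine-learning combination described above, the following example trains a support vector machine on binary assay calls plus log P to classify sensitization hazard; the feature encoding and data are synthetic assumptions, not the ICCVAM dataset or published models.

```python
# Hedged sketch: an SVM combining DPRA, h-CLAT, KeratinoSens and read-across
# outcomes to predict skin sensitization hazard. Data are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n = 96
X = np.column_stack([
    rng.integers(0, 2, n),     # DPRA call (0 = negative, 1 = positive)
    rng.integers(0, 2, n),     # h-CLAT call
    rng.integers(0, 2, n),     # KeratinoSens call
    rng.integers(0, 2, n),     # read-across prediction
    rng.normal(2.0, 1.5, n),   # log P
])
y = (X[:, :4].sum(axis=1) + rng.normal(0, 0.8, n) > 2).astype(int)  # synthetic labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=24, random_state=0)
clf = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```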
Chen, Guangchao; Peijnenburg, Willie; Xiao, Yinlong; Vijver, Martina G
2017-07-12
As listed by the European Chemicals Agency, the three elements in evaluating the hazards of engineered nanomaterials (ENMs) include the integration and evaluation of toxicity data, categorization and labeling of ENMs, and derivation of hazard threshold levels for human health and the environment. Assessing the hazards of ENMs solely based on laboratory tests is time-consuming, resource intensive, and constrained by ethical considerations. The adoption of computational toxicology into this task has recently become a priority. Alternative approaches such as (quantitative) structure-activity relationships ((Q)SAR) and read-across are of significant help in predicting nanotoxicity and filling data gaps, and in classifying the hazards of ENMs to individual species. The species sensitivity distribution (SSD) approach can then serve to establish ENM hazard thresholds that sufficiently protect the ecosystem. This article critically reviews the current knowledge on the development of in silico models in predicting and classifying the hazard of metallic ENMs, and the development of SSDs for metallic ENMs. Further discussion includes the significance of well-curated experimental datasets and the interpretation of toxicity mechanisms of metallic ENMs based on reported models. An outlook is also given on future directions of research in this frontier.
NASA Astrophysics Data System (ADS)
Bindi, D.; Cotton, F.; Kotha, S. R.; Bosse, C.; Stromeyer, D.; Grünthal, G.
2017-09-01
We present a ground motion prediction equation (GMPE) for probabilistic seismic hazard assessments (PSHA) in low-to-moderate seismicity areas, such as Germany. Starting from the NGA-West2 flat-file (Ancheta et al. in Earthquake Spectra 30:989-1005, 2014), we develop a model tailored to the hazard application in terms of data selection and implemented functional form. In light of such hazard application, the GMPE is derived for hypocentral distance (along with the Joyner-Boore one), selecting recordings at sites with vs30 ≥ 360 m/s, distances within 300 km, and magnitudes in the range 3 to 8 (7.4 being the maximum magnitude for the PSHA in the target area). Moreover, the complexity of the considered functional form reflects the availability of information in the target area. The median predictions are compared with those from the NGA-West2 models and with one recent European model, using Sammon's maps constructed for different scenarios. Despite the simplification in the functional form, the assessed epistemic uncertainty in the GMPE median is of the order of that affecting the NGA-West2 models for the magnitude range of interest of the hazard application. On the other hand, the simplification of the functional form led to an increase in the apparent aleatory variability. In conclusion, the GMPE developed in this study is tailored to the needs of applications in low-to-moderate seismicity areas and for short return periods (e.g., 475 years); its application in studies where the hazard involves magnitudes above 7.4 and long return periods is not advised.
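For readers unfamiliar with GMPEs, the sketch below evaluates a generic median ground-motion relation in magnitude and hypocentral distance; the functional form and coefficients are hypothetical stand-ins for illustration only, not the published equation.

```python
# Hedged sketch of evaluating a simple GMPE of the general form used in such
# studies (magnitude and hypocentral-distance terms); coefficients are hypothetical.
import math

def ln_median_pga(mag, r_hypo_km, c=(-1.5, 1.2, -0.08, -1.4, 6.0)):
    c0, c1, c2, c3, h = c   # hypothetical coefficients and pseudo-depth h (km)
    return c0 + c1 * mag + c2 * mag ** 2 + c3 * math.log(math.sqrt(r_hypo_km ** 2 + h ** 2))

# Median PGA (g) for an Mw 6.5 event at 20 km hypocentral distance.
print(math.exp(ln_median_pga(6.5, 20.0)))
```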
LAV@HAZARD: a Web-GIS Framework for Real-Time Forecasting of Lava Flow Hazards
NASA Astrophysics Data System (ADS)
Del Negro, C.; Bilotta, G.; Cappello, A.; Ganci, G.; Herault, A.
2014-12-01
Crucial to lava flow hazard assessment is the development of tools for real-time prediction of flow paths, flow advance rates, and final flow lengths. Accurate prediction of flow paths and advance rates requires not only rapid assessment of eruption conditions (especially effusion rate) but also improved models of lava flow emplacement. Here we present the LAV@HAZARD web-GIS framework, which combines spaceborne remote sensing techniques and numerical simulations for real-time forecasting of lava flow hazards. By using satellite-derived discharge rates to drive a lava flow emplacement model, LAV@HAZARD allows timely definition of parameters and maps essential for hazard assessment, including the propagation time of lava flows and the maximum run-out distance. We take advantage of the flexibility of the HOTSAT thermal monitoring system to process satellite images coming from sensors with different spatial, temporal and spectral resolutions. HOTSAT was designed to ingest infrared satellite data acquired by the MODIS and SEVIRI sensors to output hot spot location, lava thermal flux and discharge rate. We use LAV@HAZARD to merge this output with the MAGFLOW physics-based model to simulate lava flow paths and to update, in a timely manner, flow simulations. Thus, any significant changes in lava discharge rate are included in the predictions. A significant benefit in terms of computational speed was obtained thanks to the parallel implementation of MAGFLOW on graphic processing units (GPUs). All this useful information has been gathered into the LAV@HAZARD platform which, due to the high degree of interactivity, allows generation of easily readable maps and a fast way to explore alternative scenarios. We will describe and demonstrate the operation of this framework using a variety of case studies pertaining to Mt Etna, Sicily. Although this study was conducted on Mt Etna, the approach used is designed to be applicable to other volcanic areas around the world.
Zhang, Taolin; Zhou, Xiaodong; Yang, Lizhong
2016-03-05
This work investigated experimentally and theoretically the fire hazards of thermal-insulation materials used in diesel locomotives under different radiation heat fluxes. Based on the experimental results, the critical heat flux for ignition was determined to be 6.15 kW/m² and 16.39 kW/m² for pure polyurethane and aluminum-polyurethane, respectively. A theoretical model was established for both to predict the fire behaviors under different circumstances. The fire behavior of the materials was evaluated based on the flashover and the total heat release rate (HRR). The fire hazard levels were classified based on different experimental results. It was found that the fire resistance performance of aluminum-polyurethane is much better than that of pure polyurethane under various external heat fluxes. The concentration of toxic pyrolysis volatiles generated from aluminum-polyurethane materials is much higher than that of pure polyurethane materials, especially when the heat flux is below 50 kW/m². The hazard index HI during peak width time was proposed based on the comprehensive impact of time and concentrations. The HI predicted by this model is consistent with the existing N-gas and FED models generally used to evaluate fire gas hazard in previous research. The integrated model, named HNF, was proposed as well to estimate the fire hazards of materials by interpolation and weighted average calculation.
Zhang, Taolin; Zhou, Xiaodong; Yang, Lizhong
2016-01-01
This work investigated experimentally and theoretically the fire hazards of thermal-insulation materials used in diesel locomotives under different radiation heat fluxes. Based on the experimental results, the critical heat flux for ignition was determined to be 6.15 kW/m² and 16.39 kW/m² for pure polyurethane and aluminum-polyurethane, respectively. A theoretical model was established for both to predict the fire behaviors under different circumstances. The fire behavior of the materials was evaluated based on the flashover and the total heat release rate (HRR). The fire hazard levels were classified based on different experimental results. It was found that the fire resistance performance of aluminum-polyurethane is much better than that of pure polyurethane under various external heat fluxes. The concentration of toxic pyrolysis volatiles generated from aluminum-polyurethane materials is much higher than that of pure polyurethane materials, especially when the heat flux is below 50 kW/m². The hazard index HI during peak width time was proposed based on the comprehensive impact of time and concentrations. The HI predicted by this model is consistent with the existing N-gas and FED models generally used to evaluate fire gas hazard in previous research. The integrated model, named HNF, was proposed as well to estimate the fire hazards of materials by interpolation and weighted average calculation. PMID:28773295
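The FED model mentioned in the two records above is a standard fire-gas toxicity index; as a hedged illustration only, the sketch below computes a simplified fractional effective dose from gas concentration histories. The concentration series and lethal exposure doses are illustrative assumptions, and this is not the authors' HI or HNF model.

```python
# Hedged sketch of a simplified fractional effective dose (FED) calculation:
# FED = sum over gases of (integrated concentration * time) / lethal exposure dose.
# Concentration histories and C*t values are hypothetical placeholders.
import numpy as np

dt_min = 0.5  # sampling interval (minutes)
conc_ppm = {            # hypothetical concentration time series
    "CO":  np.array([200, 600, 1500, 2500, 1800]),
    "HCN": np.array([5, 20, 60, 90, 50]),
}
lct50_ppm_min = {"CO": 135000.0, "HCN": 3000.0}  # illustrative lethal exposure doses

fed = sum((conc_ppm[g] * dt_min).sum() / lct50_ppm_min[g] for g in conc_ppm)
print(f"FED = {fed:.2f}")  # values approaching 1 indicate lethal exposure potential
```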
A multidimensional stability model for predicting shallow landslide size and shape across landscapes
David G. Milledge; Dino Bellugi; Jim A. McKean; Alexander L. Densmore; William E. Dietrich
2014-01-01
The size of a shallow landslide is a fundamental control on both its hazard and geomorphic importance. Existing models are either unable to predict landslide size or are computationally intensive such that they cannot practically be applied across landscapes. We derive a model appropriate for natural slopes that is capable of predicting shallow landslide size but...
Processing LiDAR Data to Predict Natural Hazards
NASA Technical Reports Server (NTRS)
Fairweather, Ian; Crabtree, Robert; Hager, Stacey
2008-01-01
ELF-Base and ELF-Hazards (wherein 'ELF' signifies 'Extract LiDAR Features' and 'LiDAR' signifies 'light detection and ranging') are developmental software modules for processing remote-sensing LiDAR data to identify past natural hazards (principally, landslides) and predict future ones. ELF-Base processes raw LiDAR data, including LiDAR intensity data that are often ignored in other software, to create digital terrain models (DTMs) and digital feature models (DFMs) with sub-meter accuracy. ELF-Hazards fuses raw LiDAR data, data from multispectral and hyperspectral optical images, and DTMs and DFMs generated by ELF-Base to generate hazard risk maps. Advanced algorithms in these software modules include line-enhancement and edge-detection algorithms, surface-characterization algorithms, and algorithms that implement innovative data-fusion techniques. The line-extraction and edge-detection algorithms enable users to locate such features as faults and landslide headwall scarps. Also implemented in this software are improved methodologies for identification and mapping of past landslide events by use of (1) accurate, ELF-derived surface characterizations and (2) three LiDAR/optical-data-fusion techniques: post-classification data fusion, maximum-likelihood estimation modeling, and hierarchical within-class discrimination. This software is expected to enable faster, more accurate forecasting of natural hazards than has previously been possible.
Seibert, Tyler M; Fan, Chun Chieh; Wang, Yunpeng; Zuber, Verena; Karunamuni, Roshan; Parsons, J Kellogg; Eeles, Rosalind A; Easton, Douglas F; Kote-Jarai, ZSofia; Al Olama, Ali Amin; Garcia, Sara Benlloch; Muir, Kenneth; Grönberg, Henrik; Wiklund, Fredrik; Aly, Markus; Schleutker, Johanna; Sipeky, Csilla; Tammela, Teuvo Lj; Nordestgaard, Børge G; Nielsen, Sune F; Weischer, Maren; Bisbjerg, Rasmus; Røder, M Andreas; Iversen, Peter; Key, Tim J; Travis, Ruth C; Neal, David E; Donovan, Jenny L; Hamdy, Freddie C; Pharoah, Paul; Pashayan, Nora; Khaw, Kay-Tee; Maier, Christiane; Vogel, Walther; Luedeke, Manuel; Herkommer, Kathleen; Kibel, Adam S; Cybulski, Cezary; Wokolorczyk, Dominika; Kluzniak, Wojciech; Cannon-Albright, Lisa; Brenner, Hermann; Cuk, Katarina; Saum, Kai-Uwe; Park, Jong Y; Sellers, Thomas A; Slavov, Chavdar; Kaneva, Radka; Mitev, Vanio; Batra, Jyotsna; Clements, Judith A; Spurdle, Amanda; Teixeira, Manuel R; Paulo, Paula; Maia, Sofia; Pandha, Hardev; Michael, Agnieszka; Kierzek, Andrzej; Karow, David S; Mills, Ian G; Andreassen, Ole A; Dale, Anders M
2018-01-10
To develop and validate a genetic tool to predict age of onset of aggressive prostate cancer (PCa) and to guide decisions of who to screen and at what age. Analysis of genotype, PCa status, and age to select single nucleotide polymorphisms (SNPs) associated with diagnosis. These polymorphisms were incorporated into a survival analysis to estimate their effects on age at diagnosis of aggressive PCa (that is, not eligible for surveillance according to National Comprehensive Cancer Network guidelines; any of Gleason score ≥7, stage T3-T4, PSA (prostate specific antigen) concentration ≥10 ng/mL, nodal metastasis, distant metastasis). The resulting polygenic hazard score is an assessment of individual genetic risk. The final model was applied to an independent dataset containing genotype and PSA screening data. The hazard score was calculated for these men to test prediction of survival free from PCa. Multiple institutions that were members of the international PRACTICAL consortium. All consortium participants of European ancestry with known age, PCa status, and quality assured custom (iCOGS) array genotype data. The development dataset comprised 31 747 men; the validation dataset comprised 6411 men. Prediction with hazard score of age of onset of aggressive cancer in validation set. In the independent validation set, the hazard score calculated from 54 single nucleotide polymorphisms was a highly significant predictor of age at diagnosis of aggressive cancer (z=11.2, P<10^-16). When men in the validation set with high scores (>98th centile) were compared with those with average scores (30th-70th centile), the hazard ratio for aggressive cancer was 2.9 (95% confidence interval 2.4 to 3.4). Inclusion of family history in a combined model did not improve prediction of onset of aggressive PCa (P=0.59), and polygenic hazard score performance remained high when family history was accounted for. Additionally, the positive predictive value of PSA screening for aggressive PCa was increased with increasing polygenic hazard score. Polygenic hazard scores can be used for personalised genetic risk estimates that can predict for age at onset of aggressive PCa. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
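Conceptually, a polygenic hazard score of the kind described above is a weighted sum of risk-allele dosages, with weights taken from a survival analysis; the sketch below shows only that arithmetic, with hypothetical SNP weights and genotypes rather than the published 54-SNP model.

```python
# Hedged sketch of computing a polygenic hazard score: a weighted sum of allele
# dosages, compared between men as a relative hazard under proportional hazards.
# Weights and genotypes below are hypothetical placeholders.
import numpy as np

weights = np.array([0.12, -0.05, 0.30, 0.08])        # per-SNP log-hazard weights
genotypes = np.array([                                # allele dosages (0, 1, 2) per man
    [0, 1, 2, 1],
    [2, 0, 1, 0],
    [1, 1, 0, 2],
])
phs = genotypes @ weights                             # polygenic hazard score per man
# Relative hazard of each man versus the cohort mean score.
print(np.exp(phs - phs.mean()))
```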
Blanton, Brian; Dresback, Kendra; Colle, Brian; Kolar, Randy; Vergara, Humberto; Hong, Yang; Leonardo, Nicholas; Davidson, Rachel; Nozick, Linda; Wachtendorf, Tricia
2018-04-25
Hurricane track and intensity can change rapidly in unexpected ways, thus making predictions of hurricanes and related hazards uncertain. This inherent uncertainty often translates into suboptimal decision-making outcomes, such as unnecessary evacuation. Representing this uncertainty is thus critical in evacuation planning and related activities. We describe a physics-based hazard modeling approach that (1) dynamically accounts for the physical interactions among hazard components and (2) captures hurricane evolution uncertainty using an ensemble method. This loosely coupled model system provides a framework for probabilistic water inundation and wind speed levels for a new, risk-based approach to evacuation modeling, described in a companion article in this issue. It combines the Weather Research and Forecasting (WRF) meteorological model, the Coupled Routing and Excess STorage (CREST) hydrologic model, and the ADvanced CIRCulation (ADCIRC) storm surge, tide, and wind-wave model to compute inundation levels and wind speeds for an ensemble of hurricane predictions. Perturbations to WRF's initial and boundary conditions and different model physics/parameterizations generate an ensemble of storm solutions, which are then used to drive the coupled hydrologic + hydrodynamic models. Hurricane Isabel (2003) is used as a case study to illustrate the ensemble-based approach. The inundation, river runoff, and wind hazard results are strongly dependent on the accuracy of the mesoscale meteorological simulations, which improves with decreasing lead time to hurricane landfall. The ensemble envelope brackets the observed behavior while providing "best-case" and "worst-case" scenarios for the subsequent risk-based evacuation model. © 2018 Society for Risk Analysis.
How well can we test probabilistic seismic hazard maps?
NASA Astrophysics Data System (ADS)
Vanneste, Kris; Stein, Seth; Camelbeeck, Thierry; Vleminckx, Bart
2017-04-01
Recent large earthquakes that gave rise to shaking much stronger than shown in probabilistic seismic hazard (PSH) maps have stimulated discussion about how well these maps forecast future shaking. These discussions have brought home the fact that although the maps are designed to achieve certain goals, we know little about how well they actually perform. As for any other forecast, this question involves verification and validation. Verification involves assessing how well the algorithm used to produce hazard maps implements the conceptual PSH model ("have we built the model right?"). Validation asks how well the model forecasts the shaking that actually occurs ("have we built the right model?"). We explore the verification issue by simulating shaking histories for an area with assumed uniform distribution of earthquakes, Gutenberg-Richter magnitude-frequency relation, Poisson temporal occurrence model, and ground-motion prediction equation (GMPE). We compare the maximum simulated shaking at many sites over time with that predicted by a hazard map generated for the same set of parameters. The Poisson model predicts that the fraction of sites at which shaking will exceed that of the hazard map is p = 1 - exp(-t/T), where t is the duration of observations and T is the map's return period. Exceedance is typically associated with infrequent large earthquakes, as observed in real cases. The ensemble of simulated earthquake histories yields distributions of fractional exceedance with mean equal to the predicted value. Hence, the PSH algorithm appears to be internally consistent and can be regarded as verified for this set of simulations. However, simulated fractional exceedances show a large scatter about the mean value that decreases with increasing t/T, increasing observation time and increasing Gutenberg-Richter a-value (combining intrinsic activity rate and surface area), but is independent of GMPE uncertainty. This scatter is due to the variability of earthquake recurrence, and so decreases as the largest earthquakes occur in more simulations. Our results are important for evaluating the performance of a hazard map based on misfits in fractional exceedance, and for assessing whether such misfit arises by chance or reflects a bias in the map. More specifically, we determined for a broad range of Gutenberg-Richter a-values theoretical confidence intervals on allowed misfits in fractional exceedance and on the percentage of hazard-map bias that can thus be detected by comparison with observed shaking histories. Given that in the real world we only have one shaking history for an area, these results indicate that even if a hazard map does not fit the observations, it is very difficult to assess its veracity, especially for low-to-moderate-seismicity regions. Because our model is a simplified version of reality, any additional uncertainty or complexity will tend to widen these confidence intervals.
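The verification idea summarized above can be checked with a few lines of simulation: under a Poisson occurrence model, the fraction of sites whose observed maximum shaking exceeds the map value over time t should scatter around 1 - exp(-t/T). The sketch below uses idealized, mutually independent sites rather than the study's full simulated catalogues, so its scatter is only indicative.

```python
# Hedged sketch: Monte Carlo check that the mean fraction of sites exceeding a
# hazard-map level over time t matches p = 1 - exp(-t/T) for a Poisson model.
# Sites are treated as independent, an idealization of the study's setup.
import numpy as np

rng = np.random.default_rng(42)
T = 475.0           # map return period (years)
t = 50.0            # observation window (years)
n_sites, n_histories = 1000, 200

# At each site, exceedances of the map level arrive as a Poisson process with rate 1/T.
counts = rng.poisson(t / T, size=(n_histories, n_sites))
fraction_exceeding = (counts > 0).mean(axis=1)      # one value per simulated history

print("predicted:", 1 - np.exp(-t / T))
print("simulated mean:", fraction_exceeding.mean(), "spread:", fraction_exceeding.std())
```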
Is Directivity Still Effective in a PSHA Framework?
NASA Astrophysics Data System (ADS)
Spagnuolo, E.; Herrero, A.; Cultrera, G.
2008-12-01
Source rupture parameters, like directivity, modulate the energy release, causing variations in the radiated signal amplitude. They therefore affect empirical predictive equations and, as a consequence, seismic hazard assessment. Classical probabilistic hazard evaluations, e.g. Cornell (1968), use very simple predictive equations based only on magnitude and distance, which do not account for variables describing the rupture process. Nowadays, however, a few predictive equations (e.g. Somerville 1997, Spudich and Chiou 2008) take rupture directivity into account, and a few implementations have been made in a PSHA framework (e.g. Convertito et al. 2006, Rowshandel 2006). In practice, these new empirical predictive models incorporate the rupture propagation effects quantitatively through the introduction of variables like rake, azimuth, rupture velocity and laterality. The contribution of all these variables is summarized in corrective factors derived from measuring differences between the real data and the predicted ones. It is therefore possible to keep the older computation, making use of a simple predictive model, and to incorporate the directivity effect through the corrective factors. Each supplementary variable implies a new integral in the parametric space, and the difficulty consists of the constraints on the parameter distribution functions. We present preliminary results for ad hoc distributions (Gaussian, uniform) in order to test the impact of incorporating directivity into PSHA models. We demonstrate that incorporating directivity in PSHA by means of the new predictive equations may lead to strong percentage variations in the hazard assessment.
Wang, S; Sun, Z; Wang, S
1996-11-01
A prospective follow-up study of 539 advanced gastric carcinoma patients after resection was undertaken between 1 January 1980 and 31 December 1989, with a follow-up rate of 95.36%. A multivariate analysis of possible factors influencing survival of these patients was performed, and predictive models of their survival rates were established with the Cox proportional hazards model. The results showed that the major significant prognostic factors influencing survival were the rate and station of lymph node metastases, type of operation, hepatic metastases, size of tumor, age and location of tumor. The most important factor was the rate of lymph node metastases. According to the regression coefficients, the predicted value (PV) of each patient was calculated; all patients were then divided into five risk groups according to PV, and predictive models of survival rates after resection were established for each group. The goodness of fit of the estimated predictive models of survival rates was checked by fitted curves and residual plots, and the estimated models tallied with the actual situation. The results suggest that patients with advanced gastric cancer after resection without lymph node metastases or hepatic metastases had a better prognosis, and their survival probability may be predicted from the predictive model of survival rates.
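As a hedged sketch of the Cox proportional hazards modelling used above, the example below fits a small model and predicts survival curves; the patient data are synthetic and the `lifelines` library is an assumption of this illustration, not something the authors used.

```python
# Hedged sketch: fit a Cox proportional hazards model and predict survival curves.
# Data are synthetic placeholders with two illustrative prognostic covariates.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "months": [12, 34, 7, 60, 25, 48, 5, 80, 18, 40],   # follow-up time
    "died":   [1, 0, 1, 0, 1, 0, 1, 1, 0, 1],           # event indicator
    "node_metastasis_rate": [0.6, 0.4, 0.8, 0.1, 0.3, 0.2, 0.9, 0.5, 0.5, 0.2],
    "tumor_size_cm":        [6.0, 3.5, 7.5, 2.0, 4.0, 3.0, 8.0, 5.0, 4.5, 3.0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="died")
cph.print_summary()
# Predicted survival curves for the first two (synthetic) patients.
print(cph.predict_survival_function(df.iloc[:2]))
```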
Current status and future needs of the BehavePlus Fire Modeling System
Patricia L. Andrews
2014-01-01
The BehavePlus Fire Modeling System is among the most widely used systems for wildland fire prediction. It is designed for use in a range of tasks including wildfire behaviour prediction, prescribed fire planning, fire investigation, fuel hazard assessment, fire model understanding, communication and research. BehavePlus is based on mathematical models for fire...
Bellón, Juan Ángel; de Dios Luna, Juan; King, Michael; Nazareth, Irwin; Motrico, Emma; GildeGómez-Barragán, María Josefa; Torres-González, Francisco; Montón-Franco, Carmen; Sánchez-Celaya, Marta; Díaz-Barreiros, Miguel Ángel; Vicens, Catalina; Moreno-Peral, Patricia
2017-04-01
Little is known about the risk of progressing to hazardous alcohol use in abstinent or low-risk drinkers. To develop and validate a simple brief risk algorithm for the onset of hazardous alcohol drinking (HAD) over 12 months for use in primary care. Prospective cohort study in 32 health centres from six Spanish provinces, with evaluations at baseline, 6 months, and 12 months. Forty-one risk factors were measured and multilevel logistic regression and inverse probability weighting were used to build the risk algorithm. The outcome was new occurrence of HAD during the study, as measured by the AUDIT. From the lists of 174 GPs, 3954 adult abstinent or low-risk drinkers were recruited. The 'predictAL-10' risk algorithm included just nine variables (10 questions): province, sex, age, cigarette consumption, perception of financial strain, having ever received treatment for an alcohol problem, childhood sexual abuse, AUDIT-C, and interaction AUDIT-C*Age. The c-index was 0.886 (95% CI = 0.854 to 0.918). The optimal cutoff had a sensitivity of 0.83 and specificity of 0.80. Excluding childhood sexual abuse from the model (the 'predictAL-9'), the c-index was 0.880 (95% CI = 0.847 to 0.913), sensitivity 0.79, and specificity 0.81. There was no statistically significant difference between the c-indexes of predictAL-10 and predictAL-9. The predictAL-10/9 is a simple and internally valid risk algorithm to predict the onset of hazardous alcohol drinking over 12 months in primary care attendees; it is a brief tool that is potentially useful for primary prevention of hazardous alcohol drinking. © British Journal of General Practice 2017.
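To make the structure of such a brief risk algorithm concrete, the sketch below fits a logistic model that includes an AUDIT-C-by-age interaction and reports a c-index; the variables, coefficients, and data are synthetic illustrations, not the predictAL-10 algorithm or its cohort.

```python
# Hedged sketch of a brief logistic risk algorithm with an interaction term,
# evaluated with the c-index (ROC AUC). All data and coefficients are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 2000
age = rng.uniform(18, 75, n)
audit_c = rng.integers(0, 5, n).astype(float)
smoker = rng.integers(0, 2, n).astype(float)

X = np.column_stack([age, audit_c, smoker, audit_c * age])   # AUDIT-C * age interaction
logit = -4 + 0.01 * age + 0.9 * audit_c + 0.5 * smoker - 0.008 * audit_c * age
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))                # synthetic onset of HAD

model = LogisticRegression(max_iter=2000).fit(X, y)
risk = model.predict_proba(X)[:, 1]
print("c-index:", roc_auc_score(y, risk))
```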
Predictive Models and Tools for Assessing Chemicals under the Toxic Substances Control Act (TSCA)
EPA has developed databases and predictive models to help evaluate the hazard, exposure, and risk of chemicals released to the environment and how workers, the general public, and the environment may be exposed to and affected by them.
A general mathematical model is developed to predict emissions of volatile organic compounds (VOCs) from hazardous or sanitary landfills. The model is analytical in nature and includes important mechanisms occurring in unsaturated subsurface landfill environme...
Multi-hazard risk analysis related to hurricanes
NASA Astrophysics Data System (ADS)
Lin, Ning
Hurricanes present major hazards to the United States. Associated with extreme winds, heavy rainfall, and storm surge, landfalling hurricanes often cause enormous structural damage to coastal regions. Hurricane damage risk assessment provides the basis for loss mitigation and related policy-making. Current hurricane risk models, however, often oversimplify the complex processes of hurricane damage. This dissertation aims to improve existing hurricane risk assessment methodology by coherently modeling the spatial-temporal processes of storm landfall, hazards, and damage. Numerical modeling technologies are used to investigate the multiplicity of hazards associated with landfalling hurricanes. The application and effectiveness of current weather forecasting technologies to predict hurricane hazards is investigated. In particular, the Weather Research and Forecasting model (WRF), with Geophysical Fluid Dynamics Laboratory (GFDL)'s hurricane initialization scheme, is applied to the simulation of the wind and rainfall environment during hurricane landfall. The WRF model is further coupled with the Advanced Circulation (AD-CIRC) model to simulate storm surge in coastal regions. A case study examines the multiple hazards associated with Hurricane Isabel (2003). Also, a risk assessment methodology is developed to estimate the probability distribution of hurricane storm surge heights along the coast, particularly for data-scarce regions, such as New York City. This methodology makes use of relatively simple models, specifically a statistical/deterministic hurricane model and the Sea, Lake and Overland Surges from Hurricanes (SLOSH) model, to simulate large numbers of synthetic surge events, and conducts statistical analysis. The estimation of hurricane landfall probability and hazards are combined with structural vulnerability models to estimate hurricane damage risk. Wind-induced damage mechanisms are extensively studied. An innovative windborne debris risk model is developed based on the theory of Poisson random measure, substantiated by a large amount of empirical data. An advanced vulnerability assessment methodology is then developed, by integrating this debris risk model and a component-based pressure damage model, to predict storm-specific or annual damage to coastal residential neighborhoods. The uniqueness of this vulnerability model lies in its detailed description of the interaction between wind pressure and windborne debris effects over periods of strong winds, which is a major mechanism leading to structural failures during hurricanes.
ABM and GIS-based multi-scenarios volcanic evacuation modelling of Merapi
NASA Astrophysics Data System (ADS)
Jumadi; Carver, Steve; Quincey, Duncan
2016-05-01
Conducting an effective evacuation is one of the keys to dealing with such a crisis. Therefore, a plan that considers the probability of the spatial extent of hazard occurrences is needed. An evacuation plan for Merapi was prepared before the eruption in 2010. However, the plan could not be carried out because the eruption magnitude was larger than predicted, and the extent of the hazardous area exceeded that of the prepared hazard model. Managing such an unpredicted situation requires adequate information that is flexible and adaptable to the current situation. Therefore, we applied an Agent-Based Model (ABM) and a Geographic Information System (GIS) using a multi-scenario hazard model to support evacuation management. The methodology and a case study for Merapi are provided.
Stochastic Earthquake Rupture Modeling Using Nonparametric Co-Regionalization
NASA Astrophysics Data System (ADS)
Lee, Kyungbook; Song, Seok Goo
2017-09-01
Accurate predictions of the intensity and variability of ground motions are essential in simulation-based seismic hazard assessment. Advanced simulation-based ground motion prediction methods have been proposed to complement the empirical approach, which suffers from the lack of observed ground motion data, especially in the near-source region for large events. It is important to quantify the variability of the earthquake rupture process for future events and to produce a number of rupture scenario models to capture the variability in simulation-based ground motion predictions. In this study, we improved the previously developed stochastic earthquake rupture modeling method by applying the nonparametric co-regionalization, which was proposed in geostatistics, to the correlation models estimated from dynamically derived earthquake rupture models. The nonparametric approach adopted in this study is computationally efficient and, therefore, enables us to simulate numerous rupture scenarios, including large events ( M > 7.0). It also gives us an opportunity to check the shape of true input correlation models in stochastic modeling after being deformed for permissibility. We expect that this type of modeling will improve our ability to simulate a wide range of rupture scenario models and thereby predict ground motions and perform seismic hazard assessment more accurately.
Predictive Models of target organ and Systemic toxicities (BOSC)
The objective of this work is to predict the hazard classification and point of departure (PoD) of untested chemicals in repeat-dose animal testing studies. We used supervised machine learning to objectively evaluate the predictive accuracy of different classification and regress...
1990-12-31
... health hazards from weapons combustion products, including rockets and missiles, became evident. Research to elucidate significant health effects of ... CO/CO2 ratios was low for all but one of the formulations. In general, if the model were to be used in its present state for health risk assessments ... Part 2: Modeling for Health Hazard Prediction.
Bellón, Juan Ángel; de Dios Luna, Juan; King, Michael; Nazareth, Irwin; Motrico, Emma; GildeGómez-Barragán, María Josefa; Torres-González, Francisco; Montón-Franco, Carmen; Sánchez-Celaya, Marta; Díaz-Barreiros, Miguel Ángel; Vicens, Catalina; Moreno-Peral, Patricia
2017-01-01
Background Little is known about the risk of progressing to hazardous alcohol use in abstinent or low-risk drinkers. Aim To develop and validate a simple brief risk algorithm for the onset of hazardous alcohol drinking (HAD) over 12 months for use in primary care. Design and setting Prospective cohort study in 32 health centres from six Spanish provinces, with evaluations at baseline, 6 months, and 12 months. Method Forty-one risk factors were measured and multilevel logistic regression and inverse probability weighting were used to build the risk algorithm. The outcome was new occurrence of HAD during the study, as measured by the AUDIT. Results From the lists of 174 GPs, 3954 adult abstinent or low-risk drinkers were recruited. The ‘predictAL-10’ risk algorithm included just nine variables (10 questions): province, sex, age, cigarette consumption, perception of financial strain, having ever received treatment for an alcohol problem, childhood sexual abuse, AUDIT-C, and interaction AUDIT-C*Age. The c-index was 0.886 (95% CI = 0.854 to 0.918). The optimal cutoff had a sensitivity of 0.83 and specificity of 0.80. Excluding childhood sexual abuse from the model (the ‘predictAL-9’), the c-index was 0.880 (95% CI = 0.847 to 0.913), sensitivity 0.79, and specificity 0.81. There was no statistically significant difference between the c-indexes of predictAL-10 and predictAL-9. Conclusion The predictAL-10/9 is a simple and internally valid risk algorithm to predict the onset of hazardous alcohol drinking over 12 months in primary care attendees; it is a brief tool that is potentially useful for primary prevention of hazardous alcohol drinking. PMID:28360074
NASA Astrophysics Data System (ADS)
Ji, Zhonghui; Li, Ning; Wu, Xianhua
2017-08-01
Based on the impact factors of precipitation anomaly identified in previous research, eight atmospheric circulation indicators in pre-winter and spring, selected by correlation analysis, were used as independent variables, and the hazard levels of the drought/flood sudden alternation index (DFSAI) were used as dependent variables to construct nonlinear, nonparametric classification and regression trees (CART) for threshold determination and hazard evaluation on bimonthly and monthly scales in the Huaihe River basin. Results show that the spring Arctic Oscillation index (AOI_S), the Asia polar vortex area index (APVAI_S), and the Asian meridional circulation index (AMCI_S) were extracted as the three main impact factors, and these proved suitable for assessing hazard levels of drought/flood sudden alternation (DFSA) disasters on the bimonthly scale. On the monthly scale, AOI_S, the northern hemisphere polar vortex intensity index in pre-winter (NHPVII_PW), and AMCI_S are the three primary variables for hazard level prediction of DFSA in May and June; NHPVII_PW, AMCI_PW, and AMCI_S are the primary variables for June and July; and NHPVII_PW and EASMI are the primary variables for July and August. The type of disaster (flood to drought, drought to flood, or no DFSA) and the hazard level under different conditions can also be obtained from each model. The hazard level and type are expressed as integers from -3 to 3, ranging from a high-level flood-to-drought disaster (level -3) to a high-level disaster of the reverse type (level 3); the middle value 0 represents no DFSA, and the high levels on both sides decrease progressively toward neutral (level 0). When AOI_S is less than -0.355, a quick turn from drought to flood is more apt to happen (level 1) on the bimonthly scale; when AOI_S is less than -1.32, the same type of disaster may occur (level 2) in May and June on the monthly scale. When NHPVII_PW is less than 341.5, a quick turn from flood to drought will occur (level -1) in June and July on the monthly scale. By analogy, the different hazard types and levels can all be judged from the optimal models. The corresponding data from 2011 to 2015 were used to verify the final models by comparing predicted and actual levels, and the models M1 (bimonthly scale), M2, and M3 (monthly scale) proved acceptable by prediction accuracy rate (predicted versus observed levels: 73%, 11/15). The CART method proposed in this research is a new attempt at short-term climate prediction.
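To show the kind of threshold rules a CART produces in this setting, the sketch below fits a small decision tree on synthetic circulation-index data and prints its splits; the data and learned thresholds are placeholders, not the published models or thresholds.

```python
# Hedged sketch of a classification and regression tree (CART) with circulation
# indices as predictors and DFSA hazard level as the class. Data are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(3)
n = 60
X = np.column_stack([
    rng.normal(0, 1, n),        # AOI_S
    rng.normal(340, 10, n),     # NHPVII_PW
    rng.normal(0, 1, n),        # AMCI_S
])
# Synthetic hazard levels in {-1, 0, 1} loosely tied to the first two indices.
y = np.where(X[:, 0] < -0.4, 1, np.where(X[:, 1] < 335, -1, 0))

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["AOI_S", "NHPVII_PW", "AMCI_S"]))
print(tree.predict([[-0.8, 345.0, 0.2]]))   # predicted DFSA hazard level
```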
Development of Algal Interspecies Correlation Estimation Models for Chemical Hazard Assessment
Web-based Interspecies Correlation Estimation (ICE) is an application developed to predict the acute toxicity of a chemical from 1 species to another taxon. Web-ICE models use the acute toxicity value for a surrogate species to predict effect values for other species, thus potent...
Asano, Junichi; Hirakawa, Akihiro; Hamada, Chikuma
2014-01-01
A cure rate model is a survival model incorporating the cure rate with the assumption that the population contains both uncured and cured individuals. It is a powerful statistical tool for prognostic studies, especially in cancer. The cure rate is important for making treatment decisions in clinical practice. The proportional hazards (PH) cure model can predict the cure rate for each patient. This contains a logistic regression component for the cure rate and a Cox regression component to estimate the hazard for uncured patients. A measure for quantifying the predictive accuracy of the cure rate estimated by the Cox PH cure model is required, as there has been a lack of previous research in this area. We used the Cox PH cure model for the breast cancer data; however, the area under the receiver operating characteristic curve (AUC) could not be estimated because many patients were censored. In this study, we used imputation-based AUCs to assess the predictive accuracy of the cure rate from the PH cure model. We examined the precision of these AUCs using simulation studies. The results demonstrated that the imputation-based AUCs were estimable and their biases were negligibly small in many cases, although ordinary AUC could not be estimated. Additionally, we introduced the bias-correction method of imputation-based AUCs and found that the bias-corrected estimate successfully compensated the overestimation in the simulation studies. We also illustrated the estimation of the imputation-based AUCs using breast cancer data. Copyright © 2014 John Wiley & Sons, Ltd.
Effect and clinical prediction of worsening renal function in acute decompensated heart failure.
Breidthardt, Tobias; Socrates, Thenral; Noveanu, Markus; Klima, Theresia; Heinisch, Corinna; Reichlin, Tobias; Potocki, Mihael; Nowak, Albina; Tschung, Christopher; Arenja, Nisha; Bingisser, Roland; Mueller, Christian
2011-03-01
We aimed to establish the prevalence and effect of worsening renal function (WRF) on survival among patients with acute decompensated heart failure. Furthermore, we sought to establish a risk score for the prediction of WRF and externally validate the previously established Forman risk score. A total of 657 consecutive patients with acute decompensated heart failure presenting to the emergency department and undergoing serial creatinine measurements were enrolled. The potential of the clinical parameters at admission to predict WRF was assessed as the primary end point. The secondary end point was all-cause mortality at 360 days. Of the 657 patients, 136 (21%) developed WRF, and 220 patients died during the first year. WRF was more common in the nonsurvivors (30% vs 41%, p = 0.03). Multivariate regression analysis found WRF to independently predict mortality (hazard ratio 1.92, p <0.01). In a single parameter model, previously diagnosed chronic kidney disease was the only independent predictor of WRF and achieved an area under the receiver operating characteristic curve of 0.60. After the inclusion of the blood gas analysis parameters into the model, history of chronic kidney disease (hazard ratio 2.13, p = 0.03), outpatient diuretics (hazard ratio 5.75, p <0.01), and bicarbonate (hazard ratio 0.91, p <0.01) were all predictive of WRF. A risk score was developed using these predictors. On receiver operating characteristic curve analysis, the Forman and Basel prediction rules achieved an area under the curve of 0.65 and 0.71, respectively. In conclusion, WRF was common in patients with acute decompensated heart failure and was linked to significantly worse outcomes. However, the clinical parameters failed to adequately predict its occurrence, making a tailored therapy approach impossible. Copyright © 2011 Elsevier Inc. All rights reserved.
Empirical Data Fusion for Convective Weather Hazard Nowcasting
NASA Astrophysics Data System (ADS)
Williams, J.; Ahijevych, D.; Steiner, M.; Dettling, S.
2009-09-01
This paper describes a statistical analysis approach to developing an automated convective weather hazard nowcast system suitable for use by aviation users in strategic route planning and air traffic management. The analysis makes use of numerical weather prediction model fields and radar, satellite, and lightning observations and derived features along with observed thunderstorm evolution data, which are aligned using radar-derived motion vectors. Using a dataset collected during the summers of 2007 and 2008 over the eastern U.S., the predictive contributions of the various potential predictor fields are analyzed for various spatial scales, lead-times and scenarios using a technique called random forests (RFs). A minimal, skillful set of predictors is selected for each scenario requiring distinct forecast logic, and RFs are used to construct an empirical probabilistic model for each. The resulting data fusion system, which ran in real-time at the National Center for Atmospheric Research during the summer of 2009, produces probabilistic and deterministic nowcasts of the convective weather hazard and assessments of the prediction uncertainty. The nowcasts' performance and results for several case studies are presented to demonstrate the value of this approach. This research has been funded by the U.S. Federal Aviation Administration to support the development of the Consolidated Storm Prediction for Aviation (CoSPA) system, which is intended to provide convective hazard nowcasts and forecasts for the U.S. Next Generation Air Transportation System (NextGen).
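A hedged sketch of how a random forest can turn multiple predictor fields into a probabilistic hazard nowcast with a simple uncertainty estimate, in the spirit of the system described above. The predictor fields, their distributions and the hazard label are invented stand-ins, not the CoSPA data.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 2000
# Hypothetical predictor fields sampled at grid points
X = np.column_stack([
    rng.gamma(2.0, 10.0, n),   # radar-derived reflectivity proxy
    rng.normal(250, 15, n),    # satellite IR brightness temperature (K)
    rng.poisson(0.5, n),       # lightning flash density
    rng.normal(1000, 800, n),  # CAPE-like instability index
])
# Synthetic "convective hazard observed" label for illustration only
y = (X[:, 0] + 0.01 * X[:, 3] + 20 * X[:, 2] + rng.normal(0, 15, n)) > 60

rf = RandomForestClassifier(n_estimators=200, min_samples_leaf=5, random_state=0)
rf.fit(X, y)

# Probabilistic nowcast: share of trees voting "hazard", plus a crude
# uncertainty estimate from the spread of per-tree votes.
p_hazard = rf.predict_proba(X)[:, 1]
votes = np.stack([tree.predict(X) for tree in rf.estimators_])
uncertainty = votes.std(axis=0)
print(p_hazard[:5], uncertainty[:5])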
Environmental Risk Assessment Strategy for Nanomaterials.
Scott-Fordsmand, Janeck J; Peijnenburg, Willie J G M; Semenzin, Elena; Nowack, Bernd; Hunt, Neil; Hristozov, Danail; Marcomini, Antonio; Irfan, Muhammad-Adeel; Jiménez, Araceli Sánchez; Landsiedel, Robert; Tran, Lang; Oomen, Agnes G; Bos, Peter M J; Hund-Rinke, Kerstin
2017-10-19
An Environmental Risk Assessment (ERA) for nanomaterials (NMs) is outlined in this paper. In contrast to other recent papers on the subject, the main data requirements, models and advancements within each of the four risk assessment domains are described, i.e., the (i) materials, (ii) release, fate and exposure, (iii) hazard and (iv) risk characterisation domains. The material, which is obviously the foundation for any risk assessment, should be described according to the legislatively required characterisation data. Characterisation data will also be used at various levels within the ERA, e.g., exposure modelling. The release, fate and exposure data and models cover the input for environmental distribution models in order to identify the potential exposure scenarios (PES) and relevant exposure scenarios (RES) and, subsequently, the possible release routes, both with regard to which compartment(s) the NMs are distributed into and the factors determining their fate within each environmental compartment. The initial outcome in the risk characterisation will be a generic Predicted Environmental Concentration (PEC), but a refined PEC can be obtained by applying specific exposure models for relevant media. The hazard information covers a variety of representative, relevant and reliable organisms and/or functions, relevant for the RES and enabling a hazard characterisation. The initial outcome will be a hazard characterisation in test systems allowing a Predicted No-Effect Concentration (PNEC) to be estimated, either based on uncertainty factors or on an NM-adapted version of the Species Sensitivity Distributions approach. The risk characterisation will either be based on a deterministic risk ratio approach (i.e., PEC/PNEC) or an overlay of probability distributions, i.e., exposure and hazard distributions, using the nano-relevant models.
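A small sketch of the two risk characterisation options mentioned above: the deterministic PEC/PNEC risk ratio, and a probabilistic overlay of exposure and hazard distributions evaluated by Monte Carlo. All concentrations and distribution parameters are illustrative assumptions, not values from the paper.

import numpy as np

rng = np.random.default_rng(2)

# Illustrative values only: deterministic risk ratio
PEC = 0.8   # Predicted Environmental Concentration (ug/L), assumed
PNEC = 2.0  # Predicted No-Effect Concentration (ug/L), assumed
print("Deterministic risk ratio PEC/PNEC:", PEC / PNEC)

# Probabilistic alternative: overlay log-normal exposure and hazard distributions
# and estimate the probability that exposure exceeds the no-effect level.
exposure = rng.lognormal(mean=np.log(PEC), sigma=0.6, size=100_000)
hazard = rng.lognormal(mean=np.log(PNEC), sigma=0.4, size=100_000)
prob_exceedance = np.mean(exposure > hazard)
print("P(exposure > no-effect concentration):", prob_exceedance)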
Environmental Risk Assessment Strategy for Nanomaterials
Scott-Fordsmand, Janeck J.; Nowack, Bernd; Hunt, Neil; Hristozov, Danail; Marcomini, Antonio; Irfan, Muhammad-Adeel; Jiménez, Araceli Sánchez; Landsiedel, Robert; Tran, Lang; Oomen, Agnes G.; Bos, Peter M. J.
2017-01-01
An Environmental Risk Assessment (ERA) for nanomaterials (NMs) is outlined in this paper. In contrast to other recent papers on the subject, the main data requirements, models and advancements within each of the four risk assessment domains are described, i.e., the (i) materials, (ii) release, fate and exposure, (iii) hazard and (iv) risk characterisation domains. The material, which is obviously the foundation for any risk assessment, should be described according to the legislatively required characterisation data. Characterisation data will also be used at various levels within the ERA, e.g., exposure modelling. The release, fate and exposure data and models cover the input for environmental distribution models in order to identify the potential exposure scenarios (PES) and relevant exposure scenarios (RES) and, subsequently, the possible release routes, both with regard to which compartment(s) the NMs are distributed into and the factors determining their fate within each environmental compartment. The initial outcome in the risk characterisation will be a generic Predicted Environmental Concentration (PEC), but a refined PEC can be obtained by applying specific exposure models for relevant media. The hazard information covers a variety of representative, relevant and reliable organisms and/or functions, relevant for the RES and enabling a hazard characterisation. The initial outcome will be a hazard characterisation in test systems allowing a Predicted No-Effect Concentration (PNEC) to be estimated, either based on uncertainty factors or on an NM-adapted version of the Species Sensitivity Distributions approach. The risk characterisation will either be based on a deterministic risk ratio approach (i.e., PEC/PNEC) or an overlay of probability distributions, i.e., exposure and hazard distributions, using the nano-relevant models. PMID:29048395
Vandermoere, Frédéric
2008-04-01
This case study examines hazard and risk perception and the perceived need for decontamination among people exposed to soil pollution. Using an ecological-symbolic approach (ESA), a multidisciplinary model is developed that draws upon psychological and sociological perspectives on risk perception and includes ecological variables by using data from experts' risk assessments. The results show that hazard perception is best predicted by objective knowledge, subjective knowledge, estimated knowledge of experts, and the assessed risks. However, experts' risk assessments induce an increase in hazard perception only when residents know the urgency of decontamination. Risk perception is best predicted by trust in risk management. Additionally, the need for decontamination relates to hazard perception, risk perception, estimated knowledge of experts, and thoughts about sustainability. In contrast to the knowledge deficit model, objective and subjective knowledge did not significantly relate to risk perception and need for decontamination. The results suggest that residents can make a distinction between hazards in terms of the seriousness of contamination on the one hand, and human health risks on the other hand. Moreover, beyond the importance of social determinants of environmental risk perception, this study shows that the output of experts' risk assessments (that is, the objective risks) can create a hazard awareness rather than an alarming risk consciousness, despite residents' distrust of scientific knowledge.
Different regulatory schemes worldwide, and in particular the preparation for the new REACH legislation in Europe, increase the reliance on estimation methods for predicting potential chemical hazard.
Crabbe, Helen; Fletcher, Tony; Close, Rebecca; Watts, Michael J; Ander, E Louise; Smedley, Pauline L; Verlander, Neville Q; Gregory, Martin; Middleton, Daniel R S; Polya, David A; Studden, Mike; Leonardi, Giovanni S
2017-12-01
Approximately one million people in the UK are served by private water supplies (PWS) where connection to the mains municipal water supply is not practical or where PWS is the preferred option. Chronic exposure to contaminants in PWS may have adverse effects on health. South West England is an area with elevated arsenic concentrations in groundwater and over 9000 domestic dwellings here are supplied by PWS. There remains uncertainty as to the extent of the population exposed to arsenic (As), and the factors predicting such exposure. We describe a hazard assessment model based on simplified geology with the potential to predict exposure to As in PWS. Households with a recorded PWS in Cornwall were recruited to take part in a water sampling programme from 2011 to 2013. Bedrock geologies were aggregated and classified into nine Simplified Bedrock Geological Categories (SBGC), plus a cross-cutting "mineralized" area. PWS were sampled by random selection within SBGCs and some 508 households volunteered for the study. Transformations of the data were explored to estimate the distribution of As concentrations for PWS by SBGC. Using the distribution per SBGC, we predict the proportion of dwellings that would be affected by high concentrations and rank the geologies according to hazard. Within most SBGCs, As concentrations were found to have log-normal distributions. Across these areas, the proportion of dwellings predicted to have drinking water over the prescribed concentration value (PCV) for As ranged from 0% to 20%. From these results, a pilot predictive model was developed to calculate the proportion of PWS above the PCV for As; the resulting hazard ranking supports local decision making and prioritization. With further development and testing, this can help local authorities predict the number of dwellings that might fail the PCV for As, based on bedrock geology. The model presented here for Cornwall could be applied in areas with similar geologies. Application of the method requires independent validation and further groundwater-derived PWS sampling on other geological formations.
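A brief sketch of the exceedance calculation described above: fit a log-normal distribution to As concentrations within one geological category and predict the proportion of supplies exceeding the PCV. The sample concentrations and the PCV value of 10 ug/L are assumptions for illustration, not the study's data.

import numpy as np
from scipy import stats

# Hypothetical arsenic concentrations (ug/L) measured in PWS within one SBGC
as_conc = np.array([0.2, 0.5, 0.8, 1.1, 1.5, 2.3, 3.0, 4.7, 6.2, 9.5, 14.0, 22.0])
pcv = 10.0  # assumed prescribed concentration value for As (ug/L)

# Fit a log-normal distribution via the log-transformed concentrations
mu, sigma = np.log(as_conc).mean(), np.log(as_conc).std(ddof=1)

# Predicted proportion of dwellings in this SBGC exceeding the PCV
p_exceed = 1.0 - stats.norm.cdf(np.log(pcv), loc=mu, scale=sigma)
print(f"Predicted proportion of PWS above the PCV: {p_exceed:.1%}")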
Lin Receives 2010 Natural Hazards Focus Group Award for Graduate Research
NASA Astrophysics Data System (ADS)
2010-11-01
Ning Lin has been awarded the Natural Hazards Focus Group Award for Graduate Research, given annually to a recent Ph.D. recipient for outstanding contributions to natural hazards research. Lin's thesis is entitled “Multi-hazard risk analysis related to hurricanes.” She is scheduled to present an invited talk in the Extreme Natural Events: Modeling, Prediction, and Mitigation session (NH20) during the 2010 AGU Fall Meeting, held 13-17 December in San Francisco, Calif. Lin will be formally presented with the award at the Natural Hazards focus group reception on 14 December 2010.
Wang, Jye; Lin, Wender; Chang, Ling-Hui
2018-01-01
The Vulnerable Elders Survey-13 (VES-13) has been used as a screening tool to identify vulnerable community-dwelling older persons for more in-depth assessment and targeted interventions. Although many studies supported its use in different populations, few have addressed Asian populations. The optimal scaling system for the VES-13 in predicting health outcomes also has not been adequately tested. This study (1) assesses the applicability of the VES-13 to predict the mortality of community-dwelling older persons in Taiwan, (2) identifies the best scaling system for the VES-13 in predicting mortality using generalized additive models (GAMs), and (3) determines whether including covariates, such as socio-demographic factors and common geriatric syndromes, improves model fitting. This retrospective longitudinal cohort study analyzed the data of 2184 community-dwelling persons 65 years old or older from the 2003 wave of the nationwide Taiwan Longitudinal Study on Aging. Cox proportional hazards models and Generalized Additive Models (GAMs) were used. The VES-13 significantly predicted the mortality of Taiwan's community-dwelling elders. A one-point increase in the VES-13 score raised the risk of death by 26% (hazard ratio, 1.26; 95% confidence interval, 1.21-1.32). The hazard ratio of death increased linearly with each additional VES-13 score point, suggesting that using a continuous scale is appropriate. Inclusion of socio-demographic factors and geriatric syndromes improved the model fitting. The VES-13 is appropriate for an Asian population. VES-13 scores linearly predict the mortality of this population. Adjusting the weighting of the physical activity items may improve the performance of the VES-13. Copyright © 2017 Elsevier B.V. All rights reserved.
A Model-Based Prioritisation Exercise for the European Water Framework Directive
Daginnus, Klaus; Gottardo, Stefania; Payá-Pérez, Ana; Whitehouse, Paul; Wilkinson, Helen; Zaldívar, José-Manuel
2011-01-01
A model-based prioritisation exercise has been carried out for the Water Framework Directive (WFD) implementation. The approach considers two aspects: the hazard of a certain chemical and its exposure levels, and focuses on aquatic ecosystems, but also takes into account hazards due to secondary poisoning, bioaccumulation through the food chain and potential human health effects. A list provided by EU Member States, Stakeholders and Non-Governmental Organizations comprising 2,034 substances was evaluated according to hazard and exposure criteria. Then 78 substances classified as "of high concern" were analysed and ranked in terms of risk ratio (Predicted Environmental Concentration/Predicted No-Effect Concentration). This exercise has been complemented by a monitoring-based prioritization exercise using data provided by Member States. The proposed approach constitutes the first step in setting the basis for an open modular screening tool that could be used for the next prioritization exercises foreseen by the WFD. PMID:21556195
Complete hazard ranking to analyze right-censored data: An ALS survival study.
Huang, Zhengnan; Zhang, Hongjiu; Boss, Jonathan; Goutman, Stephen A; Mukherjee, Bhramar; Dinov, Ivo D; Guan, Yuanfang
2017-12-01
Survival analysis represents an important outcome measure in clinical research and clinical trials; further, survival ranking may offer additional advantages in clinical trials. In this study, we developed GuanRank, a non-parametric ranking-based technique to transform patients' survival data into a linear space of hazard ranks. The transformation enables the utilization of machine learning base-learners including Gaussian process regression, Lasso, and random forest on survival data. The method was submitted to the DREAM Amyotrophic Lateral Sclerosis (ALS) Stratification Challenge. Ranked in first place, the model gave more accurate ranking predictions on the PRO-ACT ALS dataset in comparison to the Cox proportional hazards model. By utilizing right-censored data in its training process, the method demonstrated its state-of-the-art predictive power in ALS survival ranking. Its feature selection identified multiple important factors, some of which conflict with previous studies.
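A simplified stand-in for the ranking-based transformation described above (not the published GuanRank algorithm): right-censored survival data are converted into pairwise-comparability hazard ranks, which are then regressed on with an ordinary machine-learning base-learner. Features, survival times and the ranking rule are illustrative assumptions.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
n = 300
X = rng.normal(size=(n, 5))                 # hypothetical clinical features
true_risk = X[:, 0] + 0.5 * X[:, 1]
time = rng.exponential(np.exp(-true_risk))  # latent survival times
cens = rng.exponential(1.5, n)              # censoring times
obs_time = np.minimum(time, cens)
event = (time <= cens).astype(int)

def hazard_rank(obs_time, event):
    """Crude pairwise ranking of right-censored data: subject i outranks j
    (higher hazard) when i is known to fail before j was last seen."""
    n = len(obs_time)
    rank = np.zeros(n)
    for i in range(n):
        wins = comparable = 0
        for j in range(n):
            if i == j:
                continue
            if event[i] == 1 and obs_time[i] < obs_time[j]:
                wins += 1
                comparable += 1
            elif event[j] == 1 and obs_time[j] < obs_time[i]:
                comparable += 1
        rank[i] = wins / comparable if comparable else 0.5
    return rank

y_rank = hazard_rank(obs_time, event)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y_rank)
print("Predicted hazard ranks:", model.predict(X[:5]).round(2))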
Volcanic ash melting under conditions relevant to ash turbine interactions.
Song, Wenjia; Lavallée, Yan; Hess, Kai-Uwe; Kueppers, Ulrich; Cimarelli, Corrado; Dingwell, Donald B
2016-03-02
The ingestion of volcanic ash by jet engines is widely recognized as a potentially fatal hazard for aircraft operation. The high temperatures (1,200-2,000 °C) typical of jet engines exacerbate the impact of ash by provoking its melting and sticking to turbine parts. Estimation of this potential hazard is complicated by the fact that chemical composition, which affects the temperature at which volcanic ash becomes liquid, can vary widely amongst volcanoes. Here, based on experiments, we parameterize ash behaviour and develop a model to predict melting and sticking conditions for its global compositional range. The results of our experiments confirm that the common use of sand or dust proxy is wholly inadequate for the prediction of the behaviour of volcanic ash, leading to overestimates of sticking temperature and thus severe underestimates of the thermal hazard. Our model can be used to assess the deposition probability of volcanic ash in jet engines.
Rubio-Tapia, Alberto; Malamut, Georgia; Verbeek, Wieke H.M.; van Wanrooij, Roy L.J.; Leffler, Daniel A.; Niveloni, Sonia I.; Arguelles-Grande, Carolina; Lahr, Brian D.; Zinsmeister, Alan R.; Murray, Joseph A.; Kelly, Ciaran P.; Bai, Julio C.; Green, Peter H.; Daum, Severin; Mulder, Chris J.J.; Cellier, Christophe
2016-01-01
Background Refractory coeliac disease is a severe complication of coeliac disease with heterogeneous outcome. Aim To create a prognostic model to estimate survival of patients with refractory coeliac disease. Methods We evaluated predictors of 5-year mortality using Cox proportional hazards regression on subjects from a multinational registry. Bootstrap re-sampling was used to internally validate the individual factors and overall model performance. The mean of the estimated regression coefficients from 400 bootstrap models was used to derive a risk score for 5-year mortality. Results The multinational cohort was composed of 232 patients diagnosed with refractory coeliac disease across 7 centers (range of 11–63 cases per center). The median age was 53 years and 150 (64%) were women. A total of 51 subjects died during 5-year follow-up (cumulative 5-year all-cause mortality = 30%). From a multiple variable Cox proportional hazards model, the following variables were significantly associated with 5-year mortality: age at refractory coeliac disease diagnosis (per 20 year increase, hazard ratio = 2.21; 95% confidence interval: 1.38, 3.55), abnormal intraepithelial lymphocytes (hazard ratio = 2.85; 95% confidence interval: 1.22, 6.62), and albumin (per 0.5 unit increase, hazard ratio = 0.72; 95% confidence interval: 0.61, 0.85). A simple weighted 3-factor risk score was created to estimate 5-year survival. Conclusions Using data from a multinational registry and previously-reported risk factors, we create a prognostic model to predict 5-year mortality among patients with refractory coeliac disease. This new model may help clinicians to guide treatment and follow-up. PMID:27485029
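A hedged sketch of the bootstrap-averaging step described above, using the lifelines package: Cox coefficients are averaged over 400 bootstrap resamples and used as weights for a simple risk score. The covariate names mirror the abstract's predictors, but all data and effect sizes are synthetic placeholders.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(4)
n = 232
df = pd.DataFrame({
    "age": rng.normal(53, 12, n),
    "abnormal_iel": rng.binomial(1, 0.4, n),
    "albumin": rng.normal(3.5, 0.6, n),
})
risk = 0.04 * df["age"] + 1.0 * df["abnormal_iel"] - 0.6 * df["albumin"]
df["time"] = rng.exponential(np.exp(-risk)) * 5          # synthetic follow-up, years
df["death"] = rng.binomial(1, 0.3, n)                    # synthetic 5-year mortality flag

# Average Cox coefficients over bootstrap resamples to derive score weights
coefs = []
for _ in range(400):
    boot = df.sample(n=len(df), replace=True).reset_index(drop=True)
    cph = CoxPHFitter().fit(boot, duration_col="time", event_col="death")
    coefs.append(cph.params_)
weights = pd.concat(coefs, axis=1).mean(axis=1)

# Weighted risk score for each patient
df["risk_score"] = df[weights.index] @ weights
print(weights.round(3))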
Rubio-Tapia, A; Malamut, G; Verbeek, W H M; van Wanrooij, R L J; Leffler, D A; Niveloni, S I; Arguelles-Grande, C; Lahr, B D; Zinsmeister, A R; Murray, J A; Kelly, C P; Bai, J C; Green, P H; Daum, S; Mulder, C J J; Cellier, C
2016-10-01
Refractory coeliac disease is a severe complication of coeliac disease with heterogeneous outcome. To create a prognostic model to estimate survival of patients with refractory coeliac disease. We evaluated predictors of 5-year mortality using Cox proportional hazards regression on subjects from a multinational registry. Bootstrap resampling was used to internally validate the individual factors and overall model performance. The mean of the estimated regression coefficients from 400 bootstrap models was used to derive a risk score for 5-year mortality. The multinational cohort was composed of 232 patients diagnosed with refractory coeliac disease across seven centres (range of 11-63 cases per centre). The median age was 53 years and 150 (64%) were women. A total of 51 subjects died during a 5-year follow-up (cumulative 5-year all-cause mortality = 30%). From a multiple variable Cox proportional hazards model, the following variables were significantly associated with 5-year mortality: age at refractory coeliac disease diagnosis (per 20 year increase, hazard ratio = 2.21; 95% confidence interval, CI: 1.38-3.55), abnormal intraepithelial lymphocytes (hazard ratio = 2.85; 95% CI: 1.22-6.62), and albumin (per 0.5 unit increase, hazard ratio = 0.72; 95% CI: 0.61-0.85). A simple weighted three-factor risk score was created to estimate 5-year survival. Using data from a multinational registry and previously reported risk factors, we create a prognostic model to predict 5-year mortality among patients with refractory coeliac disease. This new model may help clinicians to guide treatment and follow-up. © 2016 John Wiley & Sons Ltd.
Financial Distress Prediction Using Discrete-time Hazard Model and Rating Transition Matrix Approach
NASA Astrophysics Data System (ADS)
Tsai, Bi-Huei; Chang, Chih-Huei
2009-08-01
Previous studies used a constant cut-off indicator to distinguish distressed firms from non-distressed ones in one-stage prediction models. However, the distressed cut-off indicator should shift with economic prosperity rather than remaining fixed over time. This study focuses on Taiwanese listed firms and develops financial distress prediction models based upon a two-stage method. First, the study employs firm-specific financial ratios and market factors to measure the probability of financial distress using discrete-time hazard models. Second, it focuses on macroeconomic factors and applies a rating transition matrix approach to determine the distressed cut-off indicator. The prediction models are developed using a training sample from 1987 to 2004, and their levels of accuracy are compared on a test sample from 2005 to 2007. For the one-stage prediction model, the model incorporating macroeconomic factors does not perform better than the one without them, which suggests that accuracy is not improved for one-stage models that pool firm-specific and macroeconomic factors together. In regard to the two-stage models, the negative credit cycle index reflects the worse economic conditions during the test period, so the distressed cut-off point is adjusted upward based on this negative credit cycle index. After the two-stage models employ the adjusted cut-off point to discriminate distressed firms from non-distressed ones, their misclassification error is lower than that of the one-stage models. The two-stage models presented in this paper therefore have incremental usefulness in predicting financial distress.
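A minimal sketch of a discrete-time hazard model of the kind used in the first stage above: each firm contributes one observation per period until distress or the end of the sample, and a logistic regression on the firm-period panel estimates the per-period hazard. The predictors, coefficients and the 0.05 cut-off are illustrative assumptions, not the paper's estimates.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)

# Build a synthetic firm-period panel: one row per firm-year until distress
rows = []
for firm in range(500):
    leverage = rng.uniform(0.1, 0.9)
    for year in range(1987, 2005):
        roa = rng.normal(0.05 - 0.05 * leverage, 0.05)
        excess_return = rng.normal(0, 0.2)
        hazard = 1 / (1 + np.exp(-(-5 + 4 * leverage - 8 * roa - 1.5 * excess_return)))
        distressed = rng.random() < hazard
        rows.append((firm, year, leverage, roa, excess_return, int(distressed)))
        if distressed:
            break
panel = pd.DataFrame(rows, columns=["firm", "year", "leverage", "roa", "excess_return", "distress"])

# Discrete-time hazard model: logistic regression on the firm-period data
X = sm.add_constant(panel[["leverage", "roa", "excess_return"]])
model = sm.Logit(panel["distress"], X).fit(disp=0)
print(model.params.round(3))

# Classification uses a cut-off on the predicted hazard; the paper argues this
# cut-off should shift with the credit cycle rather than staying fixed.
cutoff = 0.05
panel["flagged"] = model.predict(X) > cutoff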
Li, Huixia; Luo, Miyang; Luo, Jiayou; Zheng, Jianfei; Zeng, Rong; Du, Qiyun; Fang, Junqun; Ouyang, Na
2016-11-23
A risk prediction model for non-syndromic cleft lip with or without cleft palate (NSCL/P) was established by discriminant analysis to predict the individual risk of NSCL/P in pregnant women. A hospital-based case-control study was conducted with 113 cases of NSCL/P and 226 controls without NSCL/P. The cases and controls were obtained from 52 birth defect surveillance hospitals in Hunan Province, China. A questionnaire was administered in face-to-face interviews to collect the variables relevant to NSCL/P. Logistic regression models were used to analyze the influencing factors of NSCL/P, and a stepwise Fisher discriminant analysis was subsequently used to construct the prediction model. In the univariate analysis, 13 influencing factors were related to NSCL/P, of which the following 8 were retained as predictors in the discriminant prediction model: family income, maternal occupational hazard exposure, premarital medical examination, housing renovation, milk/soymilk intake in the first trimester of pregnancy, paternal occupational hazard exposure, paternal strong tea drinking, and family history of NSCL/P. The model was statistically significant (lambda = 0.772, chi-square = 86.044, df = 8, P < 0.001). Self-verification showed that 83.8% of the participants were correctly classified as NSCL/P cases or controls, with a sensitivity of 74.3% and a specificity of 88.5%. The area under the receiver operating characteristic curve (AUC) was 0.846. The prediction model established using these risk factors can be useful for predicting the risk of NSCL/P. Further research is needed to improve the model and to confirm its validity and reliability.
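A brief sketch of a discriminant prediction model of this kind using scikit-learn's LinearDiscriminantAnalysis; the eight predictors are represented by synthetic binary variables with assumed prevalences, so the sensitivity, specificity and AUC printed here are illustrative only.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
n_case, n_ctrl = 113, 226

def sample(p_case, p_ctrl):
    # Binary predictor: cases first, then controls
    return np.concatenate([rng.binomial(1, p_case, n_case), rng.binomial(1, p_ctrl, n_ctrl)])

# Synthetic stand-ins for the 8 retained predictors (occupational hazard
# exposure, housing renovation, family history of NSCL/P, ...)
X = np.column_stack([sample(p, p - 0.15) for p in
                     [0.5, 0.45, 0.6, 0.4, 0.55, 0.35, 0.3, 0.25]])
y = np.concatenate([np.ones(n_case), np.zeros(n_ctrl)])

lda = LinearDiscriminantAnalysis().fit(X, y)
pred = lda.predict(X)
scores = lda.decision_function(X)
sensitivity = pred[y == 1].mean()
specificity = 1 - pred[y == 0].mean()
print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}, "
      f"AUC={roc_auc_score(y, scores):.3f}")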
2017-01-01
Producing predictions of the probabilistic risks of operating materials for given lengths of time at stated operating conditions requires the assimilation of existing deterministic creep life prediction models (which only predict the average failure time) with statistical models that capture the random component of creep. To date, these approaches have rarely been combined to achieve this objective. The first half of this paper therefore provides a summary review of some statistical models to help bridge the gap between the two approaches. The second half of the paper illustrates one possible assimilation using 1Cr1Mo-0.25V steel. The Wilshire equation for creep life prediction is integrated into a discrete hazard-based statistical model; the former was chosen for its novelty and proven capability in accurately predicting average failure times, and the latter for its flexibility in modelling the failure time distribution. Using this model it was found that, for example, if this material had been in operation for around 15 years at 823 K and 130 MPa, the chance of failure in the next year is around 35%. However, if the material had been in operation for around 25 years, the chance of failure in the next year rises dramatically to around 80%. PMID:29039773
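A worked sketch of the conditional failure probability that underlies statements like those above: given survival to time t, P(fail in (t, t+1] | survived to t) = (S(t) - S(t+1)) / S(t). The Weibull survival function and its shape/scale values below are assumptions for illustration, not the paper's fitted Wilshire/discrete-hazard model, so the printed numbers will not reproduce the quoted 35% and 80%.

import numpy as np

def conditional_failure_prob(S, t, dt=1.0):
    """P(fail in (t, t+dt] | survived to t) = (S(t) - S(t+dt)) / S(t)."""
    return (S(t) - S(t + dt)) / S(t)

# Illustrative Weibull survival function (shape and scale are assumptions)
shape, scale = 4.0, 22.0  # years
S = lambda t: np.exp(-(t / scale) ** shape)

for t in (15.0, 25.0):
    print(f"After {t:.0f} years in service: "
          f"P(failure in next year) = {conditional_failure_prob(S, t):.0%}")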
Landslide modeling and forecasting—recent progress by the u.s. geological survey
Baum, Rex L.; Kean, Jason W.
2015-01-01
Landslide studies by the U.S. Geological Survey (USGS) are focused on two main objectives: scientific understanding and forecasting. The first objective is to gain better understanding of the physical processes involved in landslide initiation and movement. This objective is largely in support of the second objective, to develop predictive capabilities to answer the main hazard questions. Answers to the following six questions are needed to characterize the hazard from landslides: (1) Where will landslides occur? (2) What kind(s) of landslides will occur? (3) When will landslides occur? (4) How big will the landslides be? (5) How fast will the landslides travel? (6) How far will the landslides go? Although these questions are sometimes recast in different terms, such as frequency or recurrence rather than timing (when), the questions or their variants address the spatial, physical, and temporal aspects of landslide hazards. Efforts to develop modeling and forecasting capabilities by the USGS are primarily focused on specific landslide types that pose a high degree of hazard and show relatively high potential for predictability.
Physics-Based Hazard Assessment for Critical Structures Near Large Earthquake Sources
NASA Astrophysics Data System (ADS)
Hutchings, L.; Mert, A.; Fahjan, Y.; Novikova, T.; Golara, A.; Miah, M.; Fergany, E.; Foxall, W.
2017-09-01
We argue that for critical structures near large earthquake sources: (1) the ergodic assumption, recent history, and simplified descriptions of the hazard are not appropriate to rely on for earthquake ground motion prediction and can lead to a mis-estimation of the hazard and risk to structures; (2) a physics-based approach can address these issues; (3) a physics-based source model must be provided to generate realistic phasing effects from finite rupture and model near-source ground motion correctly; (4) wave propagations and site response should be site specific; (5) a much wider search of possible sources of ground motion can be achieved computationally with a physics-based approach; (6) unless one utilizes a physics-based approach, the hazard and risk to structures has unknown uncertainties; (7) uncertainties can be reduced with a physics-based approach, but not with an ergodic approach; (8) computational power and computer codes have advanced to the point that risk to structures can be calculated directly from source and site-specific ground motions. Spanning the variability of potential ground motion in a predictive situation is especially difficult for near-source areas, but that is the distance at which the hazard is the greatest. The basis of a "physical-based" approach is ground-motion syntheses derived from physics and an understanding of the earthquake process. This is an overview paper and results from previous studies are used to make the case for these conclusions. Our premise is that 50 years of strong motion records is insufficient to capture all possible ranges of site and propagation path conditions, rupture processes, and spatial geometric relationships between source and site. Predicting future earthquake scenarios is necessary; models that have little or no physical basis but have been tested and adjusted to fit available observations can only "predict" what happened in the past, which should be considered description as opposed to prediction. We have developed a methodology for synthesizing physics-based broadband ground motion that incorporates the effects of realistic earthquake rupture along specific faults and the actual geology between the source and site.
Chen, Jin-hong; Wu, Hai-yun; He, Kun-lun; He, Yao; Qin, Yin-he
2010-10-01
To establish and verify a prediction model for ischemic cardiovascular disease (ICVD) in an elderly population covered by current health care programs. In May 2003, a statistical analysis was carried out on data from physical examinations, hospitalizations in previous years, questionnaires and telephone interviews. The data came from a hospital implementing a health care program. The baseline population was randomly split in a 4:1 ratio into a modeling group and a verification group. Baseline data from the verification group were entered into the regression model derived from the modeling group to generate predicted values. Discriminative ability was assessed with the area under the ROC curve, and predictive accuracy was verified by comparing the predicted and actual incidence rates in each decile group using the Hosmer-Lemeshow test. Predictive accuracy at the population level was verified by comparing the predicted 6-year incidence rates of ICVD with the actual 6-year cumulative incidence rates and calculating the error rate. The sample included 2271 males over the age of 65, with 1817 people in the modeling population and 454 in the verification population. The sample was stratified into two layers to establish a stratified Cox proportional hazards regression model: an advanced-age group (75 years old or older) and an elderly group (less than 75 years old). The analysis showed that the risk factors in the elderly group were age, systolic blood pressure, serum creatinine level and fasting blood glucose level, while the protective factor was high-density lipoprotein; in the advanced-age group, the risk factors were body mass index, systolic blood pressure, serum total cholesterol level, serum creatinine level and fasting blood glucose level, while the protective factor was HDL-C. The area under the ROC curve (AUC) was 0.723 (95% CI 0.687-0.759), indicating good discrimination. Individual predicted and actual ICVD cumulative incidences were compared using the Hosmer-Lemeshow test (χ2 = 1.43, P = 0.786), showing good predictive accuracy. A stratified Cox proportional hazards regression model was thus established to predict ICVD in aged males under a health care program. The prediction factors common to the two age groups were systolic blood pressure, serum creatinine level, fasting blood glucose level and HDL-C. The AUC in the verification group was 0.723, showing good discriminative ability, and predictive performance was satisfactory at both the individual and the group level. Using a Cox proportional hazards regression model to predict ICVD in these population groups is feasible.
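A hedged sketch of an age-stratified Cox proportional hazards model with the lifelines package, in the spirit of the model above. This simplified version shares covariate effects across strata (the study fit stratum-specific predictors), and all covariate values, effect sizes and event rates are synthetic placeholders.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 1817
df = pd.DataFrame({
    "advanced_age": rng.binomial(1, 0.35, n),   # stratum: 1 if >= 75 years old
    "sbp": rng.normal(140, 20, n),              # systolic blood pressure
    "creatinine": rng.normal(90, 20, n),
    "glucose": rng.normal(5.8, 1.2, n),
    "hdl_c": rng.normal(1.3, 0.3, n),
})
risk = (0.02 * (df["sbp"] - 140) + 0.01 * (df["creatinine"] - 90)
        + 0.2 * (df["glucose"] - 5.8) - 0.8 * (df["hdl_c"] - 1.3))
df["time"] = rng.exponential(np.exp(-risk)) * 6.0      # synthetic follow-up, years
df["icvd"] = (rng.random(n) < 0.15).astype(int)        # synthetic event flag

# Stratified Cox PH model: separate baseline hazards per age layer
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="icvd", strata=["advanced_age"])
cph.print_summary()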
A spectral clustering search algorithm for predicting shallow landslide size and location
Dino Bellugi; David G. Milledge; William E. Dietrich; Jim A. McKean; J. Taylor Perron; Erik B. Sudderth; Brian Kazian
2015-01-01
The potential hazard and geomorphic significance of shallow landslides depend on their location and size. Commonly applied one-dimensional stability models do not include lateral resistances and cannot predict landslide size. Multi-dimensional models must be applied to specific geometries, which are not known a priori, and testing all possible geometries is...
Landslide Hazard from Coupled Inherent and Dynamic Probabilities
NASA Astrophysics Data System (ADS)
Strauch, R. L.; Istanbulluoglu, E.; Nudurupati, S. S.
2015-12-01
Landslide hazard research has typically been conducted independently from hydroclimate research. We sought to unify these two lines of research to provide regional scale landslide hazard information for risk assessments and resource management decision-making. Our approach couples an empirical inherent landslide probability, based on a frequency ratio analysis, with a numerical dynamic probability, generated by combining subsurface water recharge and surface runoff from the Variable Infiltration Capacity (VIC) macro-scale land surface hydrologic model with a finer resolution probabilistic slope stability model. Landslide hazard mapping is advanced by combining static and dynamic models of stability into a probabilistic measure of geohazard prediction in both space and time. This work will aid resource management decision-making in current and future landscape and climatic conditions. The approach is applied as a case study in North Cascade National Park Complex in northern Washington State.
Alves, Vinicius M.; Muratov, Eugene; Fourches, Denis; Strickland, Judy; Kleinstreuer, Nicole; Andrade, Carolina H.; Tropsha, Alexander
2015-01-01
Repetitive exposure to a chemical agent can induce an immune reaction in inherently susceptible individuals that leads to skin sensitization. Although many chemicals have been reported as skin sensitizers, there have been very few rigorously validated QSAR models with defined applicability domains (AD) that were developed using a large group of chemically diverse compounds. In this study, we have aimed to compile, curate, and integrate the largest publicly available dataset related to chemically-induced skin sensitization, use these data to generate rigorously validated QSAR models for skin sensitization, and employ these models as a virtual screening tool for identifying putative sensitizers among environmental chemicals. We followed best practices for model building and validation implemented with our predictive QSAR workflow using a random forest modeling technique in combination with SiRMS and Dragon descriptors. The Correct Classification Rate (CCR) for QSAR models discriminating sensitizers from non-sensitizers was 71–88% when evaluated on several external validation sets, within a broad AD, with positive (for sensitizers) and negative (for non-sensitizers) predicted rates of 85% and 79% respectively. When compared to the skin sensitization module included in the OECD QSAR toolbox as well as to the skin sensitization model in publicly available VEGA software, our models showed a significantly higher prediction accuracy for the same sets of external compounds as evaluated by Positive Predicted Rate, Negative Predicted Rate, and CCR. These models were applied to identify putative chemical hazards in the ScoreCard database of possible skin or sense organ toxicants as primary candidates for experimental validation. PMID:25560674
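A generic sketch of the random-forest QSAR workflow with external validation and a CCR metric, as described above. The "descriptors" are random numbers standing in for SiRMS/Dragon descriptors, and the split and metrics are only meant to show how CCR (balanced accuracy) and the per-class predicted rates can be computed.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score, recall_score

rng = np.random.default_rng(8)
n, n_desc = 1000, 50
X = rng.normal(size=(n, n_desc))                                   # synthetic molecular descriptors
y = (X[:, :5].sum(axis=1) + rng.normal(0, 1, n) > 0).astype(int)   # 1 = sensitizer (synthetic label)

X_train, X_ext, y_train, y_ext = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)
y_pred = rf.predict(X_ext)

ccr = balanced_accuracy_score(y_ext, y_pred)      # mean of the two class recalls
ppr = recall_score(y_ext, y_pred, pos_label=1)    # recall for sensitizers
npr = recall_score(y_ext, y_pred, pos_label=0)    # recall for non-sensitizers
print(f"CCR={ccr:.2f}, sensitizer recall={ppr:.2f}, non-sensitizer recall={npr:.2f}")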
NASA Astrophysics Data System (ADS)
Weatherill, Graeme; Garcia, Julio; Poggi, Valerio; Chen, Yen-Shin; Pagani, Marco
2016-04-01
The Global Earthquake Model (GEM) has, since its inception in 2009, made many contributions to the practice of seismic hazard modeling in different regions of the globe. The OpenQuake-engine (hereafter referred to simply as OpenQuake), GEM's open-source software for calculation of earthquake hazard and risk, has found application in many countries, spanning a diversity of tectonic environments. GEM itself has produced a database of national and regional seismic hazard models, harmonizing into OpenQuake's own definition the varied seismogenic sources found therein. The characterization of active faults in probabilistic seismic hazard analysis (PSHA) is at the centre of this process, motivating many of the developments in OpenQuake and presenting hazard modellers with the challenge of reconciling seismological, geological and geodetic information for the different regions of the world. Faced with these challenges, and from the experience gained in the process of harmonizing existing models of seismic hazard, four critical issues are addressed. The challenge GEM has faced in the development of software is how to define a representation of an active fault (both in terms of geometry and earthquake behaviour) that is sufficiently flexible to adapt to different tectonic conditions and levels of data completeness. By exploring the different fault typologies supported by OpenQuake we illustrate how seismic hazard calculations can, and do, take into account complexities such as geometrical irregularity of faults in the prediction of ground motion, highlighting some of the potential pitfalls and inconsistencies that can arise. This exploration leads to the second main challenge in active fault modeling, what elements of the fault source model impact most upon the hazard at a site, and when does this matter? Through a series of sensitivity studies we show how different configurations of fault geometry, and the corresponding characterisation of near-fault phenomena (including hanging wall and directivity effects) within modern ground motion prediction equations, can have an influence on the seismic hazard at a site. Yet we also illustrate the conditions under which these effects may be partially tempered when considering the full uncertainty in rupture behaviour within the fault system. The third challenge is the development of efficient means for representing both aleatory and epistemic uncertainties from active fault models in PSHA. In implementing state-of-the-art seismic hazard models into OpenQuake, such as those recently undertaken in California and Japan, new modeling techniques are needed that redefine how we treat interdependence of ruptures within the model (such as mutual exclusivity), and the propagation of uncertainties emerging from geology. Finally, we illustrate how OpenQuake, and GEM's additional toolkits for model preparation, can be applied to address long-standing issues in active fault modeling in PSHA. These include constraining the seismogenic coupling of a fault and the partitioning of seismic moment between the active fault surfaces and the surrounding seismogenic crust. We illustrate some of the possible roles that geodesy can play in the process, but highlight where this may introduce new uncertainties and potential biases into the seismic hazard process, and how these can be addressed.
Forecasting extreme temperature health hazards in Europe
NASA Astrophysics Data System (ADS)
Di Napoli, Claudia; Pappenberger, Florian; Cloke, Hannah L.
2017-04-01
Extreme hot temperatures, such as those experienced during a heat wave, represent a dangerous meteorological hazard to human health. Heat disorders such as sunstroke are harmful to people of all ages and responsible for excess mortality in the affected areas. In 2003 more than 50,000 people died in western and southern Europe because of a severe and sustained episode of summer heat [1]. Furthermore, according to the Intergovernmental Panel on Climate Change heat waves are expected to get more frequent in the future thus posing an increasing threat to human lives. Developing appropriate tools for extreme hot temperatures prediction is therefore mandatory to increase public preparedness and mitigate heat-induced impacts. A recent study has shown that forecasts of the Universal Thermal Climate Index (UTCI) provide a valid overview of extreme temperature health hazards on a global scale [2]. UTCI is a parameter related to the temperature of the human body and its regulatory responses to the surrounding atmospheric environment. UTCI is calculated using an advanced thermo-physiological model that includes the human heat budget, physiology and clothing. To forecast UTCI the model uses meteorological inputs, such as 2m air temperature, 2m water vapour pressure and wind velocity at body height derived from 10m wind speed, from NWP models. Here we examine the potential of UTCI as an extreme hot temperature prediction tool for the European area. UTCI forecasts calculated using above-mentioned parameters from ECMWF models are presented. The skill in predicting UTCI for medium lead times is also analysed and discussed for implementation to international health-hazard warning systems. This research is supported by the ANYWHERE project (EnhANcing emergencY management and response to extreme WeatHER and climate Events) which is funded by the European Commission's HORIZON2020 programme. [1] Koppe C. et al., Heat waves: risks and responses. World Health Organization. Health and Global Environmental Change, Series No. 2, Copenhagen, Denmark, 2004. [2] Pappenberger F. et al., Global forecasting of thermal health hazards: the skill of probabilistic predictions of the Universal Thermal Climate Index (UTCI), International Journal of Biometeorology 59(3): 311-323, 2015.
NASA Astrophysics Data System (ADS)
Tang, Xiao-Wei; Bai, Xu; Hu, Ji-Lei; Qiu, Jiang-Nan
2018-05-01
Liquefaction-induced hazards such as sand boils, ground cracks, settlement, and lateral spreading are responsible for considerable damage to engineering structures during major earthquakes. Presently, there is no effective empirical approach that can assess different liquefaction-induced hazards in one model. This is because of the uncertainties and complexity of the factors related to seismic liquefaction and liquefaction-induced hazards. In this study, Bayesian networks (BNs) are used to integrate multiple factors related to seismic liquefaction, sand boils, ground cracks, settlement, and lateral spreading into a model based on standard penetration test data. The constructed BN model can assess four different liquefaction-induced hazards together. In a case study, the BN method outperforms an artificial neural network and Ishihara and Yoshimine's simplified method in terms of accuracy, Brier score, recall, precision, and area under the curve (AUC) of the receiver operating characteristic (ROC). This demonstrates that the BN method is a good alternative tool for the risk assessment of liquefaction-induced hazards. Furthermore, the performance of the BN model in estimating liquefaction-induced hazards in Japan's 2011 Tōhoku earthquake confirms its correctness and reliability compared with the liquefaction potential index approach. The proposed BN model can also predict whether the soil becomes liquefied after an earthquake and can deduce the chain reaction process of liquefaction-induced hazards and perform backward reasoning. The assessment results from the proposed model provide informative guidelines for decision-makers to detect the damage state of a field following liquefaction.
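An illustrative toy Bayesian network (not the paper's SPT-based model) showing the forward and backward reasoning described above, assuming the pgmpy library is available. The structure and all conditional probabilities are invented for demonstration.

from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Toy structure: liquefaction influences sand boils and settlement
model = BayesianNetwork([("Liquefaction", "SandBoils"), ("Liquefaction", "Settlement")])

cpd_liq = TabularCPD("Liquefaction", 2, [[0.7], [0.3]])            # P(no), P(yes)
cpd_sb = TabularCPD("SandBoils", 2, [[0.95, 0.4], [0.05, 0.6]],
                    evidence=["Liquefaction"], evidence_card=[2])
cpd_set = TabularCPD("Settlement", 2, [[0.9, 0.3], [0.1, 0.7]],
                     evidence=["Liquefaction"], evidence_card=[2])
model.add_cpds(cpd_liq, cpd_sb, cpd_set)
assert model.check_model()

infer = VariableElimination(model)
# Forward reasoning: hazard probability given that liquefaction occurred
print(infer.query(["SandBoils"], evidence={"Liquefaction": 1}))
# Backward reasoning: probability that liquefaction occurred given observed sand boils
print(infer.query(["Liquefaction"], evidence={"SandBoils": 1}))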
RiskScape Volcano: Development of a risk assessment tool for volcanic hazards
NASA Astrophysics Data System (ADS)
Deligne, Natalia; King, Andrew; Jolly, Gill; Wilson, Grant; Wilson, Tom; Lindsay, Jan
2013-04-01
RiskScape is a multi-hazard risk assessment tool developed by GNS Science and the National Institute of Water and Atmospheric Research Ltd. (NIWA) in New Zealand that models the risk and impact of various natural hazards on a given built environment. RiskScape has a modular structure: the hazard module models hazard exposure (e.g., ash thickness at a given location), the asset module catalogues assets (built environment, infrastructure, and people) and their attributes exposed to the hazard, and the vulnerability module models the consequences of asset exposure to the hazard. Hazards presently included in RiskScape are earthquakes, river floods, tsunamis, windstorms, and ash from volcanic eruptions (specifically from Ruapehu). Here we present our framework for incorporating other volcanic hazards (e.g., pyroclastic density currents, lava flows, lahars, ground deformation) into RiskScape along with our approach for assessing asset vulnerability. We also will discuss the challenges of evaluating risk for 'point source' (e.g., stratovolcanoes) vs 'diffuse' (e.g., volcanic fields) volcanism using Ruapehu and the Auckland volcanic field as examples. Once operational, RiskScape Volcano will be a valuable resource both in New Zealand and internationally as a practical tool for evaluating risk and also as an example for how to predict the consequences of volcanic eruptions on both rural and urban environments.
NASA Astrophysics Data System (ADS)
Sparks, S. R.
2008-12-01
Volcanic eruptions in arcs are complex natural phenomena, involving the movement of magma to the Earth's surface and interactions with the surrounding crust during ascent and with the surface environment during eruption, resulting in secondary hazards. Magma changes its properties profoundly during ascent and eruption, and many of the underlying processes of heat and mass transfer and physical property changes that govern volcanic flows and magmatic interactions with the environment are highly non-linear. Major direct hazards include tephra fall, pyroclastic flows from explosions and dome collapse, volcanic blasts, lahars, debris avalanches and tsunamis. There are also health hazards related to emissions of gases and very fine volcanic ash. These hazards and progress in their assessment are illustrated mainly from the ongoing eruption of the Soufriere Hills volcano, Montserrat. There are both epistemic and aleatory uncertainties in the assessment of volcanic hazards, which can be large, making precise prediction a formidable objective. Indeed, in certain respects volcanic systems and hazardous phenomena may be intrinsically unpredictable. As with other natural phenomena, predictions and hazards inevitably have to be expressed in probabilistic terms that take account of these uncertainties. Despite these limitations, significant progress is being made in the ability to anticipate volcanic activity in volcanic arcs and, in favourable circumstances, make robust hazard assessments and predictions. Improvements in monitoring ground deformation, gas emissions and seismicity are being combined with more advanced models of volcanic flows and their interactions with the environment. In addition, more structured and systematic methods for assessing hazards and risk are emerging that allow impartial advice to be given to authorities during volcanic crises. There remain significant issues of how scientific advice and associated uncertainties are communicated to provide effective mitigation during volcanic crises.
Extreme Rock Distributions on Mars and Implications for Landing Safety
NASA Technical Reports Server (NTRS)
Golombek, M. P.
2001-01-01
Prior to the landing of Mars Pathfinder, the size-frequency distribution of rocks from the two Viking landing sites and Earth analog surfaces was used to derive a size-frequency model for nominal rock distributions on Mars. This work, coupled with extensive testing of the Pathfinder airbag landing system, allowed an estimate of what total rock abundances derived from thermal differencing techniques could be considered safe for landing. Predictions based on this model proved largely correct in predicting the size-frequency distribution of rocks at the Mars Pathfinder site and the fraction of potentially hazardous rocks. In this abstract, extreme rock distributions observed in Mars Orbiter Camera (MOC) images are compared with those observed at the three landing sites and model distributions as an additional constraint on potentially hazardous surfaces on Mars.
Development and analysis of air quality modeling simulations for hazardous air pollutants
NASA Astrophysics Data System (ADS)
Luecken, D. J.; Hutzell, W. T.; Gipson, G. L.
The concentrations of five hazardous air pollutants were simulated using the community multi-scale air quality (CMAQ) modeling system. Annual simulations were performed over the continental United States for the entire year of 2001 to support human exposure estimates. Results are shown for formaldehyde, acetaldehyde, benzene, 1,3-butadiene and acrolein. Photochemical production in the atmosphere is predicted to dominate ambient formaldehyde and acetaldehyde concentrations, and to account for a significant fraction of ambient acrolein concentrations. Spatial and temporal variations are large throughout the domain over the year. Predicted concentrations are compared with observations for formaldehyde, acetaldehyde, benzene and 1,3-butadiene. Although the modeling results indicate an overall slight tendency towards underprediction, they reproduce episodic and seasonal behavior of pollutant concentrations at many monitors with good skill.
Modeling Rabbit Responses to Single and Multiple Aerosol ...
Survival models are developed here to predict response and time-to-response for mortality in rabbits following exposures to single or multiple aerosol doses of Bacillus anthracis spores. Hazard function models were developed for a multiple dose dataset to predict the probability of death through specifying dose-response functions and the time between exposure and the time-to-death (TTD). Among the models developed, the best-fitting survival model (baseline model) has an exponential dose-response model with a Weibull TTD distribution. Alternative models assessed employ different underlying dose-response functions and use the assumption that, in a multiple dose scenario, earlier doses affect the hazard functions of each subsequent dose. In addition, published mechanistic models are analyzed and compared with models developed in this paper. None of the alternative models that were assessed provided a statistically significant improvement in fit over the baseline model. The general approach utilizes simple empirical data analysis to develop parsimonious models with limited reliance on mechanistic assumptions. The baseline model predicts TTDs consistent with reported results from three independent high-dose rabbit datasets. More accurate survival models depend upon future development of dose-response datasets specifically designed to assess potential multiple dose effects on response and time-to-response. The process used in this paper to dev
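A minimal sketch of the baseline model form named above: an exponential dose-response for the probability of a lethal outcome, combined with a Weibull distribution for time-to-death. The dose, rate constant and Weibull parameters are illustrative assumptions, not the fitted values from the study.

import numpy as np

def p_death(dose, k=2e-7):
    """Exponential dose-response: probability of death from an inhaled dose (spores)."""
    return 1.0 - np.exp(-k * dose)

def ttd_survival(t, shape=1.8, scale=4.0):
    """Weibull survival for time-to-death (days), conditional on a lethal outcome."""
    return np.exp(-(t / scale) ** shape)

dose = 1e7  # illustrative aerosol dose in spores
p = p_death(dose)
for t in (2, 4, 7):
    # Unconditional probability of having died by day t
    print(f"day {t}: P(death by t) = {p * (1 - ttd_survival(t)):.2f}")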
Hazardous Traffic Event Detection Using Markov Blanket and Sequential Minimal Optimization (MB-SMO)
Yan, Lixin; Zhang, Yishi; He, Yi; Gao, Song; Zhu, Dunyao; Ran, Bin; Wu, Qing
2016-01-01
The ability to identify hazardous traffic events is considered one of the most effective solutions for reducing the occurrence of crashes. Only certain particular hazardous traffic events have been studied previously, mainly on the basis of dedicated video stream data and GPS data. The objective of this study is twofold: (1) the Markov blanket (MB) algorithm is employed to extract the main factors associated with hazardous traffic events; (2) a model is developed to identify hazardous traffic events using driving characteristics, vehicle trajectory, and vehicle position data. Twenty-two licensed drivers were recruited to carry out a natural driving experiment in Wuhan, China, and multi-sensor information data were collected for different types of traffic events. The results indicated that a vehicle's speed, the standard deviation of speed, the standard deviation of skin conductance, the standard deviation of brake pressure, turn signal, the acceleration of steering, the standard deviation of acceleration, and the acceleration in Z (G) have significant influences on hazardous traffic events. The sequential minimal optimization (SMO) algorithm was adopted to build the identification model, and the accuracy of prediction was higher than 86%. Moreover, compared with other detection algorithms, the MB-SMO algorithm was ranked best in terms of the prediction accuracy. The conclusions can provide reference evidence for the development of dangerous situation warning products and the design of intelligent vehicles. PMID:27420073
Hazardous Traffic Event Detection Using Markov Blanket and Sequential Minimal Optimization (MB-SMO).
Yan, Lixin; Zhang, Yishi; He, Yi; Gao, Song; Zhu, Dunyao; Ran, Bin; Wu, Qing
2016-07-13
The ability to identify hazardous traffic events is considered one of the most effective solutions for reducing the occurrence of crashes. Only certain particular hazardous traffic events have been studied previously, mainly on the basis of dedicated video stream data and GPS data. The objective of this study is twofold: (1) the Markov blanket (MB) algorithm is employed to extract the main factors associated with hazardous traffic events; (2) a model is developed to identify hazardous traffic events using driving characteristics, vehicle trajectory, and vehicle position data. Twenty-two licensed drivers were recruited to carry out a natural driving experiment in Wuhan, China, and multi-sensor information data were collected for different types of traffic events. The results indicated that a vehicle's speed, the standard deviation of speed, the standard deviation of skin conductance, the standard deviation of brake pressure, turn signal, the acceleration of steering, the standard deviation of acceleration, and the acceleration in Z (G) have significant influences on hazardous traffic events. The sequential minimal optimization (SMO) algorithm was adopted to build the identification model, and the accuracy of prediction was higher than 86%. Moreover, compared with other detection algorithms, the MB-SMO algorithm was ranked best in terms of the prediction accuracy. The conclusions can provide reference evidence for the development of dangerous situation warning products and the design of intelligent vehicles.
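A hedged sketch of an SMO-style classifier for hazardous-event detection: scikit-learn's SVC (backed by libsvm, an SMO-type solver) trained on driving features. The features below are synthetic stand-ins for the selected factors listed above, and the cross-validated accuracy is illustrative only.

import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
n = 800
# Synthetic stand-ins for speed, std of speed, std of skin conductance,
# std of brake pressure, turn signal, steering acceleration, etc.
X = rng.normal(size=(n, 8))
y = (1.2 * X[:, 0] + 0.8 * X[:, 1] - 0.9 * X[:, 3] + rng.normal(0, 1, n) > 0.5).astype(int)

# SVC uses an SMO-style solver (libsvm) internally
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {acc.mean():.2%}")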
Rainfall Induced Landslides in Puerto Rico (Invited)
NASA Astrophysics Data System (ADS)
Lepore, C.; Kamal, S.; Arnone, E.; Noto, V.; Shanahan, P.; Bras, R. L.
2009-12-01
Landslides are a major geologic hazard in the United States, typically triggered by rainfall, earthquakes, volcanoes and human activity. Rainfall-induced landslides are the most common type on the island of Puerto Rico, with one or two large events per year. We performed an island-wide determination of static landslide susceptibility and hazard assessment as well as dynamic modeling of rainfall-induced shallow landslides in a particular hydrologic basin. Based on statistical analysis of past landslides, we determined that reliable prediction of the susceptibility to landslides is strongly dependent on the resolution of the digital elevation model (DEM) employed and the reliability of the rainfall data. A distributed hydrology model capable of simulating landslides, tRIBS-VEGGIE, has been implemented for the first time in a humid tropical environment like Puerto Rico. The Mameyes basin, located in the Luquillo Experimental Forest in Puerto Rico, was selected for modeling based on the availability of soil, vegetation, topographical, meteorological and historic landslide data. Application of the model yields a temporal and spatial distribution of predicted rainfall-induced landslides, which is used to predict the dynamic susceptibility of the basin to landslides.
Artificialized land characteristics and sediment connectivity explain muddy flood hazard in Wallonia
NASA Astrophysics Data System (ADS)
de Walque, Baptiste; Bielders, Charles; Degré, Aurore; Maugnard, Alexandre
2017-04-01
Muddy flood occurrence is an off-site erosion problem of growing interest in Europe, and in particular in the loess belt and Condroz regions of Wallonia (Belgium). In order to assess the probability of occurrence of muddy floods at specific places, a muddy flood hazard prediction model has been built. It was used to test 11 different explanatory variables in simple and multiple logistic regression approaches. A database of 442 muddy flood-affected sites and an equal number of homologous non-flooded sites was used. For each site, the relief, land use, sediment production and sediment connectivity of the contributing area were extracted. To assess the prediction quality of the model, we performed a validation using 48 new pairs of homologous sites. Based on the Akaike Information Criterion (AIC), we determined that the best muddy flood hazard assessment model requires a total of 6 explanatory variables as inputs: the spatial aggregation of the artificialized land, the sediment connectivity, the proximity of the artificialized land to the outlet, the proportion of artificialized land, the mean slope and the Gravelius index of compactness of the contributing area. The artificialized land properties listed above were shown to substantially improve the model quality (p-values from 10e-10 to 10e-4). All three of these properties were negatively correlated with the muddy flood hazard. These results highlight the importance of considering artificialized land characteristics in sediment transport assessment models. Indeed, artificialized land such as roads may dramatically deviate flows and influence connectivity in the landscape. Besides the artificialized land properties, the sediment connectivity showed significant explanatory power (p-value of 10e-11). A positive correlation between the sediment connectivity and the muddy flood hazard was found, ranging from 0.3 to 0.45 depending on the sediment connectivity index. Several studies have already highlighted the importance of this parameter in characterizing sediment transport in the landscape. Using the best muddy flood probability-of-occurrence threshold value of 0.49, the validation of the best multiple logistic regression resulted in a prediction quality of 75.6% (original dataset) and 81.2% (secondary dataset). The developed statistical model could be used as a reliable tool to target muddy flood mitigation measures at the sites with the highest muddy flood hazard.
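A compact sketch of the kind of workflow described above: fit candidate multiple logistic regressions for muddy-flood occurrence, compare them by AIC, and classify sites with a probability-of-occurrence threshold (0.49 in the study). The predictor names, coefficients and data are synthetic placeholders, not the Walloon database.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(10)
n = 884  # 442 flooded + 442 homologous non-flooded sites
df = pd.DataFrame({
    "connectivity": rng.normal(0, 1, n),
    "mean_slope": rng.normal(0, 1, n),
    "artif_proportion": rng.normal(0, 1, n),
    "artif_aggregation": rng.normal(0, 1, n),
})
logit = 1.2 * df["connectivity"] + 0.6 * df["mean_slope"] - 0.5 * df["artif_aggregation"]
df["flooded"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

def fit(cols):
    X = sm.add_constant(df[cols])
    return sm.Logit(df["flooded"], X).fit(disp=0)

m_small = fit(["connectivity", "mean_slope"])
m_full = fit(["connectivity", "mean_slope", "artif_proportion", "artif_aggregation"])
print("AIC small:", round(m_small.aic, 1), " AIC full:", round(m_full.aic, 1))

# Classify with a probability-of-occurrence threshold (0.49 in the study)
best = m_full if m_full.aic < m_small.aic else m_small
pred = best.predict(sm.add_constant(df[best.params.index.drop("const")])) > 0.49
print("prediction quality:", round((pred == df["flooded"]).mean() * 100, 1), "%")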
Validating the Proton Prediction System (PPS)
2006-12-01
...hazards for astronauts on the missions to the Moon and Mars... events limited the useful PPS test cases to 78 of the 101 solar flares. Although they can be serious radiation hazards (Reames, 1999), PPS does not predict the E>10 MeV peaks often seen during the... The proton fluence model (Feynman et al., 2002) fits observed SEP event fluences of E>10 MeV... J(E>10 MeV) = 347 x (Fx)^0.941, (3) where Fx is the GOES 1-8 A X-ray flare half-power fluence in J cm^-2.
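For orientation only, here is the fluence relation as it appears in the (partially garbled) excerpt above, wrapped in a small function; the constant 347 and exponent 0.941 are taken directly from the reconstructed Eq. (3) and the units follow the excerpt, so treat the snippet as a sketch rather than a validated PPS implementation.

```python
def pps_event_fluence(fx):
    """J(E>10 MeV) = 347 * Fx**0.941 (Eq. 3 in the excerpt), where Fx is the
    GOES 1-8 A X-ray flare half-power fluence in J cm^-2."""
    return 347.0 * fx ** 0.941

# Example: a hypothetical flare with Fx = 1e-2 J cm^-2.
print(pps_event_fluence(1e-2))
```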
Abdominal Circumference Versus Body Mass Index as Predictors of Lower Extremity Overuse Injury Risk.
Nye, Nathaniel S; Kafer, Drew S; Olsen, Cara; Carnahan, David H; Crawford, Paul F
2018-02-01
Abdominal circumference (AC) is superior to body mass index (BMI) as a measure of risk for various health outcomes. Our objective was to compare AC and BMI as predictors of lower extremity overuse injury (LEOI) risk. We conducted a retrospective review of electronic medical records of 79,868 US Air Force personnel over a 7-year period (2005-2011) for incidence of new LEOI. Subjects were stratified by BMI and AC. Injury risk for BMI/AC subgroups was calculated using Kaplan-Meier curves and Cox proportional-hazards regression. Receiver operating characteristic curves with area under the curve were used to compare each model's predictive value. Cox proportional-hazards regression showed significant risk associations between elevated BMI, AC, and all injury types, with hazard ratios ranging from 1.230 to 3.415 for obese versus normal BMI and from 1.665 to 3.893 for high-risk versus low-risk AC (P < .05 for all measures). Receiver operating characteristic curves with area under the curve showed equivalent performance between BMI and AC for predicting all injury types. However, the combined model (AC and BMI) showed improved predictive ability over either model alone for joint injury, overall LEOI, and most strongly for osteoarthritis. Although AC and BMI alone performed similarly well, a combined approach using BMI and AC together improved risk estimation for LEOI.
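A minimal sketch of the kind of Cox proportional-hazards analysis described (not the study's code): it assumes a lifelines-style analysis table with hypothetical column names and random data, and reports hazard ratios analogous to those quoted for obese-versus-normal BMI and high-versus-low-risk AC.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 1000
# Hypothetical analysis table: follow-up time (years), injury indicator, and
# baseline risk-group flags derived from BMI and abdominal circumference (AC).
df = pd.DataFrame({
    "time_yrs": np.clip(rng.exponential(3.0, n), 0.01, 7.0),
    "injury": rng.integers(0, 2, n),
    "bmi_obese": rng.integers(0, 2, n),
    "ac_high_risk": rng.integers(0, 2, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_yrs", event_col="injury")
print(cph.hazard_ratios_)   # exp(coef), comparable to the reported hazard ratios
```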
EVALUATION OF UNSATURATED/VADOSE ZONE MODELS FOR SUPERFUND SITES
Mathematical models of water and chemical movement in soils are being used as decision aids for defining groundwater protection practices for Superfund sites. Numerous transport models exist for predicting movement and degradation of hazardous chemicals through soils. Many of thes...
Multiscale modeling and simulation of embryogenesis for in silico predictive toxicology (WC9)
Translating big data from alternative and HTS platforms into hazard identification and risk assessment is an important need for predictive toxicology and for elucidating adverse outcome pathways (AOPs) in developmental toxicity. Understanding how chemical disruption of molecular ...
Ensemble of ground subsidence hazard maps using fuzzy logic
NASA Astrophysics Data System (ADS)
Park, Inhye; Lee, Jiyeong; Saro, Lee
2014-06-01
Hazard maps of ground subsidence around abandoned underground coal mines (AUCMs) in Samcheok, Korea, were constructed using fuzzy ensemble techniques and a geographical information system (GIS). To evaluate the factors related to ground subsidence, a spatial database was constructed from topographic, geologic, mine tunnel, land use, groundwater, and ground subsidence maps. Spatial data, topography, geology, and various ground-engineering data for the subsidence area were collected and compiled in a database for mapping ground-subsidence hazard (GSH). The subsidence area was randomly split 70/30 for training and validation of the models. The relationships between the detected ground-subsidence area and the factors were identified and quantified by frequency ratio (FR), logistic regression (LR) and artificial neural network (ANN) models. The relationships were used as factor ratings in the overlay analysis to create ground-subsidence hazard indexes and maps. The three GSH maps were then used as new input factors and integrated using fuzzy-ensemble methods to make better hazard maps. All of the hazard maps were validated by comparison with known subsidence areas that were not used directly in the analysis. As a result, the ensemble model was found to be more effective in terms of prediction accuracy than the individual models.
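As a rough illustration of how three hazard-index rasters can be combined with fuzzy operators (the exact operators used in the paper are not specified here), the sketch below applies the fuzzy OR, fuzzy AND, and fuzzy gamma operators to hypothetical FR, LR, and ANN index maps rescaled to [0, 1].

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical ground-subsidence hazard index maps (rescaled to [0, 1]).
fr, lr, ann = (rng.random((100, 100)) for _ in range(3))
stack = np.stack([fr, lr, ann])

fuzzy_or = stack.max(axis=0)                     # optimistic combination
fuzzy_and = stack.min(axis=0)                    # conservative combination
algebraic_sum = 1.0 - np.prod(1.0 - stack, axis=0)
algebraic_product = np.prod(stack, axis=0)
gamma = 0.9                                      # illustrative gamma value
fuzzy_gamma = algebraic_sum ** gamma * algebraic_product ** (1.0 - gamma)
print(fuzzy_or.mean(), fuzzy_and.mean(), fuzzy_gamma.mean())
```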
A high-resolution global flood hazard model
NASA Astrophysics Data System (ADS)
Sampson, Christopher C.; Smith, Andrew M.; Bates, Paul B.; Neal, Jeffrey C.; Alfieri, Lorenzo; Freer, Jim E.
2015-09-01
Floods are a natural hazard that affect communities worldwide, but to date the vast majority of flood hazard research and mapping has been undertaken by wealthy developed nations. As populations and economies have grown across the developing world, so too has demand from governments, businesses, and NGOs for modeled flood hazard data in these data-scarce regions. We identify six key challenges faced when developing a flood hazard model that can be applied globally and present a framework methodology that leverages recent cross-disciplinary advances to tackle each challenge. The model produces return period flood hazard maps at ˜90 m resolution for the whole terrestrial land surface between 56°S and 60°N, and results are validated against high-resolution government flood hazard data sets from the UK and Canada. The global model is shown to capture between two thirds and three quarters of the area determined to be at risk in the benchmark data without generating excessive false positive predictions. When aggregated to ˜1 km, mean absolute error in flooded fraction falls to ˜5%. The full complexity global model contains an automatically parameterized subgrid channel network, and comparison to both a simplified 2-D only variant and an independently developed pan-European model shows the explicit inclusion of channels to be a critical contributor to improved model performance. While careful processing of existing global terrain data sets enables reasonable model performance in urban areas, adoption of forthcoming next-generation global terrain data sets will offer the best prospect for a step-change improvement in model performance.
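The validation against national benchmark maps can be pictured with a simple binary-raster skill computation; the sketch below (not the authors' code) counts hits, misses, and false alarms and returns the fraction of benchmark flood area captured plus the critical success index.

```python
import numpy as np

def flood_skill(model_extent, benchmark_extent):
    """Skill of a modeled flood extent against a benchmark map (both boolean rasters)."""
    m = np.asarray(model_extent, dtype=bool)
    b = np.asarray(benchmark_extent, dtype=bool)
    hits = np.sum(m & b)
    misses = np.sum(~m & b)
    false_alarms = np.sum(m & ~b)
    hit_rate = hits / (hits + misses)                  # share of benchmark area captured
    csi = hits / (hits + misses + false_alarms)        # penalizes false positives too
    return hit_rate, csi

# Toy example with random cells standing in for aggregated flood maps.
rng = np.random.default_rng(0)
print(flood_skill(rng.random((50, 50)) > 0.7, rng.random((50, 50)) > 0.7))
```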
Debris flow hazards mitigation--Mechanics, prediction, and assessment
Chen, C.-L.; Major, J.J.
2007-01-01
These proceedings contain papers presented at the Fourth International Conference on Debris-Flow Hazards Mitigation: Mechanics, Prediction, and Assessment held in Chengdu, China, September 10-13, 2007. The papers cover a wide range of topics on debris-flow science and engineering, including the factors triggering debris flows, geomorphic effects, mechanics of debris flows (e.g., rheology, fluvial mechanisms, erosion and deposition processes), numerical modeling, various debris-flow experiments, landslide-induced debris flows, assessment of debris-flow hazards and risk, field observations and measurements, monitoring and alert systems, structural and non-structural countermeasures against debris-flow hazards and case studies. The papers reflect the latest developments and advances in debris-flow research. Several studies discuss the development and application of Geographic Information System (GIS) and Remote Sensing (RS) technologies in debris-flow hazard/risk assessment. Timely topics presented in a few papers also include the development of new or innovative techniques for debris-flow monitoring and alert systems, especially an infrasound acoustic sensor for detecting debris flows. Many case studies illustrate a wide variety of debris-flow hazards and related phenomena as well as their hazardous effects on human activities and settlements.
NASA Technical Reports Server (NTRS)
Hamilton, Douglas; Kramer, Leonard; Mikatarian, Ron; Polk, James; Duncan, Michael; Koontz, Steven
2010-01-01
The models predict that, for low-voltage exposures in the space suit, physiologically active current could be conducted across the crew member, causing catastrophic hazards. Future work with the Naval Health Research Center Detachment Directed Energy Bio-effects Laboratory is being proposed to analyze additional current paths across the human torso and upper limbs. These models may need to be verified with human studies.
Playing against nature: improving earthquake hazard mitigation
NASA Astrophysics Data System (ADS)
Stein, S. A.; Stein, J.
2012-12-01
The great 2011 Tohoku earthquake dramatically demonstrated the need to improve earthquake and tsunami hazard assessment and mitigation policies. The earthquake was much larger than predicted by hazard models, and the resulting tsunami overtopped coastal defenses, causing more than 15,000 deaths and $210 billion damage. Hence if and how such defenses should be rebuilt is a challenging question, because the defenses fared poorly and building ones to withstand tsunamis as large as March's is too expensive. A similar issue arises along the Nankai Trough to the south, where new estimates warning of tsunamis 2-5 times higher than in previous models raise the question of what to do, given that the timescale on which such events may occur is unknown. Thus in the words of economist H. Hori, "What should we do in face of uncertainty? Some say we should spend our resources on present problems instead of wasting them on things whose results are uncertain. Others say we should prepare for future unknown disasters precisely because they are uncertain". Thus society needs strategies to mitigate earthquake and tsunami hazards that make economic and societal sense, given that our ability to assess these hazards is poor, as illustrated by highly destructive earthquakes that often occur in areas predicted by hazard maps to be relatively safe. Conceptually, we are playing a game against nature "of which we still don't know all the rules" (Lomnitz, 1989). Nature chooses tsunami heights or ground shaking, and society selects the strategy to minimize the total costs of damage plus mitigation costs. As in any game of chance, we maximize our expectation value by selecting the best strategy, given our limited ability to estimate the occurrence and effects of future events. We thus outline a framework to find the optimal level of mitigation by balancing its cost against the expected damages, recognizing the uncertainties in the hazard estimates. This framework illustrates the role of the uncertainties and the need to candidly assess them. It can be applied to exploring policies under various hazard scenarios and mitigating other natural hazards. Variation in total cost, the sum of expected loss and mitigation cost, as a function of mitigation level shows that the optimal level of mitigation, n*, minimizes the total cost. The expected loss depends on the hazard model, so the better the hazard model, the better the mitigation policy (Stein and Stein, 2012).
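The optimization idea, choosing the mitigation level n that minimizes expected loss plus mitigation cost, can be sketched numerically; the cost curves below are made up for illustration and are not taken from Stein and Stein (2012).

```python
import numpy as np
from scipy.optimize import minimize_scalar

def mitigation_cost(n):
    return 50.0 * n                      # illustrative: cost grows with defense level

def expected_loss(n):
    return 400.0 * np.exp(-0.8 * n)      # illustrative: expected loss falls as defenses rise

total_cost = lambda n: mitigation_cost(n) + expected_loss(n)
res = minimize_scalar(total_cost, bounds=(0.0, 10.0), method="bounded")
print("optimal mitigation level n* ~", round(res.x, 2),
      "with total cost ~", round(res.fun, 1))
```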
Model predictions of wind and turbulence profiles associated with an ensemble of aircraft accidents
NASA Technical Reports Server (NTRS)
Williamson, G. G.; Lewellen, W. S.; Teske, M. E.
1977-01-01
The feasibility of predicting conditions under which wind/turbulence environments hazardous to aviation operations exist is studied by examining a number of different accidents in detail. A model of turbulent flow in the atmospheric boundary layer is used to reconstruct wind and turbulence profiles which may have existed at low altitudes at the time of the accidents. The predictions are consistent with available flight recorder data, but neither the input boundary conditions nor the flight recorder observations are sufficiently precise for these studies to be interpreted as verification tests of the model predictions.
National assessment of hurricane-induced coastal erosion hazards--Gulf of Mexico
Stockdon, Hilary F.; Doran, Kara S.; Thompson, David M.; Sopkin, Kristin L.; Plant, Nathaniel G.; Sallenger, Asbury H.
2012-01-01
Sandy beaches provide a natural barrier between the ocean and inland communities, ecosystems, and resources. However, these dynamic environments move and change in response to winds, waves, and currents. During a hurricane, these changes can be large and sometimes catastrophic. High waves and storm surge act together to erode beaches and inundate low-lying lands, putting inland communities at risk. A decade of USGS research on storm-driven coastal change hazards has provided the data and modeling capabilities to identify areas of our coastline that are likely to experience extreme and potentially hazardous erosion during a hurricane. This report defines hurricane-induced coastal erosion hazards for sandy beaches along the U.S. Gulf of Mexico coastline. The analysis is based on a storm-impact scaling model that uses observations of beach morphology combined with sophisticated hydrodynamic models to predict how the coast will respond to the direct landfall of category 1-5 hurricanes. Hurricane-induced water levels, due to both surge and waves, are compared to beach and dune elevations to determine the probabilities of three types of coastal change: collision (dune erosion), overwash, and inundation. As new beach morphology observations and storm predictions become available, this analysis will be updated to describe how coastal vulnerability to storms will vary in the future.
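The storm-impact scaling logic, comparing storm-induced water levels with dune elevations to assign a coastal-change regime, can be sketched as below; the thresholds follow the general swash/collision/overwash/inundation framework rather than the report's exact implementation, and the variable names are hypothetical.

```python
def storm_impact_regime(r_high, r_low, dune_toe, dune_crest):
    """Classify expected coastal change from water levels vs. dune elevations.

    r_high: elevation of surge plus wave runup; r_low: lower bound of swash;
    dune_toe / dune_crest: beach morphology elevations (same vertical datum).
    """
    if r_low > dune_crest:
        return "inundation"
    if r_high > dune_crest:
        return "overwash"
    if r_high > dune_toe:
        return "collision (dune erosion)"
    return "swash"

# Example with made-up elevations in meters.
print(storm_impact_regime(r_high=3.2, r_low=1.1, dune_toe=2.0, dune_crest=4.0))
```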
Rusyn, Ivan; Sedykh, Alexander; Guyton, Kathryn Z.; Tropsha, Alexander
2012-01-01
Quantitative structure-activity relationship (QSAR) models are widely used for in silico prediction of in vivo toxicity of drug candidates or environmental chemicals, adding value to candidate selection in drug development or in a search for less hazardous and more sustainable alternatives for chemicals in commerce. The development of traditional QSAR models is enabled by numerical descriptors representing the inherent chemical properties that can be easily defined for any number of molecules; however, traditional QSAR models often have limited predictive power due to the lack of data and complexity of in vivo endpoints. Although it has been indeed difficult to obtain experimentally derived toxicity data on a large number of chemicals in the past, the results of quantitative in vitro screening of thousands of environmental chemicals in hundreds of experimental systems are now available and continue to accumulate. In addition, publicly accessible toxicogenomics data collected on hundreds of chemicals provide another dimension of molecular information that is potentially useful for predictive toxicity modeling. These new characteristics of molecular bioactivity arising from short-term biological assays, i.e., in vitro screening and/or in vivo toxicogenomics data can now be exploited in combination with chemical structural information to generate hybrid QSAR–like quantitative models to predict human toxicity and carcinogenicity. Using several case studies, we illustrate the benefits of a hybrid modeling approach, namely improvements in the accuracy of models, enhanced interpretation of the most predictive features, and expanded applicability domain for wider chemical space coverage. PMID:22387746
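A toy version of the hybrid idea, concatenating chemical descriptors with in vitro bioactivity features before fitting a single classifier, is sketched below with random data and a random forest; it illustrates the modeling pattern, not the case studies themselves.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_chem = 300
chem_descriptors = rng.normal(size=(n_chem, 50))   # structure-based descriptors
invitro_profiles = rng.normal(size=(n_chem, 20))   # HTS / toxicogenomics features
toxic = rng.integers(0, 2, n_chem)                 # in vivo toxicity label

X_hybrid = np.hstack([chem_descriptors, invitro_profiles])
clf = RandomForestClassifier(n_estimators=300, random_state=0)
auc = cross_val_score(clf, X_hybrid, toxic, scoring="roc_auc", cv=5).mean()
print("cross-validated AUC of the hybrid model:", round(auc, 3))
```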
Shek, L L; Godolphin, W
1988-10-01
The independent prognostic effects of certain clinical and pathological variables measured at the time of primary diagnosis were assessed with Cox multivariate regression analysis. The 859 patients with primary breast cancer, on which the proportional hazards model was based, had a median follow-up of 60 months. Axillary nodal status (categorized as N0, N1-3 or N4+) was the most significant and independent factor in overall survival, but inclusion of TNM stage, estrogen receptor (ER) concentration and tumor necrosis significantly improved survival predictions. Predictions made with the model showed striking subset survival differences within stage: 5-year survival from 36% (N4+, loge[ER] = 0, marked necrosis) to 96% (N0, loge[ER] = 6, no necrosis) in TNM I, and from 0 to 70% for the same categories in TNM IV. Results of the model were used to classify patients into four distinct risk groups according to a derived hazard index. An 8-fold variation in survival was seen between the highest (greater than 3) and lowest (less than 1) hazard index values. Each hazard index level included patients with varied combinations of the above factors, but could be considered to denote the same degree of risk of breast cancer mortality. A model with ER concentration, nodal status, and tumor necrosis was found to best predict survival after disease recurrence in 369 patients, thus confirming the enduring biological significance of these factors.
Earthquake hazard assessment in the Zagros Orogenic Belt of Iran using a fuzzy rule-based model
NASA Astrophysics Data System (ADS)
Farahi Ghasre Aboonasr, Sedigheh; Zamani, Ahmad; Razavipour, Fatemeh; Boostani, Reza
2017-08-01
Producing an accurate seismic hazard map and predicting hazardous areas is necessary for risk mitigation strategies. In this paper, a fuzzy logic inference system is utilized to estimate the earthquake potential and seismic zoning of the Zagros Orogenic Belt. In addition to their interpretability, fuzzy predictors can capture both nonlinearity and chaotic behavior of data when the amount of data is limited. The earthquake pattern in the Zagros has been assessed for intervals of 10 and 50 years using a fuzzy rule-based model. The Molchan statistical procedure has been used to show that the forecasting model is reliable. The earthquake hazard maps for this area reveal some remarkable features that cannot be observed on conventional maps. In particular, some areas in the southern (Bandar Abbas), southwestern (Bandar Kangan) and western (Kermanshah) parts of Iran display high earthquake severity even though they are geographically far apart.
Hashida, Masahiro; Kamezaki, Ryousuke; Goto, Makoto; Shiraishi, Junji
2017-03-01
We quantified the ability to predict hazards in scenes of a general X-ray examination room created for Kiken-Yochi training (KYT) by use of free-response receiver-operating characteristic (FROC) analysis, to determine whether the total number of years of clinical experience, involvement in general X-ray examinations, occupation, and training each have an impact on hazard prediction ability. Twenty-three radiological technologists (RTs) (years of experience: 2-28), four nurses (years of experience: 15-19), and six RT students observed 53 scenes of KYT: 26 scenes with hazardous points (hazardous points are those that might cause injury to patients) and 27 scenes without such points. Based on the results of these observations, we calculated the alternative free-response receiver-operating characteristic (AFROC) curve and the figure of merit (FOM) to quantify the hazard prediction ability. The results showed that the total number of years of clinical experience did not have any impact on hazard prediction ability, whereas recent experience with general X-ray examinations greatly influenced this ability. In addition, the hazard prediction ability varied depending on the occupations of the observers while they were observing the same scenes in KYT. The hazard prediction ability of the radiologic technology students was improved after they had undergone patient safety training. This proposed method with an FROC observer study enabled the quantification and evaluation of the hazard prediction capability, and the application of this approach to clinical practice may help to ensure the safety of examinations and treatment in the radiology department.
A high‐resolution global flood hazard model†
Smith, Andrew M.; Bates, Paul D.; Neal, Jeffrey C.; Alfieri, Lorenzo; Freer, Jim E.
2015-01-01
Abstract Floods are a natural hazard that affect communities worldwide, but to date the vast majority of flood hazard research and mapping has been undertaken by wealthy developed nations. As populations and economies have grown across the developing world, so too has demand from governments, businesses, and NGOs for modeled flood hazard data in these data‐scarce regions. We identify six key challenges faced when developing a flood hazard model that can be applied globally and present a framework methodology that leverages recent cross‐disciplinary advances to tackle each challenge. The model produces return period flood hazard maps at ∼90 m resolution for the whole terrestrial land surface between 56°S and 60°N, and results are validated against high‐resolution government flood hazard data sets from the UK and Canada. The global model is shown to capture between two thirds and three quarters of the area determined to be at risk in the benchmark data without generating excessive false positive predictions. When aggregated to ∼1 km, mean absolute error in flooded fraction falls to ∼5%. The full complexity global model contains an automatically parameterized subgrid channel network, and comparison to both a simplified 2‐D only variant and an independently developed pan‐European model shows the explicit inclusion of channels to be a critical contributor to improved model performance. While careful processing of existing global terrain data sets enables reasonable model performance in urban areas, adoption of forthcoming next‐generation global terrain data sets will offer the best prospect for a step‐change improvement in model performance. PMID:27594719
NASA Astrophysics Data System (ADS)
Sanchez, E. Y.; Colman Lerner, J. E.; Porta, A.; Jacovkis, P. M.
2013-01-01
The adverse health effects of the release of hazardous substances into the atmosphere continue to be a matter of concern, especially in densely populated urban regions. Emergency responders need to have estimates of these adverse health effects in the local population to aid planning, emergency response, and recovery efforts. For this purpose, models that predict the transport and dispersion of hazardous materials are as necessary as those that estimate the adverse health effects in the population. In this paper, we present the results obtained by coupling a Computational Fluid Dynamics model, FLACS (FLame ACceleration Simulator), with an exposure model, DDC (Damage Differential Coupling). This coupled model system is applied to a hypothetical chlorine release scenario with obstacles, such as buildings, and the results show how it is capable of predicting the atmospheric dispersion of hazardous chemicals, and the adverse health effects in the exposed population, to support decision makers both in charge of emergency planning and in charge of real-time response. The results show that knowing the influence of obstacles on the trajectory of the toxic cloud and on the diffusion of the transported pollutants, and obtaining dynamic information on the potentially affected population and associated symptoms, contribute to improving the planning of protection and response measures.
The influence of hazard models on GIS-based regional risk assessments and mitigation policies
Bernknopf, R.L.; Rabinovici, S.J.M.; Wood, N.J.; Dinitz, L.B.
2006-01-01
Geographic information systems (GIS) are important tools for understanding and communicating the spatial distribution of risks associated with natural hazards in regional economies. We present a GIS-based decision support system (DSS) for assessing community vulnerability to natural hazards and evaluating potential mitigation policy outcomes. The Land Use Portfolio Modeler (LUPM) integrates earth science and socioeconomic information to predict the economic impacts of loss-reduction strategies. However, the potential use of such systems in decision making may be limited when multiple but conflicting interpretations of the hazard are available. To explore this problem, we conduct a policy comparison using the LUPM to test the sensitivity of three available assessments of earthquake-induced lateral-spread ground failure susceptibility in a coastal California community. We find that the uncertainty regarding the interpretation of the science inputs can influence the development and implementation of natural hazard management policies. Copyright ?? 2006 Inderscience Enterprises Ltd.
NASA Astrophysics Data System (ADS)
Blauhut, V.; Stahl, K.; Stagge, J. H.; Tallaksen, L. M.; De Stefano, L.; Vogt, J.
2015-12-01
Drought is one of the most costly natural hazards in Europe. Due to its complexity, drought risk, the combination of the natural hazard and societal vulnerability, is difficult to define and challenging to detect and predict, as the impacts of drought are very diverse, covering the breadth of socioeconomic and environmental systems. Pan-European maps of drought risk could inform the elaboration of guidelines and policies to address its documented severity and impact across borders. This work (1) tests the capability of commonly applied hazard indicators and vulnerability factors to predict annual drought impact occurrence for different sectors and macro regions in Europe and (2) combines information on past drought impacts, drought hazard indicators, and vulnerability factors into estimates of drought risk at the pan-European scale. This "hybrid approach" bridges the gap between traditional vulnerability assessment and probabilistic impact forecast in a statistical modelling framework. Multivariable logistic regression was applied to predict the likelihood of impact occurrence on an annual basis for particular impact categories and European macro regions. The results indicate sector- and macro-region-specific sensitivities of hazard indicators, with the Standardised Precipitation Evapotranspiration Index for a twelve month aggregation period (SPEI-12) as the overall best hazard predictor. Vulnerability factors have only limited ability to predict drought impacts as single predictors, with information about land use and water resources as the best vulnerability-based predictors. (3) The application of the "hybrid approach" revealed strong regional (NUTS combo level) and sector-specific differences in drought risk across Europe. The majority of best predictor combinations rely on a combination of SPEI for shorter and longer aggregation periods, and a combination of information on land use and water resources. The added value of integrating regional vulnerability information with drought risk prediction could be demonstrated. Thus, the study contributes to the overall understanding of drivers of drought impacts, current practice of drought indicator selection for specific applications, and drought risk assessment.
Computational Approaches to Chemical Hazard Assessment
Luechtefeld, Thomas; Hartung, Thomas
2018-01-01
Summary Computational prediction of toxicity has reached new heights as a result of decades of growth in the magnitude and diversity of biological data. Public packages for statistics and machine learning make model creation faster. New theory in machine learning and cheminformatics enables integration of chemical structure, toxicogenomics, simulated and physical data in the prediction of chemical health hazards, and other toxicological information. Our earlier publications have characterized a toxicological dataset of unprecedented scale resulting from the European REACH legislation (Registration Evaluation Authorisation and Restriction of Chemicals). These publications dove into potential use cases for regulatory data and some models for exploiting this data. This article analyzes the options for the identification and categorization of chemicals, moves on to the derivation of descriptive features for chemicals, discusses different kinds of targets modeled in computational toxicology, and ends with a high-level perspective of the algorithms used to create computational toxicology models. PMID:29101769
NASA Technical Reports Server (NTRS)
Stewart, R. B.; Grose, W. L.
1975-01-01
Parametric studies were made with a multilayer atmospheric diffusion model to place quantitative limits on the uncertainty of predicting ground-level toxic rocket-fuel concentrations. Exhaust distributions in the ground cloud, cloud stabilized geometry, atmospheric coefficients, the effects of exhaust plume afterburning of carbon monoxide CO, assumed surface mixing-layer division in the model, and model sensitivity to different meteorological regimes were studied. Large-scale differences in ground-level predictions are quantitatively described. Cloud alongwind growth for several meteorological conditions is shown to be in error because of incorrect application of previous diffusion theory. In addition, rocket-plume calculations indicate that almost all of the rocket-motor carbon monoxide is afterburned to carbon dioxide CO2, thus reducing toxic hazards due to CO. The afterburning is also shown to have a significant effect on cloud stabilization height and on ground-level concentrations of exhaust products.
A Model For Rapid Estimation of Economic Loss
NASA Astrophysics Data System (ADS)
Holliday, J. R.; Rundle, J. B.
2012-12-01
One of the loftier goals in seismic hazard analysis is the creation of an end-to-end earthquake prediction system: a "rupture to rafters" work flow that takes a prediction of fault rupture, propagates it with a ground shaking model, and outputs a damage or loss profile at a given location. So far, the initial prediction of an earthquake rupture (either as a point source or a fault system) has proven to be the most difficult and least solved step in this chain. However, this may soon change. The Collaboratory for the Study of Earthquake Predictability (CSEP) has amassed a suite of earthquake source models for assorted testing regions worldwide. These models are capable of providing rate-based forecasts for earthquake (point) sources over a range of time horizons. Furthermore, these rate forecasts can be easily refined into probabilistic source forecasts. While it's still difficult to fully assess the "goodness" of each of these models, progress is being made: new evaluation procedures are being devised and earthquake statistics continue to accumulate. The scientific community appears to be heading towards a better understanding of rupture predictability. Ground shaking mechanics are better understood, and many different sophisticated models exist. While these models tend to be computationally expensive and often regionally specific, they do a good job at matching empirical data. It is perhaps time to start addressing the third step in the seismic hazard prediction system. We present a model for rapid economic loss estimation using ground motion (PGA or PGV) and socioeconomic measures as its input. We show that the model can be calibrated on a global scale and applied worldwide. We also suggest how the model can be improved and generalized to non-seismic natural disasters such as hurricane and severe wind storms.
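The proposed loss model is described only at a high level, so the sketch below simply illustrates the regression pattern it implies: log economic loss regressed on log ground motion (PGA) and a log socioeconomic exposure proxy, with synthetic data and hypothetical variable names.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
pga = rng.uniform(0.05, 1.0, n)                    # peak ground acceleration (g)
exposure = rng.lognormal(10.0, 1.0, n)             # socioeconomic exposure proxy
loss = 1e-3 * exposure * pga**2 * rng.lognormal(0.0, 0.3, n)   # synthetic losses

X = np.column_stack([np.log(pga), np.log(exposure)])
model = LinearRegression().fit(X, np.log(loss))
rapid_estimate = np.exp(model.predict(X))          # rapid loss estimate per location
print("fitted exponents (PGA, exposure):", np.round(model.coef_, 2))
```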
2002-03-01
source term. Several publications provided a thorough accounting of the accident, including “Chernobyl Record” [Mould] and the NRC technical report... “Report on the Accident at the Chernobyl Nuclear Power Station” [NUREG-1250]. The most comprehensive study of transport models to predict the... from the Chernobyl Accident: The ATMES Report” [Klug, et al.]. The Atmospheric Transport Model Evaluation Study (ATMES) report used data
Probabilistic Seismic Hazard Maps for Ecuador
NASA Astrophysics Data System (ADS)
Mariniere, J.; Beauval, C.; Yepes, H. A.; Laurence, A.; Nocquet, J. M.; Alvarado, A. P.; Baize, S.; Aguilar, J.; Singaucho, J. C.; Jomard, H.
2017-12-01
A probabilistic seismic hazard study is conducted for Ecuador, a country facing high seismic hazard, both from megathrust subduction earthquakes and from shallow crustal moderate to large earthquakes. Building on the knowledge produced in the last years in historical seismicity, earthquake catalogs, active tectonics, geodynamics, and geodesy, several alternative earthquake recurrence models are developed. An area source model is first proposed, based on the seismogenic crustal and inslab sources defined in Yepes et al. (2016). A slightly different segmentation is proposed for the subduction interface, with respect to Yepes et al. (2016). Three earthquake catalogs are used to account for the numerous uncertainties in the modeling of frequency-magnitude distributions. The hazard maps obtained highlight several source zones enclosing fault systems that exhibit low seismic activity, not representative of the geological and/or geodetical slip rates. Consequently, a fault model is derived, including faults with an earthquake recurrence model inferred from geological and/or geodetical slip rate estimates. The geodetical slip rates on the set of simplified faults are estimated from a GPS horizontal velocity field (Nocquet et al. 2014). Assumptions on the aseismic component of the deformation are required. Combining these alternative earthquake models in a logic tree, and using a set of selected ground-motion prediction equations adapted to Ecuador's different tectonic contexts, a mean hazard map is obtained. Hazard maps corresponding to the 16th and 84th percentiles are also derived, highlighting the zones where uncertainties on the hazard are highest.
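The logic-tree combination of alternative earthquake recurrence models can be pictured as a weighted average of hazard curves; the sketch below uses two made-up annual exceedance-rate curves and weights, and converts the mean curve to a 475-year (10% in 50 years) PGA under the usual Poisson assumption.

```python
import numpy as np

pga = np.linspace(0.05, 1.0, 40)                    # ground-motion levels (g)
# Illustrative annual exceedance rates from two alternative source models.
rate_area_model = 1.0e-2 * np.exp(-5.0 * pga)
rate_fault_model = 2.0e-2 * np.exp(-6.0 * pga)
weights = np.array([0.5, 0.5])                      # logic-tree branch weights

mean_rate = weights[0] * rate_area_model + weights[1] * rate_fault_model
prob_50yr = 1.0 - np.exp(-mean_rate * 50.0)         # Poisson exceedance probability in 50 years

# PGA with 10% probability of exceedance in 50 years (return period ~475 years).
pga_475 = np.interp(0.10, prob_50yr[::-1], pga[::-1])
print("mean-hazard PGA (10% in 50 yr):", round(float(pga_475), 3), "g")
```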
Volcanic ash melting under conditions relevant to ash turbine interactions
Song, Wenjia; Lavallée, Yan; Hess, Kai-Uwe; Kueppers, Ulrich; Cimarelli, Corrado; Dingwell, Donald B.
2016-01-01
The ingestion of volcanic ash by jet engines is widely recognized as a potentially fatal hazard for aircraft operation. The high temperatures (1,200–2,000 °C) typical of jet engines exacerbate the impact of ash by provoking its melting and sticking to turbine parts. Estimation of this potential hazard is complicated by the fact that chemical composition, which affects the temperature at which volcanic ash becomes liquid, can vary widely amongst volcanoes. Here, based on experiments, we parameterize ash behaviour and develop a model to predict melting and sticking conditions for its global compositional range. The results of our experiments confirm that the common use of sand or dust proxy is wholly inadequate for the prediction of the behaviour of volcanic ash, leading to overestimates of sticking temperature and thus severe underestimates of the thermal hazard. Our model can be used to assess the deposition probability of volcanic ash in jet engines. PMID:26931824
Close, Rebecca; Watts, Michael J.; Ander, E. Louise; Smedley, Pauline L.; Verlander, Neville Q.; Gregory, Martin; Middleton, Daniel R. S.; Polya, David A.; Studden, Mike; Leonardi, Giovanni S.
2017-01-01
Approximately one million people in the UK are served by private water supplies (PWS) where connection to the main municipal water supply system is not practical or where PWS is the preferred option. Chronic exposure to contaminants in PWS may have adverse effects on health. South West England is an area with elevated arsenic concentrations in groundwater and over 9000 domestic dwellings here are supplied by PWS. There remains uncertainty as to the extent of the population exposed to arsenic (As), and the factors predicting such exposure. We describe a hazard assessment model based on simplified geology with the potential to predict exposure to As in PWS. Households with a recorded PWS in Cornwall were recruited to take part in a water sampling programme from 2011 to 2013. Bedrock geologies were aggregated and classified into nine Simplified Bedrock Geological Categories (SBGC), plus a cross-cutting “mineralized” area. PWS were sampled by random selection within SBGCs and some 508 households volunteered for the study. Transformations of the data were explored to estimate the distribution of As concentrations for PWS by SBGC. Using the distribution per SBGC, we predict the proportion of dwellings that would be affected by high concentrations and rank the geologies according to hazard. Within most SBGCs, As concentrations were found to have log-normal distributions. Across these areas, the proportion of dwellings predicted to have drinking water over the prescribed concentration value (PCV) for As ranged from 0% to 20%. From these results, a pilot predictive model was developed to calculate the proportion of PWS above the PCV for As; the resulting hazard ranking supports local decision making and prioritization. With further development and testing, this can help local authorities predict the number of dwellings that might fail the PCV for As, based on bedrock geology. The model presented here for Cornwall could be applied in areas with similar geologies. Application of the method requires independent validation and further groundwater-derived PWS sampling on other geological formations. PMID:29194429
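The per-geology exceedance calculation can be sketched with a log-normal fit: given sampled As concentrations for one SBGC, estimate the distribution and the share of supplies expected to exceed the 10 µg/L PCV. The data below are synthetic and the snippet illustrates the approach, not the study's model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic As concentrations (ug/L) for private water supplies in one SBGC.
conc = rng.lognormal(mean=0.5, sigma=1.2, size=60)

shape, loc, scale = stats.lognorm.fit(conc, floc=0)   # log-normal fit, location fixed at 0
pcv = 10.0                                            # prescribed concentration value for As (ug/L)
p_exceed = stats.lognorm.sf(pcv, shape, loc=loc, scale=scale)
print(f"predicted share of PWS above the PCV: {p_exceed:.1%}")
```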
Validated Risk Score for Predicting 6-Month Mortality in Infective Endocarditis.
Park, Lawrence P; Chu, Vivian H; Peterson, Gail; Skoutelis, Athanasios; Lejko-Zupa, Tatjana; Bouza, Emilio; Tattevin, Pierre; Habib, Gilbert; Tan, Ren; Gonzalez, Javier; Altclas, Javier; Edathodu, Jameela; Fortes, Claudio Querido; Siciliano, Rinaldo Focaccia; Pachirat, Orathai; Kanj, Souha; Wang, Andrew
2016-04-18
Host factors and complications have been associated with higher mortality in infective endocarditis (IE). We sought to develop and validate a model of clinical characteristics to predict 6-month mortality in IE. Using a large multinational prospective registry of definite IE (International Collaboration on Endocarditis [ICE]-Prospective Cohort Study [PCS], 2000-2006, n=4049), a model to predict 6-month survival was developed by Cox proportional hazards modeling with inverse probability weighting for surgery treatment and was internally validated by the bootstrapping method. This model was externally validated in an independent prospective registry (ICE-PLUS, 2008-2012, n=1197). The 6-month mortality was 971 of 4049 (24.0%) in the ICE-PCS cohort and 342 of 1197 (28.6%) in the ICE-PLUS cohort. Surgery during the index hospitalization was performed in 48.1% and 54.0% of the cohorts, respectively. In the derivation model, variables related to host factors (age, dialysis), IE characteristics (prosthetic or nosocomial IE, causative organism, left-sided valve vegetation), and IE complications (severe heart failure, stroke, paravalvular complication, and persistent bacteremia) were independently associated with 6-month mortality, and surgery was associated with a lower risk of mortality (Harrell's C statistic 0.715). In the validation model, these variables had similar hazard ratios (Harrell's C statistic 0.682), with a similar, independent benefit of surgery (hazard ratio 0.74, 95% CI 0.62-0.89). A simplified risk model was developed by weight adjustment of these variables. Six-month mortality after IE is ≈25% and is predicted by host factors, IE characteristics, and IE complications. Surgery during the index hospitalization is associated with lower mortality but is performed less frequently in the highest risk patients. A simplified risk model may be used to identify specific risk subgroups in IE. © 2016 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
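Harrell's C, the discrimination statistic quoted for both cohorts, measures concordance between predicted risk and observed survival; a minimal sketch with synthetic data and the lifelines utility is shown below (higher risk scores should correspond to shorter survival, hence the sign flip).

```python
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
n = 500
followup_days = rng.exponential(180.0, n)     # time to death or censoring
died = rng.integers(0, 2, n)                  # 6-month mortality indicator
risk_score = rng.normal(size=n)               # linear predictor from a Cox model

# Concordance between predicted risk and observed outcomes (Harrell's C).
c = concordance_index(followup_days, -risk_score, died)
print("Harrell's C:", round(c, 3))
```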
Salisbury, Margaret L; Xia, Meng; Zhou, Yueren; Murray, Susan; Tayob, Nabihah; Brown, Kevin K; Wells, Athol U; Schmidt, Shelley L; Martinez, Fernando J; Flaherty, Kevin R
2016-02-01
Idiopathic pulmonary fibrosis is a progressive lung disease with variable course. The Gender-Age-Physiology (GAP) Index and staging system uses clinical variables to stage mortality risk. It is unknown whether clinical staging predicts future decline in pulmonary function. We assessed whether the GAP stage predicts future pulmonary function decline and whether interval pulmonary function change predicts mortality after accounting for stage. Patients with idiopathic pulmonary fibrosis (N = 657) were identified retrospectively at three tertiary referral centers, and baseline GAP stages were assessed. Mixed models were used to describe average trajectories of FVC and diffusing capacity of the lung for carbon monoxide (Dlco). Multivariable Cox proportional hazards models were used to assess whether declines in pulmonary function ≥ 10% in 6 months predict mortality after accounting for GAP stage. Over a 2-year period, GAP stage was not associated with differences in yearly lung function decline. After accounting for stage, a 10% decrease in FVC or Dlco over 6 months independently predicted death or transplantation (FVC hazard ratio, 1.37; Dlco hazard ratio, 1.30; both, P ≤ .03). Patients with GAP stage 2 with declining pulmonary function experienced a survival profile similar to patients with GAP stage 3, with 1-year event-free survival of 59.3% (95% CI, 49.4-67.8) vs 56.9% (95% CI, 42.2-69.1). Baseline GAP stage predicted death or lung transplantation but not the rate of future pulmonary function decline. After accounting for GAP stage, a decline of ≥ 10% over 6 months independently predicted death or lung transplantation. Copyright © 2016 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.
Mechanistic modeling of developmental defects through computational embryology (WC10th)
Abstract: An important consideration for 3Rs is to identify developmental hazards utilizing mechanism-based in vitro assays (e.g., ToxCast) and in silico predictive models. Steady progress has been made with agent-based models that recapitulate morphogenetic drivers for angiogen...
Khalil, Hussein; Olsson, Gert; Magnusson, Magnus; Evander, Magnus; Hörnfeldt, Birger; Ecke, Frauke
2017-07-26
To predict the risk of infectious diseases originating in wildlife, it is important to identify habitats that allow the co-occurrence of pathogens and their hosts. Puumala hantavirus (PUUV) is a directly-transmitted RNA virus that causes hemorrhagic fever in humans, and is carried and transmitted by the bank vole (Myodes glareolus). In northern Sweden, bank voles undergo 3-4 year population cycles, during which their spatial distribution varies greatly. We used boosted regression trees, a technique inspired by machine learning, on a 10-year time series (fall 2003-2013) to develop a spatial predictive model assessing seasonal PUUV hazard using micro-habitat variables in a landscape heavily modified by forestry. We validated the models in an independent study area approx. 200 km away by predicting seasonal presence of infected bank voles in a five-year period (2007-2010 and 2015). The distribution of PUUV-infected voles varied seasonally and inter-annually. In spring, micro-habitat variables related to cover and food availability in forests predicted both bank vole and infected bank vole presence. In fall, the presence of PUUV-infected voles was generally restricted to spruce forests where cover was abundant, despite the broad landscape distribution of bank voles in general. We hypothesize that the discrepancy in distribution between infected and uninfected hosts in fall was related to higher survival of PUUV and/or PUUV-infected voles in the environment, especially where cover is plentiful. Moist and mesic old spruce forests, with abundant cover such as large holes and bilberry shrubs, also providing food, were most likely to harbor infected bank voles. The models developed using long-term and spatially extensive data can be extrapolated to other areas in northern Fennoscandia. To predict the hazard of directly transmitted zoonoses in areas with unknown risk status, models based on micro-habitat variables and developed through machine learning techniques in well-studied systems could be used.
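Boosted regression trees as used here are closely related to gradient-boosted classification trees; the sketch below fits such a model to hypothetical micro-habitat predictors and reports a validation AUC, purely to illustrate the workflow (the study's predictors, data, and tuning differ).

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 800
X = np.column_stack([
    rng.uniform(0.0, 1.0, n),      # ground cover (e.g., bilberry shrub share)
    rng.uniform(0.0, 120.0, n),    # spruce forest age (years)
    rng.uniform(0.0, 1.0, n),      # soil moisture index
])
puuv_present = rng.integers(0, 2, n)           # infected bank voles trapped (yes/no)

X_tr, X_te, y_tr, y_te = train_test_split(X, puuv_present, random_state=0)
brt = GradientBoostingClassifier(n_estimators=500, learning_rate=0.01, max_depth=3)
brt.fit(X_tr, y_tr)
print("hold-out AUC:", round(roc_auc_score(y_te, brt.predict_proba(X_te)[:, 1]), 3))
```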
Interspecies correlation estimation (ICE) models were developed for 30 nonpolar aromatic compounds to allow comparison of prediction accuracy between 2 data compilation approaches. Type 1 models used data combined across studies, and type 2 models used data combined only within s...
Population-based absolute risk estimation with survey data
Kovalchik, Stephanie A.; Pfeiffer, Ruth M.
2013-01-01
Absolute risk is the probability that a cause-specific event occurs in a given time interval in the presence of competing events. We present methods to estimate population-based absolute risk from a complex survey cohort that can accommodate multiple exposure-specific competing risks. The hazard function for each event type consists of an individualized relative risk multiplied by a baseline hazard function, which is modeled nonparametrically or parametrically with a piecewise exponential model. An influence method is used to derive a Taylor-linearized variance estimate for the absolute risk estimates. We introduce novel measures of the cause-specific influences that can guide modeling choices for the competing event components of the model. To illustrate our methodology, we build and validate cause-specific absolute risk models for cardiovascular and cancer deaths using data from the National Health and Nutrition Examination Survey. Our applications demonstrate the usefulness of survey-based risk prediction models for predicting health outcomes and quantifying the potential impact of disease prevention programs at the population level. PMID:23686614
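The absolute-risk construction, a cause-specific hazard (individualized relative risk times a baseline hazard) integrated in the presence of a competing event, can be written down compactly for a piecewise exponential baseline; the numbers below are invented and the function is a sketch of the formula, not the survey-weighted estimator.

```python
import numpy as np

# Piecewise-constant baseline hazards (per year) over three follow-up intervals
# for the event of interest (1) and a competing event (2); rr_* are the
# individualized relative risks multiplying each baseline hazard.
cuts = np.array([0.0, 5.0, 10.0, 15.0])
h1 = np.array([0.002, 0.004, 0.008])
h2 = np.array([0.003, 0.005, 0.007])
rr1, rr2 = 1.8, 1.2

def absolute_risk(cuts, h1, h2, rr1, rr2):
    """Probability of event 1 by the end of follow-up, accounting for event 2."""
    risk, surv = 0.0, 1.0
    for i in range(len(h1)):
        dt = cuts[i + 1] - cuts[i]
        lam1, lam2 = rr1 * h1[i], rr2 * h2[i]
        lam = lam1 + lam2
        # P(event 1 in this interval | event-free at its start)
        risk += surv * (lam1 / lam) * (1.0 - np.exp(-lam * dt))
        surv *= np.exp(-lam * dt)
    return risk

print("15-year absolute risk of event 1:", round(absolute_risk(cuts, h1, h2, rr1, rr2), 4))
```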
Using Socioeconomic Data to Calibrate Loss Estimates
NASA Astrophysics Data System (ADS)
Holliday, J. R.; Rundle, J. B.
2013-12-01
One of the loftier goals in seismic hazard analysis is the creation of an end-to-end earthquake prediction system: a "rupture to rafters" work flow that takes a prediction of fault rupture, propagates it with a ground shaking model, and outputs a damage or loss profile at a given location. So far, the initial prediction of an earthquake rupture (either as a point source or a fault system) has proven to be the most difficult and least solved step in this chain. However, this may soon change. The Collaboratory for the Study of Earthquake Predictability (CSEP) has amassed a suite of earthquake source models for assorted testing regions worldwide. These models are capable of providing rate-based forecasts for earthquake (point) sources over a range of time horizons. Furthermore, these rate forecasts can be easily refined into probabilistic source forecasts. While it's still difficult to fully assess the "goodness" of each of these models, progress is being made: new evaluation procedures are being devised and earthquake statistics continue to accumulate. The scientific community appears to be heading towards a better understanding of rupture predictability. Ground shaking mechanics are better understood, and many different sophisticated models exist. While these models tend to be computationally expensive and often regionally specific, they do a good job at matching empirical data. It is perhaps time to start addressing the third step in the seismic hazard prediction system. We present a model for rapid economic loss estimation using ground motion (PGA or PGV) and socioeconomic measures as its input. We show that the model can be calibrated on a global scale and applied worldwide. We also suggest how the model can be improved and generalized to non-seismic natural disasters such as hurricane and severe wind storms.
NASA Astrophysics Data System (ADS)
Havens, H.; Luther, M. E.; Meyers, S. D.
2008-12-01
Response time is critical following a hazardous spill in a marine environment and rapid assessment of circulation patterns can mitigate the damage. Tampa Bay Physical Oceanographic Real-Time System (TB-PORTS) data are used to drive a numerical circulation model of the bay for the purpose of hazardous material spill response, monitoring of human health risks, and environmental protection and management. The model is capable of rapidly producing forecast simulations that, in the event of a human health or ecosystem threat, can alert authorities to areas in Tampa Bay with a high probability of being affected by the material. Responders to an anhydrous ammonia spill in November 2007 in Tampa Bay utilized the numerical model of circulation in the estuary to predict where the spill was likely to be transported. The model quickly generated a week-long simulation predicting how winds and currents might move the spill around the bay. The physical mechanisms transporting ammonium alternated from being tidally driven for the initial two days following the spill to a more classical two-layered circulation for the remainder of the simulation. Velocity profiles of Tampa Bay reveal a strong outward flowing current present at the time of the simulation which acted as a significant transport mechanism for ammonium within the bay. Probability distributions, calculated from the predicted model trajectories, guided sampling in the days after the spill resulting in the detection of a toxic Pseudo-nitzschia bloom that likely was initiated as a result of the anhydrous ammonia spill. The prediction system at present is only accessible to scientists in the Ocean Monitoring and Prediction Lab (OMPL) at the University of South Florida. The forecast simulations are compiled into an animation that is provided to end users at their request. In the future, decision makers will be allowed access to an online component of the coastal prediction system that can be used to manage response and mitigation efforts in order to reduce the risk from such disasters as hazardous material spills or ship groundings.
NASA Astrophysics Data System (ADS)
Rahman, M. Moklesur; Bai, Ling; Khan, Nangyal Ghani; Li, Guohui
2018-02-01
The Himalayan-Tibetan region has a long history of devastating earthquakes with widespread casualties and socio-economic damage. Here, we conduct a probabilistic seismic hazard analysis by incorporating the incomplete historical earthquake records along with the instrumental earthquake catalogs for the Himalayan-Tibetan region. Historical earthquake records back to more than 1000 years ago and an updated, homogenized and declustered instrumental earthquake catalog since 1906 are utilized. The essential seismicity parameters, namely, the mean seismicity rate γ, the Gutenberg-Richter b value, and the maximum expected magnitude M max are estimated using the maximum likelihood algorithm assuming the incompleteness of the catalog. To compute the hazard value, three seismogenic source models (smoothed gridded, linear, and areal sources) and two sets of ground motion prediction equations are combined by means of a logic tree to account for the epistemic uncertainties. The peak ground acceleration (PGA) and spectral acceleration (SA) at 0.2 and 1.0 s are predicted for 2 and 10% probabilities of exceedance over 50 years assuming bedrock conditions. The resulting PGA and SA maps show a significant spatio-temporal variation in the hazard values. In general, hazard values are found to be much higher than in previous studies for regions where great earthquakes have actually occurred. The use of the historical and instrumental earthquake catalogs in combination with multiple seismogenic source models provides better seismic hazard constraints for the Himalayan-Tibetan region.
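One of the seismicity parameters mentioned, the Gutenberg-Richter b value, is commonly estimated with the Aki maximum-likelihood formula (with a half-bin correction for binned magnitudes); the snippet below applies it to a synthetic catalog and is only a sketch of that single step, not of the full hazard calculation.

```python
import numpy as np

def b_value_mle(mags, mc, dm=0.1):
    """Aki-style maximum-likelihood b value for magnitudes above completeness mc,
    with a half-bin correction dm/2 for magnitude binning."""
    m = mags[mags >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

# Synthetic unbinned catalog: magnitudes above Mc = 4.0 drawn so that b is close to 1.
rng = np.random.default_rng(0)
mags = 4.0 + rng.exponential(1.0 / np.log(10.0), 5000)
print("estimated b value:", round(b_value_mle(mags, mc=4.0, dm=0.0), 2))
```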
NASA Astrophysics Data System (ADS)
Chen, Wei; Pourghasemi, Hamid Reza; Panahi, Mahdi; Kornejady, Aiding; Wang, Jiale; Xie, Xiaoshen; Cao, Shubo
2017-11-01
The spatial prediction of landslide susceptibility is an important prerequisite for the analysis of landslide hazards and risks in any area. This research uses three data mining techniques, namely an adaptive neuro-fuzzy inference system combined with frequency ratio (ANFIS-FR), a generalized additive model (GAM), and a support vector machine (SVM), for landslide susceptibility mapping in Hanyuan County, China. In the first step, in accordance with a review of the previous literature, twelve conditioning factors, including slope aspect, altitude, slope angle, topographic wetness index (TWI), plan curvature, profile curvature, distance to rivers, distance to faults, distance to roads, land use, normalized difference vegetation index (NDVI), and lithology, were selected. In the second step, a collinearity test and correlation analysis between the conditioning factors and landslides were applied. In the third step, we used the three advanced methods, namely, ANFIS-FR, GAM, and SVM, for landslide susceptibility modeling. Subsequently, the accuracy of the results was validated using a receiver operating characteristic curve. The results showed that all three models have good prediction capabilities, while the SVM model has the highest prediction rate of 0.875, followed by the ANFIS-FR and GAM models with prediction rates of 0.851 and 0.846, respectively. Thus, the landslide susceptibility maps produced in the study area can be applied for management of hazards and risks in landslide-prone Hanyuan County.
Impact from Magnitude-Rupture Length Uncertainty on Seismic Hazard and Risk
NASA Astrophysics Data System (ADS)
Apel, E. V.; Nyst, M.; Kane, D. L.
2015-12-01
In probabilistic seismic hazard and risk assessments, seismic sources are typically divided into two groups: fault sources (to model known faults) and background sources (to model unknown faults). In areas like the Central and Eastern United States and Hawaii, the hazard and risk are driven primarily by background sources. Background sources can be modeled as areas, points or pseudo-faults. When background sources are modeled as pseudo-faults, magnitude-length or magnitude-area scaling relationships are required to construct these pseudo-faults. However, the uncertainty associated with these relationships is often ignored or discarded in hazard and risk models, particularly when fault sources are the dominant contributor. Conversely, in areas modeled only with background sources these uncertainties are much more significant. In this study we test the impact of using various relationships and the resulting epistemic uncertainties on the seismic hazard and risk in the Central and Eastern United States and Hawaii. It is common to use only one magnitude-length relationship when calculating hazard. However, Stirling et al. (2013) showed that for a given suite of magnitude-rupture length relationships the variability can be quite large. The 2014 US National Seismic Hazard Maps (Petersen et al., 2014) used one magnitude-rupture length relationship (Somerville et al., 2001) in the Central and Eastern United States, and did not consider variability in the seismogenic rupture plane width. Here we use a suite of metrics to compare the USGS approach with these variable uncertainty models to assess 1) the impact on hazard and risk and 2) the epistemic uncertainty associated with choice of relationship. In areas where the seismic hazard is dominated by larger crustal faults (e.g. New Madrid) the choice of magnitude-rupture length relationship has little impact on the hazard or risk. However, away from these regions, the choice of relationship is more significant and may approach the size of the uncertainty associated with the ground motion prediction equation suite.
Gerrits, Esther G; Alkhalaf, Alaa; Landman, Gijs W D; van Hateren, Kornelis J J; Groenier, Klaas H; Struck, Joachim; Schulte, Janin; Gans, Reinold O B; Bakker, Stephan J L; Kleefstra, Nanne; Bilo, Henk J G
2014-01-01
Oxidative stress plays an underlying pathophysiologic role in the development of diabetes complications. The aim of this study was to investigate peroxiredoxin 4 (Prx4), a proposed novel biomarker of oxidative stress, and its association with and capability as a biomarker in predicting (cardiovascular) mortality in type 2 diabetes mellitus. Prx4 was assessed in baseline serum samples of 1161 type 2 diabetes patients. Cox proportional hazard models were used to evaluate the relationship between Prx4 and (cardiovascular) mortality. Risk prediction capabilities of Prx4 for (cardiovascular) mortality were assessed with Harrell's C statistic, the integrated discrimination improvement and net reclassification improvement. Mean age was 67 and the median diabetes duration was 4.0 years. After a median follow-up period of 5.8 years, 327 patients died; 137 cardiovascular deaths. Prx4 was associated with (cardiovascular) mortality. The Cox proportional hazard models added the variables: Prx4 (model 1); age and gender (model 2); and BMI, creatinine, smoking, diabetes duration, systolic blood pressure, cholesterol-HDL ratio, history of macrovascular complications, and albuminuria (model 3). Hazard ratios (HR) (95% CI) for cardiovascular mortality were 1.93 (1.57 - 2.38), 1.75 (1.39 - 2.20), and 1.63 (1.28 - 2.09) for models 1, 2 and 3, respectively. HR for all-cause mortality were 1.73 (1.50 - 1.99), 1.50 (1.29 - 1.75), and 1.44 (1.23 - 1.67) for models 1, 2 and 3, respectively. Addition of Prx4 to the traditional risk factors slightly improved risk prediction of (cardiovascular) mortality. Prx4 is independently associated with (cardiovascular) mortality in type 2 diabetes patients. After addition of Prx4 to the traditional risk factors, there was a slight improvement in risk prediction of (cardiovascular) mortality in this patient group.
The Impact Hazard in the Context of Other Natural Hazards and Predictive Science
NASA Astrophysics Data System (ADS)
Chapman, C. R.
1998-09-01
The hazard due to impact of asteroids and comets has been recognized as analogous, in some ways, to other infrequent but consequential natural hazards (e.g. floods and earthquakes). Yet, until recently, astronomers and space agencies have felt no need to do what their colleagues and analogous agencies must do in order to assess, quantify, and communicate predictions to those with a practical interest in the predictions (e.g. public officials who must assess the threats, prepare for mitigation, etc.). Recent heightened public interest in the impact hazard, combined with increasing numbers of "near misses" (certain to increase as Spaceguard is implemented), requires that astronomers accept the responsibility to place their predictions and assessments in terms that may be appropriately considered. I will report on preliminary results of a multi-year GSA/NCAR study of "Prediction in the Earth Sciences: Use and Misuse in Policy Making" in which I have represented the impact hazard, while others have treated earthquakes, floods, weather, global climate change, nuclear waste disposal, acid rain, etc. The impact hazard presents an end-member example of a natural hazard, helping those dealing with more prosaic issues to learn from an extreme. On the other hand, I bring to the astronomical community some lessons long adopted in other cases: the need to understand the policy purposes of impact predictions, the need to assess potential societal impacts, the requirement to very carefully assess prediction uncertainties, considerations of potential public uses of the predictions, awareness of ethical considerations (e.g. conflicts of interest) that affect predictions and acceptance of predictions, awareness of appropriate means for publicly communicating predictions, and considerations of the international context (especially for a hazard that knows no national boundaries).
Ground Motion Prediction Models for Caucasus Region
NASA Astrophysics Data System (ADS)
Jorjiashvili, Nato; Godoladze, Tea; Tvaradze, Nino; Tumanova, Nino
2016-04-01
Ground motion prediction models (GMPMs) relate ground motion intensity measures to variables describing earthquake source, path, and site effects. Estimation of expected ground motion is fundamental to earthquake hazard assessment. The most commonly used parameters for attenuation relations are peak ground acceleration and spectral acceleration, because these parameters provide the information needed for seismic hazard assessment. Development of the Georgian Digital Seismic Network began in 2003. In this study, new GMPMs are obtained based on new data from the Georgian seismic network and also from neighboring countries. Model parameters are estimated in the classical statistical way, by regression analysis. Site ground conditions are additionally considered, because the same earthquake recorded at the same distance may cause different damage depending on ground conditions. Empirical ground-motion prediction models require adjustment to make them appropriate for site-specific scenarios. However, the process of making such adjustments remains a challenge. This work presents a holistic framework for the development of a peak ground acceleration (PGA) or spectral acceleration (SA) GMPM that is easily adjustable to different seismological conditions and does not suffer from the practical problems associated with adjustments in the response spectral domain.
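The regression step described above can be pictured with a toy fit of a standard attenuation form; the functional form, coefficients, and synthetic records below are illustrative assumptions, not the model fitted in the study.

```python
# Sketch of a simple ground-motion prediction regression:
# ln(PGA) = c0 + c1*M + c2*ln(R + h) + site term + error (illustrative form only).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 500
M = rng.uniform(4.0, 7.0, n)                 # magnitude
R = rng.uniform(5.0, 200.0, n)               # source-to-site distance, km
soft_soil = rng.integers(0, 2, n)            # simple binary site condition
ln_pga = 1.2 + 0.9 * M - 1.1 * np.log(R + 10.0) + 0.3 * soft_soil + rng.normal(0, 0.5, n)

X = np.column_stack([M, np.log(R + 10.0), soft_soil])
fit = LinearRegression().fit(X, ln_pga)
print("coefficients:", fit.intercept_, fit.coef_)   # recovered c0, c1, c2, site term
print("sigma:", np.std(ln_pga - fit.predict(X)))    # aleatory variability of the fit
```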
Computational Embryology and Predictive Toxicology of Cleft Palate
Capacity to model and simulate key events in developmental toxicity using computational systems biology and biological knowledge steps closer to hazard identification across the vast landscape of untested environmental chemicals. In this context, we chose cleft palate as a model ...
A PROBABILISTIC MODELING FRAMEWORK FOR PREDICTING POPULATION EXPOSURES TO BENZENE
The US Environmental Protection Agency (EPA) is modifying their probabilistic Stochastic Human Exposure Dose Simulation (SHEDS) model to assess aggregate exposures to air toxics. Air toxics include urban Hazardous Air Pollutants (HAPS) such as benzene from mobile sources, part...
Amiri, Zohreh; Mohammad, Kazem; Mahmoudi, Mahmood; Parsaeian, Mahbubeh; Zeraati, Hojjat
2013-01-01
There are numerous unanswered questions in the application of artificial neural network models for analysis of survival data. In most studies, independent variables have been studied as qualitative dichotomous variables, and the consequences of using discrete and continuous quantitative, ordinal, or multinomial categorical predictive variables in these models are not well understood in comparison to conventional models. This study was designed and conducted to examine the application of these models in determining the survival of gastric cancer patients, in comparison to the Cox proportional hazards model. We studied the postoperative survival of 330 gastric cancer patients who underwent surgery at a surgical unit of the Iran Cancer Institute over a five-year period. Covariates of age, gender, history of substance abuse, cancer site, type of pathology, presence of metastasis, stage, and number of complementary treatments were entered in the models, and survival probabilities were calculated at 6, 12, 18, 24, 36, 48, and 60 months using the Cox proportional hazards and neural network models. We estimated coefficients of the Cox model and the weights in the neural network (with 3, 5, and 7 nodes in the hidden layer) in the training group, and used them to derive predictions in the study group. Predictions with these two methods were compared with those of the Kaplan-Meier product limit estimator as the gold standard. Comparisons were performed with the Friedman and Kruskal-Wallis tests. Survival probabilities at different times were determined using the Cox proportional hazards model and a neural network with three nodes in the hidden layer; the ratios of the standard errors of these two methods to that of the Kaplan-Meier method were 1.1593 and 1.0071, respectively. This revealed a significant difference between the Cox and Kaplan-Meier estimates (P < 0.05), no significant difference between the Cox model and the neural network or between the neural network and the standard (Kaplan-Meier), and better accuracy for the neural network (with 3 nodes in the hidden layer). Probabilities of survival were also calculated using three neural network models with 3, 5, and 7 nodes in the hidden layer; none of these predictions differed significantly from the Kaplan-Meier results, and they became more comparable towards the last months (fifth year). However, we observed better accuracy using the neural network with 5 nodes in the hidden layer. Comparing the Cox proportional hazards model and a neural network with 3 nodes in the hidden layer, we found better accuracy with the neural network model. Neural networks can provide more accurate predictions of survival probabilities than the Cox proportional hazards model, especially now that advances in computer science have eliminated limitations associated with complex computations. Adding too many hidden-layer nodes is not recommended, because sample-size-related effects can reduce accuracy. We recommend increasing the number of nodes only as long as accuracy continues to improve (i.e., the mean standard error decreases), and stopping when this trend changes.
Foraker, Randi E; Greiner, Melissa; Sims, Mario; Tucker, Katherine L; Towfighi, Amytis; Bidulescu, Aurelian; Shoben, Abigail B; Smith, Sakima; Talegawkar, Sameera; Blackshear, Chad; Wang, Wei; Hardy, Natalie Chantelle; O'Brien, Emily
2016-07-01
Evidence from existing cohort studies supports the prediction of incident coronary heart disease and stroke using 10-year cardiovascular disease (CVD) risk scores and the American Heart Association/American Stroke Association's cardiovascular health (CVH) metric. We included all Jackson Heart Study participants with complete scoring information at the baseline study visit (2000-2004) who had no history of stroke (n = 4,140). We used Kaplan-Meier methods to calculate the cumulative incidence of stroke and used Cox models to estimate hazard ratios and 95% CIs for stroke according to CVD risk and CVH score. We compared the discrimination of the 2 models according to the Harrell c index and plotted predicted vs observed stroke risk calibration plots for each of the 2 models. The median age of the African American participants was 54.5 years, and 65% were female. The cumulative incidence of stroke increased across worsening categories of CVD risk and CVH. A 1-unit increase in CVD risk increased the hazard of stroke (1.07, 1.06-1.08), whereas each 1-unit increase in CVH corresponded to a decreased hazard of stroke (0.76, 0.69-0.83). As evidenced by the c statistics, the CVH model was less discriminating than the CVD risk model (0.59 [0.55-0.64] vs 0.79 [0.76-0.83]). Both scores were associated with incident stroke in a dose-response fashion; however, the CVD risk model was more discriminating than the CVH model. The CVH score may still be preferable for its simplicity in application to broad patient populations and public health efforts. Copyright © 2016 Elsevier Inc. All rights reserved.
Nurses' short-term prediction of violence in acute psychiatric intensive care.
Björkdahl, A; Olsson, D; Palmstierna, T
2006-03-01
To evaluate the short-term predictive capacity of the Brøset Violence Checklist (BVC) when used by nurses in a psychiatric intensive care unit. Seventy-three patients were assessed according to the BVC three times daily. Violent incidents were recorded with the Staff Observation Aggression Scale, revised version. An extended Cox proportional hazards model with multiple events and time-dependent covariates was estimated to evaluate how the highest BVC sum of the last 24 h and its separate items affect the risk for severe violence within the next 24 h. With a BVC sum of one or more, hazard for severe violence was six times higher than if the sum was zero. Four of the six separate items significantly increased the risk for severe violence with hazard ratios between 3.0 and 6.3. Risk for in-patient violence in a short-term perspective can to a high degree be predicted by nurses using the BVC.
Suh, Young Joo; Han, Kyunghwa; Chang, Suyon; Kim, Jin Young; Im, Dong Jin; Hong, Yoo Jin; Lee, Hye-Jeong; Hur, Jin; Kim, Young Jin; Choi, Byoung Wook
2017-09-01
The SYNergy between percutaneous coronary intervention with TAXus and cardiac surgery (SYNTAX) score is an invasive coronary angiography (ICA)-based score for quantifying the complexity of coronary artery disease (CAD). Although the SYNTAX score was originally developed based on ICA, recent publications have reported that coronary computed tomography angiography (CCTA) is a feasible modality for the estimation of the SYNTAX score. The aim of our study was to investigate the prognostic value of the SYNTAX score, based on CCTA, for the prediction of major adverse cardiac and cerebrovascular events (MACCEs) in patients with complex CAD. The current study was approved by the institutional review board of our institution, and informed consent was waived for this retrospective cohort study. We included 251 patients (173 men, mean age 66.0 ± 9.29 years) who had complex CAD [3-vessel disease or left main (LM) disease] on CCTA. The SYNTAX score was obtained on the basis of CCTA. Follow-up clinical outcome data regarding composite MACCEs were also obtained. Cox proportional hazards models were developed to predict the risk of MACCEs based on clinical variables, treatment, and computed tomography (CT)-SYNTAX scores. During the median follow-up period of 1517 days, there were 48 MACCEs. Univariate Cox hazards models demonstrated that MACCEs were associated with advanced age, low body mass index (BMI), and dyslipidemia (P < .2). In patients with LM disease, MACCEs were associated with a higher SYNTAX score. In patients with CT-SYNTAX score ≥23, patients who underwent coronary artery bypass graft surgery (CABG) and percutaneous coronary intervention had significantly lower hazard ratios than patients who were treated with medication alone. In the multivariate Cox hazards model, advanced age, low BMI, and higher SYNTAX score showed an increased hazard ratio for MACCE, while treatment with CABG showed a lower hazard ratio (P < .2). On the basis of our results, the CT-SYNTAX score can be a useful method for noninvasively predicting MACCEs in patients with complex CAD, especially in patients with LM disease.
Abdel Raheem, Ali; Shin, Tae Young; Chang, Ki Don; Santok, Glen Denmer R; Alenzi, Mohamed Jayed; Yoon, Young Eun; Ham, Won Sik; Han, Woong Kyu; Choi, Young Deuk; Rha, Koon Ho
2018-06-19
To develop a predictive nomogram for chronic kidney disease-free survival probability in the long term after partial nephrectomy. A retrospective analysis was carried out of 698 patients with T1 renal tumors undergoing partial nephrectomy at a tertiary academic institution. A multivariable Cox regression analysis was carried out based on parameters proven to have an impact on postoperative renal function. Patients with incomplete data, <12 months follow up and preoperative chronic kidney disease stage III or greater were excluded. The study end-points were to identify independent risk factors for new-onset chronic kidney disease development, as well as to construct a predictive model for chronic kidney disease-free survival probability after partial nephrectomy. The median age was 52 years, median tumor size was 2.5 cm and mean warm ischemia time was 28 min. A total of 91 patients (13.1%) developed new-onset chronic kidney disease at a median follow up of 60 months. The chronic kidney disease-free survival rates at 1, 3, 5 and 10 years were 97.1%, 94.4%, 85.3% and 70.6%, respectively. On multivariable Cox regression analysis, age (hazard ratio 1.041, P = 0.001), male sex (hazard ratio 1.653, P < 0.001), diabetes mellitus (hazard ratio 1.921, P = 0.046), tumor size (hazard ratio 1.331, P < 0.001) and preoperative estimated glomerular filtration rate (hazard ratio 0.937, P < 0.001) were independent predictors for new-onset chronic kidney disease. The C-index for chronic kidney disease-free survival was 0.853 (95% confidence interval 0.815-0.895). We developed a novel nomogram for predicting the 5-year chronic kidney disease-free survival probability after on-clamp partial nephrectomy. This model might have an important role in partial nephrectomy decision-making and follow-up planning after surgery. External validation of our nomogram in a larger cohort of patients should be considered. © 2018 The Japanese Urological Association.
Jarvis, J; Seed, M; Elton, R; Sawyer, L; Agius, R
2005-01-01
Aims: To investigate quantitatively, relationships between chemical structure and reported occupational asthma hazard for low molecular weight (LMW) organic compounds; to develop and validate a model linking asthma hazard with chemical substructure; and to generate mechanistic hypotheses that might explain the relationships. Methods: A learning dataset used 78 LMW chemical asthmagens reported in the literature before 1995, and 301 control compounds with recognised occupational exposures and hazards other than respiratory sensitisation. The chemical structures of the asthmagens and control compounds were characterised by the presence of chemical substructure fragments. Odds ratios were calculated for these fragments to determine which were associated with a likelihood of being reported as an occupational asthmagen. Logistic regression modelling was used to identify the independent contribution of these substructures. A post-1995 set of 21 asthmagens and 77 controls were selected to externally validate the model. Results: Nitrogen or oxygen containing functional groups such as isocyanate, amine, acid anhydride, and carbonyl were associated with an occupational asthma hazard, particularly when the functional group was present twice or more in the same molecule. A logistic regression model using only statistically significant independent variables for occupational asthma hazard correctly assigned 90% of the model development set. The external validation showed a sensitivity of 86% and specificity of 99%. Conclusions: Although a wide variety of chemical structures are associated with occupational asthma, bifunctional reactivity is strongly associated with occupational asthma hazard across a range of chemical substructures. This suggests that chemical cross-linking is an important molecular mechanism leading to the development of occupational asthma. The logistic regression model is freely available on the internet and may offer a useful but inexpensive adjunct to the prediction of occupational asthma hazard. PMID:15778257
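A minimal sketch of the modelling idea above: logistic regression on binary substructure indicators, with odds ratios read off as exponentiated coefficients. The fragment names and simulated labels are hypothetical, not the published dataset or its fitted coefficients.

```python
# Sketch: asthma-hazard odds ratios from chemical substructure indicators (assumed data).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 400
X = pd.DataFrame({
    "isocyanate": rng.integers(0, 2, n),
    "acid_anhydride": rng.integers(0, 2, n),
    "two_or_more_carbonyl": rng.integers(0, 2, n),   # crude bifunctional-reactivity flag
})
# Simulated labels: bifunctional compounds are much more likely to be asthmagens.
logit = -3.0 + 1.5 * X["isocyanate"] + 1.2 * X["acid_anhydride"] + 2.0 * X["two_or_more_carbonyl"]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

# Note: sklearn regularizes by default, so these ORs are slightly shrunken estimates.
model = LogisticRegression().fit(X, y)
odds_ratios = np.exp(model.coef_[0])
print(dict(zip(X.columns, odds_ratios.round(2))))    # OR per substructure fragment
```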
Salim, Agus; Tai, E Shyong; Tan, Vincent Y; Welsh, Alan H; Liew, Reginald; Naidoo, Nasheen; Wu, Yi; Yuan, Jian-Min; Koh, Woon P; van Dam, Rob M
2016-08-01
In western populations, high-sensitivity C-reactive protein (hsCRP), and to a lesser degree serum creatinine and haemoglobin A1c, predict risk of coronary heart disease (CHD). However, data on Asian populations that are increasingly affected by CHD are sparse and it is not clear whether these biomarkers can be used to improve CHD risk classification. We conducted a nested case-control study within the Singapore Chinese Health Study cohort, with incident 'hard' CHD (myocardial infarction or CHD death) as an outcome. We used data from 965 men (298 cases, 667 controls) and 528 women (143 cases, 385 controls) to examine the utility of hsCRP, serum creatinine and haemoglobin A1c in improving the prediction of CHD risk over and above traditional risk factors for CHD included in the ATP III model. For each sex, the performance of models with only traditional risk factors used in the ATP III model was compared with models with the biomarkers added using weighted Cox proportional hazards analysis. The impact of adding these biomarkers was assessed using the net reclassification improvement index. For men, log-transformed hsCRP (hazard ratio 1.25, 95% confidence interval: 1.05; 1.49) and log-transformed serum creatinine (hazard ratio 4.82, 95% confidence interval: 2.10; 11.04) showed statistically significant associations with CHD risk when added to the ATP III model. We did not observe a significant association between log-transformed haemoglobin A1c and CHD risk (hazard ratio 1.83, 95% confidence interval: 0.21; 16.06). Adding hsCRP and serum creatinine to the ATP III model improved risk classification in men with a net gain of 6.3% of cases (p-value = 0.001) being reclassified to a higher risk category, while it did not significantly reduce the accuracy of classification for non-cases. For women, squared hsCRP was borderline significantly (hazard ratio 1.01, 95% confidence interval: 1.00; 1.03) and squared serum creatinine was significantly (hazard ratio 1.81, 95% confidence interval: 1.49; 2.21) associated with CHD risk. However, the association between squared haemoglobin A1c and CHD risk was not significant (hazard ratio 1.05, 95% confidence interval: 0.99; 1.12). The addition of hsCRP and serum creatinine to the ATP III model resulted in 3.7% of future cases being reclassified to a higher risk category (p-value = 0.025), while it did not significantly reduce the accuracy of classification for non-cases. Adding hsCRP and serum creatinine, but not haemoglobin A1c, to traditional risk factors improved CHD risk prediction among non-diabetic Singaporean Chinese. The improved risk estimates will allow better identification of individuals at high risk of CHD than existing risk calculators such as the ATP III model. © The European Society of Cardiology 2016.
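The net reclassification improvement used above can be computed directly from predicted risks before and after adding biomarkers; the sketch below uses illustrative risk cut-points and synthetic probabilities, not the study's categories or data.

```python
# Sketch of categorical NRI: upward category moves should happen for cases,
# downward moves for non-cases. Cut-points and data are assumed for illustration.
import numpy as np

def category_nri(p_old, p_new, event, cuts=(0.05, 0.20)):
    """Categorical NRI from old and new predicted risks and observed events."""
    cat_old = np.digitize(p_old, cuts)
    cat_new = np.digitize(p_new, cuts)
    up, down = cat_new > cat_old, cat_new < cat_old
    cases, controls = event == 1, event == 0
    nri_cases = up[cases].mean() - down[cases].mean()
    nri_controls = down[controls].mean() - up[controls].mean()
    return nri_cases, nri_controls, nri_cases + nri_controls

rng = np.random.default_rng(3)
event = rng.integers(0, 2, 1000)
p_old = np.clip(rng.normal(0.10 + 0.05 * event, 0.05), 0, 1)   # baseline-model risk
p_new = np.clip(p_old + rng.normal(0.02 * event, 0.02), 0, 1)  # model with biomarkers added
print(category_nri(p_old, p_new, event))
```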
AIR QUALITY MODELING OF HAZARDOUS POLLUTANTS: CURRENT STATUS AND FUTURE DIRECTIONS
The paper presents a review of current air toxics modeling applications and discusses possible advanced approaches. Many applications require the ability to predict hot spots from industrial sources or large roadways that are needed for community health and Environmental Justice...
Systems Toxicology of Embryo Development (9th Copenhagen Workshop)
An important consideration for predictive toxicology is to identify developmental hazards utilizing mechanism-based in vitro assays (e.g., ToxCast) and in silico multiscale models. Steady progress has been made with agent-based models that recapitulate morphogenetic drivers for a...
Memory Binding Test Predicts Incident Amnestic Mild Cognitive Impairment.
Mowrey, Wenzhu B; Lipton, Richard B; Katz, Mindy J; Ramratan, Wendy S; Loewenstein, David A; Zimmerman, Molly E; Buschke, Herman
2016-07-14
The Memory Binding Test (MBT), previously known as Memory Capacity Test, has demonstrated discriminative validity for distinguishing persons with amnestic mild cognitive impairment (aMCI) and dementia from cognitively normal elderly. We aimed to assess the predictive validity of the MBT for incident aMCI. In a longitudinal, community-based study of adults aged 70+, we administered the MBT to 246 cognitively normal elderly adults at baseline and followed them annually. Based on previous work, a subtle reduction in memory binding at baseline was defined by a Total Items in the Paired (TIP) condition score of ≤22 on the MBT. Cox proportional hazards models were used to assess the predictive validity of the MBT for incident aMCI accounting for the effects of covariates. The hazard ratio of incident aMCI was also assessed for different prediction time windows ranging from 4 to 7 years of follow-up, separately. Among 246 controls who were cognitively normal at baseline, 48 developed incident aMCI during follow-up. A baseline MBT reduction was associated with an increased risk for developing incident aMCI (hazard ratio (HR) = 2.44, 95% confidence interval: 1.30-4.56, p = 0.005). When varying the prediction window from 4-7 years, the MBT reduction remained significant for predicting incident aMCI (HR range: 2.33-3.12, p: 0.0007-0.04). Persons with poor performance on the MBT are at significantly greater risk for developing incident aMCI. High hazard ratios up to seven years of follow-up suggest that the MBT is sensitive to early disease.
NASA Astrophysics Data System (ADS)
Rey, Julien; Beauval, Céline; Douglas, John
2018-05-01
Probabilistic seismic hazard assessments are the basis of modern seismic design codes. To test fully a seismic hazard curve at the return periods of interest for engineering would require many thousands of years' worth of ground-motion recordings. Because strong-motion networks are often only a few decades old (e.g. in mainland France the first accelerometric network dates from the mid-1990s), data from such sensors can be used to test hazard estimates only at very short return periods. In this article, several hundreds of years of macroseismic intensity observations for mainland France are interpolated using a robust kriging-with-a-trend technique to establish the earthquake history of every French mainland municipality. At 24 selected cities representative of the French seismic context, the number of exceedances of intensities IV, V and VI is determined over time windows considered complete. After converting these intensities to peak ground accelerations using the global conversion equation of Caprio et al. (Ground motion to intensity conversion equations (GMICEs): a global relationship and evaluation of regional dependency, Bulletin of the Seismological Society of America 105:1476-1490, 2015), these exceedances are compared with those predicted by the European Seismic Hazard Model 2013 (ESHM13). In half of the cities, the number of observed exceedances for low intensities (IV and V) is within the range of predictions of ESHM13. In the other half of the cities, the number of observed exceedances is higher than the predictions of ESHM13. For intensity VI, the match is closer, but the comparison is less meaningful due to a scarcity of data. According to this study, the ESHM13 underestimates hazard in roughly half of France, even when taking into account the uncertainty in the conversion from intensity to acceleration. However, these results are valid only for the acceleration range tested in this study (0.01 to 0.09 g).
NASA Astrophysics Data System (ADS)
Rey, Julien; Beauval, Céline; Douglas, John
2018-02-01
Probabilistic seismic hazard assessments are the basis of modern seismic design codes. To test fully a seismic hazard curve at the return periods of interest for engineering would require many thousands of years' worth of ground-motion recordings. Because strong-motion networks are often only a few decades old (e.g. in mainland France the first accelerometric network dates from the mid-1990s), data from such sensors can be used to test hazard estimates only at very short return periods. In this article, several hundreds of years of macroseismic intensity observations for mainland France are interpolated using a robust kriging-with-a-trend technique to establish the earthquake history of every French mainland municipality. At 24 selected cities representative of the French seismic context, the number of exceedances of intensities IV, V and VI is determined over time windows considered complete. After converting these intensities to peak ground accelerations using the global conversion equation of Caprio et al. (Ground motion to intensity conversion equations (GMICEs): a global relationship and evaluation of regional dependency, Bulletin of the Seismological Society of America 105:1476-1490, 2015), these exceedances are compared with those predicted by the European Seismic Hazard Model 2013 (ESHM13). In half of the cities, the number of observed exceedances for low intensities (IV and V) is within the range of predictions of ESHM13. In the other half of the cities, the number of observed exceedances is higher than the predictions of ESHM13. For intensity VI, the match is closer, but the comparison is less meaningful due to a scarcity of data. According to this study, the ESHM13 underestimates hazard in roughly half of France, even when taking into account the uncertainty in the conversion from intensity to acceleration. However, these results are valid only for the acceleration range tested in this study (0.01 to 0.09 g).
On the adaptive daily forecasting of seismic aftershock hazard
NASA Astrophysics Data System (ADS)
Ebrahimian, Hossein; Jalayer, Fatemeh; Asprone, Domenico; Lombardi, Anna Maria; Marzocchi, Warner; Prota, Andrea; Manfredi, Gaetano
2013-04-01
Post-earthquake ground motion hazard assessment is a fundamental initial step towards time-dependent seismic risk assessment for buildings in a post main-shock environment. Therefore, operative forecasting of seismic aftershock hazard forms a viable support basis for decision-making regarding search and rescue, inspection, repair, and re-occupation in a post main-shock environment. Arguably, an adaptive procedure for integrating the aftershock occurrence rate together with suitable ground motion prediction relations is key to Probabilistic Seismic Aftershock Hazard Assessment (PSAHA). In the short-term, the seismic hazard may vary significantly (Jordan et al., 2011), particularly after the occurrence of a high magnitude earthquake. Hence, PSAHA requires a reliable model that is able to track the time evolution of the earthquake occurrence rates together with suitable ground motion prediction relations. This work focuses on providing adaptive daily forecasts of the mean daily rate of exceeding various spectral acceleration values (the aftershock hazard). Two well-established earthquake occurrence models suitable for daily seismicity forecasts associated with the evolution of an aftershock sequence, namely, the modified Omori's aftershock model and the Epidemic Type Aftershock Sequence (ETAS) are adopted. The parameters of the modified Omori model are updated on a daily basis using Bayesian updating and based on the data provided by the ongoing aftershock sequence based on the methodology originally proposed by Jalayer et al. (2011). The Bayesian updating is used also to provide sequence-based parameter estimates for a given ground motion prediction model, i.e. the aftershock events in an ongoing sequence are exploited in order to update in an adaptive manner the parameters of an existing ground motion prediction model. As a numerical example, the mean daily rates of exceeding specific spectral acceleration values are estimated adaptively for the L'Aquila 2009 aftershock catalog. The parameters of the modified Omori model are estimated in an adaptive manner using the Bayesian updating based on the aftershock events that had already taken place at each day elapsed and using the Italian generic sequence (Lolli and Gasperini 2003) as prior information. For the ETAS model, the real-time daily forecast of the spatio-temporal evolution of the L'Aquila sequence provided for the Italian Civil Protection for managing the emergency (Marzocchi and Lombardi, 2009) is utilized. Moreover, the parameters of the ground motion prediction relation proposed by Sabetta and Pugliese (1996) are updated adaptively and on a daily basis using Bayesian updating based on the ongoing aftershock sequence. Finally, the forecasted daily rates of exceeding (first-mode) spectral acceleration values are compared with observed rates of exceedance calculated based on the wave-forms that have actually taken place. References Jalayer, F., Asprone, D., Prota, A., Manfredi, G. (2011). A decision support system for post-earthquake reliability assessment of structures subjected to after-shocks: an application to L'Aquila earthquake, 2009. Bull. Earthquake Eng. 9(4) 997-1014. Jordan, T.H., Chen Y-T., Gasparini P., Madariaga R., Main I., Marzocchi W., Papadopoulos G., Sobolev G., Yamaoka K., and J. Zschau (2011). Operational earthquake forecasting: State of knowledge and guidelines for implementation, Ann. Geophys. 54(4) 315-391, doi 10.4401/ag-5350. Lolli, B., and P. Gasperini (2003). 
Aftershocks hazard in Italy part I: estimation of time-magnitude distribution model parameters and computation of probabilities of occurrence. Journal of Seismology 7(2) 235-257. Marzocchi, W., and A.M. Lombardi (2009). Real-time forecasting following a damaging earthquake, Geophys. Res. Lett. 36, L21302, doi: 10.1029/2009GL040233. Sabetta F., A. Pugliese (1996) Estimation of response spectra and simulation of nonstationary earthquake ground motions. Bull Seismol Soc Am 86(2) 337-352.
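A minimal sketch of the forecasting logic described above: a modified Omori law gives the expected daily number of aftershocks, which is multiplied by a lognormal probability of exceeding a spectral-acceleration threshold for a single representative event. All parameter values are illustrative assumptions, not the L'Aquila or Sabetta-Pugliese estimates, and a full implementation would integrate the exceedance probability over the magnitude-distance distribution and update the parameters daily.

```python
# Sketch: mean daily rate of exceeding a spectral-acceleration threshold,
# combining a modified Omori rate with a lognormal ground-motion model (assumed values).
import numpy as np
from scipy.stats import norm

def omori_daily_rate(t_days, K=200.0, c=0.05, p=1.1):
    """Expected number of aftershocks above the minimum magnitude on day t."""
    return K / (t_days + c) ** p

def prob_sa_exceed(sa_threshold, median_sa=0.05, sigma_ln=0.6):
    """P(SA > threshold) for one representative aftershock, lognormal ground motion."""
    return 1.0 - norm.cdf((np.log(sa_threshold) - np.log(median_sa)) / sigma_ln)

days = np.arange(1, 31)
mean_daily_exceedances = omori_daily_rate(days) * prob_sa_exceed(0.1)   # SA = 0.1 g
for d, lam in zip(days[:5], mean_daily_exceedances[:5]):
    print(f"day {d}: mean rate of exceeding 0.1 g = {lam:.3f}")
```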
Validation of a 20-year forecast of US childhood lead poisoning: Updated prospects for 2010.
Jacobs, David E; Nevin, Rick
2006-11-01
We forecast childhood lead poisoning and residential lead paint hazard prevalence for 1990-2010, based on a previously unvalidated model that combines national blood lead data with three different housing data sets. The housing data sets, which describe trends in housing demolition, rehabilitation, window replacement, and lead paint, are the American Housing Survey, the Residential Energy Consumption Survey, and the National Lead Paint Survey. Blood lead data are principally from the National Health and Nutrition Examination Survey. New data now make it possible to validate the midpoint of the forecast time period. For the year 2000, the model predicted 23.3 million pre-1960 housing units with lead paint hazards, compared to an empirical HUD estimate of 20.6 million units. Further, the model predicted 498,000 children with elevated blood lead levels (EBL) in 2000, compared to a CDC empirical estimate of 434,000. The model predictions were well within 95% confidence intervals of empirical estimates for both residential lead paint hazard and blood lead outcome measures. The model shows that window replacement explains a large part of the dramatic reduction in lead poisoning that occurred from 1990 to 2000. Here, the construction of the model is described and updated through 2010 using new data. Further declines in childhood lead poisoning are achievable, but the goal of eliminating children's blood lead levels ≥10 μg/dL by 2010 is unlikely to be achieved without additional action. A window replacement policy will yield multiple benefits of lead poisoning prevention, increased home energy efficiency, decreased power plant emissions, improved housing affordability, and other previously unrecognized benefits. Finally, combining housing and health data could be applied to forecasting other housing-related diseases and injuries.
Validation of a 20-year forecast of US childhood lead poisoning: Updated prospects for 2010
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacobs, David E.; Nevin, Rick
2006-11-15
We forecast childhood lead poisoning and residential lead paint hazard prevalence for 1990-2010, based on a previously unvalidated model that combines national blood lead data with three different housing data sets. The housing data sets, which describe trends in housing demolition, rehabilitation, window replacement, and lead paint, are the American Housing Survey, the Residential Energy Consumption Survey, and the National Lead Paint Survey. Blood lead data are principally from the National Health and Nutrition Examination Survey. New data now make it possible to validate the midpoint of the forecast time period. For the year 2000, the model predicted 23.3 million pre-1960 housing units with lead paint hazards, compared to an empirical HUD estimate of 20.6 million units. Further, the model predicted 498,000 children with elevated blood lead levels (EBL) in 2000, compared to a CDC empirical estimate of 434,000. The model predictions were well within 95% confidence intervals of empirical estimates for both residential lead paint hazard and blood lead outcome measures. The model shows that window replacement explains a large part of the dramatic reduction in lead poisoning that occurred from 1990 to 2000. Here, the construction of the model is described and updated through 2010 using new data. Further declines in childhood lead poisoning are achievable, but the goal of eliminating children's blood lead levels ≥10 μg/dL by 2010 is unlikely to be achieved without additional action. A window replacement policy will yield multiple benefits of lead poisoning prevention, increased home energy efficiency, decreased power plant emissions, improved housing affordability, and other previously unrecognized benefits. Finally, combining housing and health data could be applied to forecasting other housing-related diseases and injuries.
Spatial predictive mapping using artificial neural networks
NASA Astrophysics Data System (ADS)
Noack, S.; Knobloch, A.; Etzold, S. H.; Barth, A.; Kallmeier, E.
2014-11-01
The modelling or prediction of complex geospatial phenomena (like the formation of geo-hazards) is one of the most important tasks for geoscientists. In practice, however, it faces various difficulties, caused mainly by the complexity of the relationships between the phenomena themselves and the controlling parameters, as well as by limitations of our knowledge about the nature of the physical/mathematical relationships and by restrictions regarding the accuracy and availability of data. In this situation, methods of artificial intelligence, such as artificial neural networks (ANN), offer a meaningful alternative modelling approach to exact mathematical modelling. In the past, the application of ANN technologies in geosciences was limited primarily by difficulties in integrating them into geo-data processing workflows. Against this background, the software advangeo® was developed to provide GIS users with a powerful tool for applying ANNs to prediction mapping and data preparation within their standard ESRI ArcGIS environment. In many case studies, such as land use planning, geo-hazards analysis and prevention, mineral potential mapping, and agriculture & forestry, advangeo® has shown its capabilities and strengths. The approach is able to add considerable value to existing data.
Evaluation of potential risks from ash disposal site leachate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mills, W.B.; Loh, J.Y.; Bate, M.C.
1999-04-01
A risk-based approach is used to evaluate potential human health risks associated with a discharge from an ash disposal site into a small stream. The RIVRISK model was used to estimate downstream concentrations and corresponding risks. The modeling and risk analyses focus on boron, the constituent of greatest potential concern to public health at the site investigated, in Riddle Run, Pennsylvania. Prior to performing the risk assessment, the model is validated by comparing observed and predicted results. The comparison is good and an uncertainty analysis is provided to explain the comparison. The hazard quotient (HQ) for boron is predicted to be greater than 1 at presently regulated compliance points over a range of flow rates. The reference dose (RfD) currently recommended by the United States Environmental Protection Agency (US EPA) was used for the analyses. However, the toxicity of boron as expressed by the RfD is now under review by both the U.S. EPA and the World Health Organization. Alternative reference doses being examined would produce predicted boron hazard quotients of less than 1 at nearly all flow conditions.
Predictive models in hazard assessment of Great Lakes contaminants for fish
Passino, Dora R. May
1986-01-01
A hazard assessment scheme was developed and applied to predict potential harm to aquatic biota of nearly 500 organic compounds detected by gas chromatography/mass spectrometry (GC/MS) in Great Lakes fish. The frequency of occurrence and estimated concentrations of compounds found in lake trout (Salvelinus namaycush) and walleyes (Stizostedion vitreum vitreum) were compared with available manufacturing and discharge information. Bioconcentration potential of the compounds was estimated from available data or from calculations of quantitative structure-activity relationships (QSAR). Investigators at the National Fisheries Research Center-Great Lakes also measured the acute toxicity (48-h EC50's) of 35 representative compounds to Daphnia pulex and compared the results with acute toxicity values generated by QSAR. The QSAR-derived toxicities for several chemicals underestimated the actual acute toxicity by one or more orders of magnitude. A multiple regression of log EC50 on log water solubility and molecular volume proved to be a useful predictive model. Additional models providing insight into toxicity incorporate solvatochromic parameters that measure dipolarity/polarizability, hydrogen bond acceptor basicity, and hydrogen bond donor acidity of the solute (toxicant).
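The predictive model mentioned above, a multiple regression of log EC50 on log water solubility and molecular volume, can be sketched as an ordinary least-squares fit; the data values and recovered coefficients below are hypothetical, not the study's measurements.

```python
# Sketch: QSAR-style multiple regression of log EC50 on two molecular descriptors (assumed data).
import numpy as np

rng = np.random.default_rng(4)
n = 35
log_solubility = rng.normal(-2.0, 1.0, n)        # log10 water solubility
mol_volume = rng.normal(150.0, 40.0, n)          # molecular volume (arbitrary units)
log_ec50 = 0.8 * log_solubility - 0.005 * mol_volume + rng.normal(0, 0.3, n)

X = np.column_stack([np.ones(n), log_solubility, mol_volume])
coef, *_ = np.linalg.lstsq(X, log_ec50, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((log_ec50 - pred) ** 2) / np.sum((log_ec50 - log_ec50.mean()) ** 2)
print("intercept and slopes:", coef.round(4), "R^2:", round(r2, 3))
```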
Peterson, M.D.; Mueller, C.S.
2011-01-01
The USGS National Seismic Hazard Maps are updated about every six years by incorporating newly vetted science on earthquakes and ground motions. The 2008 hazard maps for the central and eastern United States region (CEUS) were updated by using revised New Madrid and Charleston source models, an updated seismicity catalog and an estimate of magnitude uncertainties, a distribution of maximum magnitudes, and several new ground-motion prediction equations. The new models resulted in significant ground-motion changes at 5 Hz and 1 Hz spectral acceleration with 5% damping compared to the 2002 version of the hazard maps. The 2008 maps have now been incorporated into the 2009 NEHRP Recommended Provisions, the 2010 ASCE-7 Standard, and the 2012 International Building Code. The USGS is now planning the next update of the seismic hazard maps, which will be provided to the code committees in December 2013. Science issues that will be considered for introduction into the CEUS maps include: 1) updated recurrence models for New Madrid sources, including new geodetic models and magnitude estimates; 2) new earthquake sources and techniques considered in the 2010 model developed by the nuclear industry; 3) new NGA-East ground-motion models (currently under development); and 4) updated earthquake catalogs. We will hold a regional workshop in late 2011 or early 2012 to discuss these and other issues that will affect the seismic hazard evaluation in the CEUS.
Using a 3D CAD plant model to simplify process hazard reviews
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tolpa, G.
A Hazard and Operability (HAZOP) review is a formal predictive procedure used to identify potential hazard and operability problems associated with certain processes and facilities. The HAZOP procedure takes place several times during the life cycle of the facility. Replacing plastic models, layout drawings, and detail drawings with a 3D CAD electronic model provides access to process safety information and a detailed level of plant topology that approaches the visualization capability of the imagination. This paper describes the process used for adding a 3D CAD model to flowsheets and proven computer programs for the conduct of hazard and operability reviews. Using flowsheets and study nodes as a road map for the review, the need for layout and other detail drawings is all but eliminated. Using the 3D CAD model again for a post-P&ID HAZOP supports conformance to layout and safety requirements, provides superior visualization of the plant configuration, and preserves the owner's equity in the design. The responses from the review teams are overwhelmingly in favor of this type of review over a review that uses only drawings. Over the long term, the plant model serves more than just process hazards analysis. Ongoing use of the model can satisfy the required access to process safety information, OSHA documentation, and other legal requirements. In this paper, extensive instructions address the logic for the process hazards analysis and the preparation required to assist anyone who wishes to add the use of a 3D model to their review.
Two models for evaluating landslide hazards
Davis, J.C.; Chung, C.-J.; Ohlmacher, G.C.
2006-01-01
Two alternative procedures for estimating landslide hazards were evaluated using data on topographic digital elevation models (DEMs) and bedrock lithologies in an area adjacent to the Missouri River in Atchison County, Kansas, USA. The two procedures are based on the likelihood ratio model but utilize different assumptions. The empirical likelihood ratio model is based on non-parametric empirical univariate frequency distribution functions under an assumption of conditional independence, while the multivariate logistic discriminant model assumes that likelihood ratios can be expressed in terms of logistic functions. The relative hazards of occurrence of landslides were estimated by an empirical likelihood ratio model and by multivariate logistic discriminant analysis. Predictor variables consisted of grids containing topographic elevations, slope angles, and slope aspects calculated from a 30-m DEM. An integer grid of coded bedrock lithologies taken from digitized geologic maps was also used as a predictor variable. Both statistical models yield relative estimates in the form of the proportion of total map area predicted to already contain or to be the site of future landslides. The stabilities of estimates were checked by cross-validation of results from random subsamples, using each of the two procedures. Cell-by-cell comparisons of hazard maps made by the two models show that the two sets of estimates are virtually identical. This suggests that the empirical likelihood ratio and the logistic discriminant analysis models are robust with respect to the conditional independence assumption and the logistic function assumption, respectively, and that either model can be used successfully to evaluate landslide hazards. © 2006.
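A minimal sketch of the empirical likelihood ratio idea for a single predictor (slope angle): the likelihood ratio is the ratio of the predictor's frequency distribution in landslide cells to that in non-landslide cells, and multiple predictors would be combined by multiplying their ratios under conditional independence. The raster values below are synthetic stand-ins for DEM-derived grids.

```python
# Sketch: per-cell relative landslide hazard from an empirical likelihood ratio (assumed data).
import numpy as np

rng = np.random.default_rng(5)
n_cells = 20000
slope = rng.uniform(0, 40, n_cells)                        # slope angle per DEM cell
landslide = rng.random(n_cells) < 0.02 + 0.004 * slope     # steeper cells fail more often

bins = np.linspace(0, 40, 9)
idx = np.clip(np.digitize(slope, bins) - 1, 0, len(bins) - 2)
f_slide, _ = np.histogram(slope[landslide], bins=bins, density=True)
f_stable, _ = np.histogram(slope[~landslide], bins=bins, density=True)
lr = f_slide / np.maximum(f_stable, 1e-9)                  # empirical likelihood ratio per bin

relative_hazard = lr[idx]                                  # per-cell relative hazard
top10 = relative_hazard >= np.quantile(relative_hazard, 0.90)
print("fraction of landslides falling in the top-10% hazard cells:",
      round(landslide[top10].sum() / landslide.sum(), 3))
```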
Staley, Dennis M.
2014-01-01
Wildfire can significantly alter the hydrologic response of a watershed to the extent that even modest rainstorms can produce dangerous flash floods and debris flows. In this report, empirical models are used to predict the probability and magnitude of debris-flow occurrence in response to a 10-year rainstorm for the 2013 Springs fire in Ventura County, California. Overall, the models predict a relatively high probability (60–80 percent) of debris flow for 9 of the 99 drainage basins in the burn area in response to a 10-year recurrence interval design storm. Predictions of debris-flow volume suggest that debris flows may entrain a significant volume of material, with 28 of the 99 basins identified as having potential debris-flow volumes greater than 10,000 cubic meters. These results of the relative combined hazard analysis suggest there is a moderate likelihood of significant debris-flow hazard within and downstream of the burn area for nearby populations, infrastructure, wildlife, and water resources. Given these findings, we recommend that residents, emergency managers, and public works departments pay close attention to weather forecasts and National Weather Service-issued Debris Flow and Flash Flood Outlooks, Watches, and Warnings, and that residents adhere to any evacuation orders.
NASA Astrophysics Data System (ADS)
Qiu, Sihang; Chen, Bin; Wang, Rongxiao; Zhu, Zhengqiu; Wang, Yuan; Qiu, Xiaogang
2018-04-01
Hazardous gas leak accidents pose a potential threat to human beings. Predicting atmospheric dispersion and estimating its source have become increasingly important in emergency management. Current dispersion prediction and source estimation models cannot satisfy the requirements of emergency management because they do not provide both high efficiency and high accuracy at the same time. In this paper, we develop a fast and accurate dispersion prediction and source estimation method based on an artificial neural network (ANN), particle swarm optimization (PSO) and expectation maximization (EM). The novel method uses a large number of pre-determined scenarios to train the ANN for dispersion prediction, so that the ANN can predict concentration distributions accurately and efficiently. PSO and EM are applied to estimate the source parameters, which effectively accelerates the process of convergence. The method is verified against the Indianapolis field study with an SF6 release source. The results demonstrate the effectiveness of the method.
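The source-estimation step can be sketched with a basic particle swarm search that fits source location and release rate to sensor readings; here a toy Gaussian-plume-like formula stands in for the trained ANN forward model, and all geometry, noise and parameter values are illustrative assumptions.

```python
# Sketch: PSO fitting source location and release rate to sensor concentrations (assumed setup).
import numpy as np

def plume(x, y, src_x, src_y, q, u=3.0, sigma=20.0):
    """Toy steady-state concentration at (x, y) downwind of a source at (src_x, src_y)."""
    dx, dy = x - src_x, y - src_y
    downwind = np.maximum(dx, 1e-3)
    return (q / (u * downwind)) * np.exp(-(dy ** 2) / (2 * (sigma * downwind / 100.0) ** 2))

rng = np.random.default_rng(6)
sensors = rng.uniform(50, 500, size=(30, 2))                # sensor coordinates (m)
true_src = np.array([10.0, 250.0, 5.0])                     # x, y, release rate
obs = plume(sensors[:, 0], sensors[:, 1], *true_src) * (1 + rng.normal(0, 0.05, 30))

def misfit(theta):
    pred = plume(sensors[:, 0], sensors[:, 1], *theta)
    return np.sum((pred - obs) ** 2)

# Basic PSO: inertia plus personal-best and global-best attraction.
n_particles, n_iter = 40, 200
lo, hi = np.array([0.0, 0.0, 0.1]), np.array([50.0, 500.0, 20.0])
pos = rng.uniform(lo, hi, size=(n_particles, 3))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([misfit(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)]
for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([misfit(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("true source:", true_src, "estimated:", gbest.round(2))
```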
Probabilistic Seismic Hazard Assessment for a NPP in the Upper Rhine Graben, France
NASA Astrophysics Data System (ADS)
Clément, Christophe; Chartier, Thomas; Jomard, Hervé; Baize, Stéphane; Scotti, Oona; Cushing, Edward
2015-04-01
The southern part of the Upper Rhine Graben (URG), straddling the border between eastern France and western Germany, presents relatively important seismic activity for an intraplate area. An earthquake of magnitude 5 or greater shakes the URG every 25 years, and in 1356 an earthquake of magnitude greater than 6.5 struck the city of Basel. Several potentially active faults have been identified in the area and documented in the French Active Fault Database (website under construction). These faults are located along the Graben boundaries and also inside the Graben itself, beneath heavily populated areas and critical facilities (including the Fessenheim Nuclear Power Plant). These faults are prone to produce earthquakes of magnitude 6 and above. Published regional models and preliminary geomorphological investigations provided provisional assessments of slip rates for the individual faults (0.1-0.001 mm/a), resulting in recurrence times of 10,000 years or greater for magnitude 6+ earthquakes. Using a fault model, ground motion response spectra are calculated for annual frequencies of exceedance (AFE) ranging from 10^-4 to 10^-8 per year, typical for design basis and probabilistic safety analyses of NPPs. A logic tree is implemented to evaluate uncertainties in the seismic hazard assessment. The choice of ground motion prediction equations (GMPEs) and the range of slip rate uncertainty are the main sources of seismic hazard variability at the NPP site. In fact, the hazard for AFE lower than 10^-4 is mostly controlled by the potentially active nearby Rhine River fault. Compared with areal source zone models, a fault model localizes the hazard around the active faults and changes the shape of the Uniform Hazard Spectrum at the site. Seismic hazard deaggregations are performed to identify the earthquake scenarios (including magnitude, distance and the number of standard deviations from the median ground motion as predicted by GMPEs) that contribute to the exceedance of spectral acceleration at the different AFE levels. These scenarios are finally examined with respect to the seismicity data available in paleoseismic, historic and instrumental catalogues.
Development of a flood-induced health risk prediction model for Africa
NASA Astrophysics Data System (ADS)
Lee, D.; Block, P. J.
2017-12-01
Globally, many floods occur in developing or tropical regions where the impact on public health is substantial, including death and injury, impacts on drinking water, endemic disease, and so on. Although these flood impacts on public health have been investigated, integrated management of floods and flood-induced health risks is technically and institutionally limited. Specifically, while the use of climatic and hydrologic forecasts for disaster management has been highlighted, analogous predictions for forecasting the magnitude and impact of health risks are lacking, as is the infrastructure for health early warning systems, particularly in developing countries. In this study, we develop a flood-induced health risk prediction model for African regions using season-ahead flood predictions with climate drivers and a variety of physical and socio-economic information, such as local hazard, exposure, resilience, and health vulnerability indicators. Skillful prediction of floods and flood-induced health risks can contribute to practical pre- and post-disaster responses at both local and global scales, and may eventually be integrated into multi-hazard early warning systems for informed advance planning and management. This is especially attractive for areas with limited observations and/or little capacity to develop flood-induced health risk warning systems.
Microburst vertical wind estimation from horizontal wind measurements
NASA Technical Reports Server (NTRS)
Vicroy, Dan D.
1994-01-01
The vertical wind or downdraft component of a microburst-generated wind shear can significantly degrade airplane performance. Doppler radar and lidar are two sensor technologies being tested to provide flight crews with early warning of the presence of hazardous wind shear. An inherent limitation of Doppler-based sensors is the inability to measure velocities perpendicular to the line of sight, which results in an underestimate of the total wind shear hazard. One solution to the line-of-sight limitation is to use a vertical wind model to estimate the vertical component from the horizontal wind measurement. The objective of this study was to assess the ability of simple vertical wind models to improve the hazard prediction capability of an airborne Doppler sensor in a realistic microburst environment. Both simulation and flight test measurements were used to test the vertical wind models. The results indicate that in the altitude region of interest (at or below 300 m), the simple vertical wind models improved the hazard estimate. The radar simulation study showed that the magnitude of the performance improvement was altitude dependent. The altitude of maximum performance improvement occurred at about 300 m.
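One simple class of vertical wind model, given here only as a generic illustration and not as the specific model forms evaluated in the study, integrates the measured horizontal divergence under incompressible mass continuity, w(z) = -∫ (du/dx + dv/dy) dz, to estimate the downdraft from horizontal measurements. The divergence profile below is an assumed, idealized microburst outflow.

```python
# Sketch: vertical wind from horizontal divergence via mass continuity (assumed profile).
import numpy as np

z = np.arange(0, 1050, 50.0)                      # altitude grid, m
divergence = 2e-2 * np.exp(-z / 400.0)            # hypothetical outflow divergence, 1/s

# Integrate dw/dz = -divergence upward from w = 0 at the surface (trapezoidal rule).
w = -np.concatenate([[0.0], np.cumsum(0.5 * (divergence[1:] + divergence[:-1]) * np.diff(z))])
print("estimated vertical wind at 300 m (negative = downdraft): %.1f m/s"
      % w[np.searchsorted(z, 300)])
```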
Wake Vortex Prediction Models for Decay and Transport Within Stratified Environments
NASA Astrophysics Data System (ADS)
Switzer, George F.; Proctor, Fred H.
2002-01-01
This paper proposes two simple models to predict vortex transport and decay. The models are determined empirically from the results of three-dimensional large eddy simulations, and are applicable to wake vortices out of ground effect and not subjected to environmental winds. The results from the large eddy simulations assume a range of ambient turbulence and stratification levels. The models and the results from the large eddy simulations support the hypothesis that the decay of the vortex hazard is decoupled from its change in descent rate.
Voulgaris, George; Kumar, Nirnimesh; Warner, John C.; Leatherman, Stephen; Fletemeyer, John
2011-01-01
Rip currents constitute one of the most common hazards in the nearshore, threatening the lives of the unaware public making recreational use of the coastal zone. Society responds to this danger through a number of measures that include: (a) the deployment of trained lifeguards; (b) public education related to the hidden hazards of the nearshore; and (c) the establishment of warning systems.
Huang, Steven Y; Odisio, Bruno C; Sabir, Sharjeel H; Ensor, Joe E; Niekamp, Andrew S; Huynh, Tam T; Kroll, Michael; Gupta, Sanjay
2017-07-01
Our purpose was to develop a predictive model for short-term survival (i.e. <6 months) following inferior vena cava filter placement in patients with venous thromboembolism (VTE) and solid malignancy. Clinical and laboratory parameters were retrospectively reviewed for patients with solid malignancy who received a filter between January 2009 and December 2011 at a tertiary care cancer center. Multivariate Cox proportional hazards modeling was used to assess variables associated with 6 month survival following filter placement in patients with VTE and solid malignancy. Significant variables were used to generate a predictive model. 397 patients with solid malignancy received a filter during the study period. Three variables were associated with 6 month survival: (1) serum albumin [hazard ratio (HR) 0.496, P < 0.0001], (2) recent or planned surgery (<30 days) (HR 0.409, P < 0.0001), (3) TNM staging (stage 1 or 2 vs. stage 4, HR 0.177, P = 0.0001; stage 3 vs. stage 4, HR 0.367, P = 0.0002). These variables were used to develop a predictive model to estimate 6 month survival with an area under the receiver operating characteristic curve of 0.815, sensitivity of 0.782, and specificity of 0.715. Six month survival in patients with VTE and solid malignancy requiring filter placement can be predicted from three patient variables. Our predictive model could be used to help physicians decide whether a permanent or retrievable filter may be more appropriate as well as to assess the risks and benefits for filter retrieval within the context of survival longevity in patients with cancer.
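Turning a small set of predictors into a short-term-survival score and checking its discrimination, as reported above, can be sketched as follows; the score weights, cut-point, and data are hypothetical, not the fitted study model.

```python
# Sketch: discrimination of a three-variable short-term-survival score (assumed weights and data).
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(7)
n = 400
albumin = rng.normal(3.5, 0.6, n)                 # serum albumin, g/dL
recent_surgery = rng.integers(0, 2, n)            # recent or planned surgery (<30 days)
stage4 = rng.integers(0, 2, n)                    # stage 4 vs. lower stage
score = -0.7 * albumin - 0.9 * recent_surgery + 1.7 * stage4     # higher = higher risk
died_6mo = rng.random(n) < 1 / (1 + np.exp(-(score + 1.5)))      # simulated 6-month deaths

auc = roc_auc_score(died_6mo, score)
pred = score > np.quantile(score, 0.5)            # illustrative cut-point only
tn, fp, fn, tp = confusion_matrix(died_6mo, pred).ravel()
print("AUC %.3f  sensitivity %.2f  specificity %.2f" % (auc, tp / (tp + fn), tn / (tn + fp)))
```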
Crundall, David; Kroll, Victoria
2018-05-18
Can hazard perception testing be useful for the emergency services? Previous research has found emergency response drivers (ERDs) to perform better than controls; however, these studies used clips of normal driving. In contrast, the current study filmed footage from a fire appliance on blue-light training runs through Nottinghamshire, and endeavoured to discriminate between different groups of ERDs based on experience and collision risk. Thirty clips were selected to create two variants of the hazard perception test: a traditional push-button test requiring speeded responses to hazards, and a prediction test that occludes at hazard onset and provides four possible outcomes for participants to choose between. Three groups of fire-appliance drivers (novices, low-risk experienced and high-risk experienced), and age-matched controls undertook both tests. The hazard perception test only discriminated between controls and all fire-appliance drivers, whereas the hazard prediction test was more sensitive, discriminating between high- and low-risk experienced fire-appliance drivers. Eye movement analyses suggest that the low-risk drivers were better at prioritising the hazardous precursors, leading to better predictive accuracy. These results pave the way for future assessment and training tools to supplement emergency response driver training, while supporting the growing literature that identifies hazard prediction as a more robust measure of driver safety than traditional hazard perception tests. Copyright © 2018 Elsevier Ltd. All rights reserved.
Modeling Repeatable Events Using Discrete-Time Data: Predicting Marital Dissolution
ERIC Educational Resources Information Center
Teachman, Jay
2011-01-01
I join two methodologies by illustrating the application of multilevel modeling principles to hazard-rate models with an emphasis on procedures for discrete-time data that contain repeatable events. I demonstrate this application using data taken from the 1995 National Survey of Family Growth (NSFG) to ascertain the relationship between multiple…
Assessing Surface Fuel Hazard in Coastal Conifer Forests through the Use of LiDAR Remote Sensing
NASA Astrophysics Data System (ADS)
Koulas, Christos
The research problem that this thesis seeks to examine is a method of predicting conventional fire hazards using data drawn from specific regions, namely the Sooke and Goldstream watershed regions in coastal British Columbia. This thesis investigates whether LiDAR data can be used to describe conventional forest stand fire hazard classes. Three objectives guided this thesis: to discuss the variables associated with fire hazard, specifically the distribution and makeup of fuel; to examine the relationship between derived LiDAR biometrics and forest attributes related to hazard assessment factors defined by the Capitol Regional District (CRD); and to assess the viability of the LiDAR biometric decision tree in the CRD based on current frameworks for use. The research method uses quantitative datasets to assess the optimal generalization of these types of fire hazard data through discriminant analysis. Findings illustrate significant LiDAR-derived data limitations, and reflect the literature in that flawed field application of data modelling techniques has led to a disconnect between the ways in which fire hazard models have been intended to be used by scholars and the ways in which they are used by those tasked with prevention of forest fires. It can be concluded that a significant trade-off exists between computational requirements for wildfire simulation models and the algorithms commonly used by field teams to apply these models with remote sensing data, and that CRD forest management practices would need to change to incorporate a decision tree model in order to decrease risk.
Albuminuria and Rapid Loss of GFR and Risk of New Hip and Pelvic Fractures
Gao, Peggy; Clase, Catherine M.; Mente, Andrew; Mann, Johannes F.E.; Sleight, Peter; Yusuf, Salim; Teo, Koon K.
2013-01-01
Summary Background and objectives The microvascular circulation plays an important role in bone health. This study examines whether albuminuria, a marker of renal microvascular disease, is associated with incident hip and pelvic fractures. Design, setting, participants, & measurements This study reanalyzed data from the Ongoing Telmisartan Alone and in combination with Ramipril Global End Point Trial/Telmisartan Randomized Assessment Study in Angiotensin-Converting Enzyme Intolerant Subjects with Cardiovascular Disease trials, which examined the impact of renin angiotensin system blockade on cardiovascular outcomes (n=28,601). Albuminuria was defined as an albumin-to-creatinine ratio≥30 mg/g (n=4597). Cox proportional hazards models were used to determine the association of albuminuria with fracture risk adjusted for known risk factors for fractures, estimated GFR, and rapid decline in estimated GFR (≥5%/yr). Results There were 276 hip and pelvic fractures during a mean of 4.6 years of follow-up. Participants with baseline albuminuria had a significantly increased risk of fracture compared with participants without albuminuria (unadjusted hazard ratio=1.62 [1.22, 2.15], P<0.001; adjusted hazard ratio=1.36 [1.01, 1.84], P=0.05). A dose-dependent relationship was observed, with macroalbuminuria having a large fracture risk (unadjusted hazard ratio=2.01 [1.21, 3.35], P=0.007; adjusted hazard ratio=1.71 [1.007, 2.91], P=0.05) and microalbuminuria associating with borderline or no statistical significance (unadjusted hazard ratio=1.52 [1.10, 2.09], P=0.01; adjusted hazard ratio=1.28 [0.92, 1.78], P=0.15). Estimated GFR was not a predictor of fracture in any model, but rapid loss of estimated GFR over the first 2 years of follow-up predicted subsequent fracture (adjusted hazard ratio=1.47 [1.05, 2.04], P=0.02). Conclusions Albuminuria, especially macroalbuminuria, and rapid decline of estimated GFR predict hip and pelvic fractures. These findings support a theoretical model of a relationship between underlying causes of microalbuminuria and bone disease. PMID:23184565
NASA Astrophysics Data System (ADS)
Roulleau, Louise; Bétard, François; Carlier, Benoît; Lissak, Candide; Fort, Monique
2016-04-01
Landslides are common natural hazards in the Southern French Alps, where they may affect human lives and cause severe damage to infrastructure. As part of the SAMCO research project dedicated to risk evaluation in mountain areas, this study focuses on the Guil river catchment (317 km2), Queyras, to assess landslide hazard, which has been poorly studied until now. In that area, landslides are mainly occasional, low-amplitude phenomena, with limited direct impacts when compared to other hazards such as floods or snow avalanches. However, when interacting with floods during extreme rainfall events, landslides may have indirect consequences of greater importance because of strong hillslope-channel connectivity along the Guil River and its tributaries (i.e. positive feedbacks). This specific morphodynamic functioning reinforces the need for a better understanding of landslide hazards and their spatial distribution at the catchment scale to protect the local population from disasters with a multi-hazard origin. The aim of this study is to produce a landslide susceptibility map at 1:50 000 scale as a first step towards a global estimation of landslide hazard and risk. The three main methodologies used for assessing landslide susceptibility are qualitative (i.e. expert opinion), deterministic (i.e. physics-based models) and statistical methods (i.e. probabilistic models). Due to the rapid development of geographical information systems (GIS) during the last two decades, statistical methods are today widely used because they offer greater objectivity and reproducibility at large scales. Among them, multivariate analyses are considered the most robust techniques, especially the logistic regression method commonly used in landslide susceptibility mapping. However, this method, like others, is strongly dependent on the accuracy of the input data if significant errors in the final results are to be avoided. In particular, a complete and accurate landslide inventory is required before the modelling. The methodology used in our study includes five main steps: (i) a landslide inventory was compiled through extraction of landslide occurrences from existing national databases (BDMvt, RTM), photointerpretation of aerial photographs and extensive field surveys; (ii) the main predisposing factors were identified and implemented as digital layers in a GIS together with the landslide inventory map, thus constituting the predictive variables to introduce into the model; (iii) a logistic regression model was applied to analyze the spatial and mathematical relationships between the response variable (i.e. absence/presence of landslides) and the set of predictive variables (i.e. predisposing factors), after a selection procedure based on statistical tests (χ2-test and Cramer's V coefficient); (iv) an evaluation of the model performance and quality of the results was conducted using a validation strategy based on ROC curve and AUC analyses; (v) a final susceptibility map in four classes was proposed using a discretization method based on success/prediction rate curves. The results of the susceptibility modelling were finally interpreted and discussed in the light of what was previously known about landslide occurrence and triggering in the study area. The major influence of the distance-to-streams variable on the model confirms the strong hillslope-channel coupling observed empirically during rainfall-induced landslide events.
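Steps (iii)-(iv) above amount to fitting a logistic regression susceptibility model and validating it with ROC/AUC; the sketch below illustrates that pattern on synthetic stand-ins for the real GIS layers (the predictor names and data are assumptions, not the study's layers).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
# Synthetic per-pixel predisposing factors (stand-ins for the real GIS layers)
slope = rng.uniform(0, 45, n)                 # degrees
dist_to_streams = rng.uniform(0, 500, n)      # metres
# Synthetic "truth": landslides more likely on steep slopes close to streams
p = 1 / (1 + np.exp(-(0.08 * slope - 0.01 * dist_to_streams - 1.5)))
y = rng.binomial(1, p)                        # 1 = landslide present, 0 = absent

X = np.column_stack([slope, dist_to_streams])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
susceptibility = model.predict_proba(X_te)[:, 1]          # probability of landslide presence
print("AUC:", roc_auc_score(y_te, susceptibility))        # validation step (iv)
```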
Staley, Dennis M.; Negri, Jacquelyn A.; Kean, Jason W.; Laber, Jayme L.; Tillery, Anne C.; Youberg, Ann M.
2016-06-30
Wildfire can significantly alter the hydrologic response of a watershed to the extent that even modest rainstorms can generate dangerous flash floods and debris flows. To reduce public exposure to hazard, the U.S. Geological Survey produces post-fire debris-flow hazard assessments for select fires in the western United States. We use publicly available geospatial data describing basin morphology, burn severity, soil properties, and rainfall characteristics to estimate the statistical likelihood that debris flows will occur in response to a storm of a given rainfall intensity. Using an empirical database and refined geospatial analysis methods, we defined new equations for the prediction of debris-flow likelihood using logistic regression methods. We showed that the new logistic regression model outperformed previous models used to predict debris-flow likelihood.
A method for mapping flood hazard along roads.
Kalantari, Zahra; Nickman, Alireza; Lyon, Steve W; Olofsson, Bo; Folkeson, Lennart
2014-01-15
A method was developed for estimating and mapping flood hazard probability along roads using road and catchment characteristics as physical catchment descriptors (PCDs). The method uses a Geographic Information System (GIS) to derive candidate PCDs and then identifies those PCDs that significantly predict road flooding using a statistical modelling approach. The method thus allows flood hazards to be estimated and also provides insights into the relative roles of landscape characteristics in determining road-related flood hazards. The method was applied, as a case study to demonstrate its utility, to an area in western Sweden where severe road flooding had occurred during an intense rain event. The results suggest that for this case study area three categories of PCDs are useful for prediction of critical spots prone to flooding along roads: i) topography, ii) soil type, and iii) land use. The main drivers among the PCDs considered were a topographical wetness index, road density in the catchment, soil properties in the catchment (mainly the amount of gravel substrate) and local channel slope at the site of a road-stream intersection. These can be proposed as strong indicators for predicting the flood probability in ungauged river basins in this region, but some care is needed in generalising the case study results, as other potential factors are also likely to influence the flood hazard probability. Overall, the method proposed represents a straightforward and consistent way to estimate flooding hazards to inform both the planning of future roadways and the maintenance of existing roadways. Copyright © 2013 Elsevier Ltd. All rights reserved.
Candido Dos Reis, Francisco J; Wishart, Gordon C; Dicks, Ed M; Greenberg, David; Rashbass, Jem; Schmidt, Marjanka K; van den Broek, Alexandra J; Ellis, Ian O; Green, Andrew; Rakha, Emad; Maishman, Tom; Eccles, Diana M; Pharoah, Paul D P
2017-05-22
PREDICT is a breast cancer prognostic and treatment benefit model implemented online. The overall fit of the model has been good in multiple independent case series, but PREDICT has been shown to underestimate breast cancer specific mortality in women diagnosed under the age of 40. Another limitation is the use of discrete categories for tumour size and node status resulting in 'step' changes in risk estimates on moving between categories. We have refitted the PREDICT prognostic model using the original cohort of cases from East Anglia with updated survival time in order to take into account age at diagnosis and to smooth out the survival function for tumour size and node status. Multivariable Cox regression models were used to fit separate models for ER negative and ER positive disease. Continuous variables were fitted using fractional polynomials and a smoothed baseline hazard was obtained by regressing the baseline cumulative hazard for each patient against time using fractional polynomials. The fit of the prognostic models was then tested in three independent data sets that had also been used to validate the original version of PREDICT. In the model fitting data, after adjusting for other prognostic variables, there is an increase in risk of breast cancer specific mortality in younger and older patients with ER positive disease, with a substantial increase in risk for women diagnosed before the age of 35. In ER negative disease the risk increases slightly with age. The association between breast cancer specific mortality and both tumour size and number of positive nodes was non-linear, with a more marked increase in risk with increasing size and increasing number of nodes in ER positive disease. The overall calibration and discrimination of the new version of PREDICT (v2) was good and comparable to that of the previous version in both model development and validation data sets. However, the calibration of v2 improved over v1 in patients diagnosed under the age of 40. PREDICT v2 is an improved prognostication and treatment benefit model compared with v1. The online version should continue to aid clinical decision making in women with early breast cancer.
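The baseline-hazard smoothing step described above regresses the baseline cumulative hazard against fractional polynomials of time; the fragment below sketches that idea with a degree-2 fractional polynomial (powers 0 and 0.5). The numbers and the choice of powers are illustrative assumptions, not those of PREDICT v2.

```python
import numpy as np

t = np.array([0.5, 1, 2, 3, 5, 7, 10])                       # years since diagnosis (assumed)
H0 = np.array([0.01, 0.02, 0.045, 0.07, 0.12, 0.16, 0.22])   # baseline cumulative hazard (assumed)

# Degree-2 fractional polynomial with powers (0, 0.5): log H0(t) = b0 + b1*log(t) + b2*sqrt(t)
Xfp = np.column_stack([np.ones_like(t), np.log(t), np.sqrt(t)])
beta, *_ = np.linalg.lstsq(Xfp, np.log(H0), rcond=None)

def smoothed_H0(time):
    """Smoothed baseline cumulative hazard at arbitrary times."""
    x = np.column_stack([np.ones_like(time), np.log(time), np.sqrt(time)])
    return np.exp(x @ beta)

print(smoothed_H0(np.array([4.0, 6.0])))   # cumulative hazard at intermediate times
```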
iFLOOD: A Real Time Flood Forecast System for Total Water Modeling in the National Capital Region
NASA Astrophysics Data System (ADS)
Sumi, S. J.; Ferreira, C.
2017-12-01
Extreme flood events are the costliest natural hazards impacting the US and frequently cause extensive damage to infrastructure, disruption to the economy and loss of lives. In 2016, Hurricane Matthew brought severe damage to South Carolina and demonstrated the importance of accurate flood hazard predictions, which require the integration of riverine and coastal model forecasts for total water prediction in coastal and tidal areas. The National Weather Service (NWS) and the National Ocean Service (NOS) provide flood forecasts for almost the entire US, yet there are still service-gap areas in tidal regions where no official flood forecast is available. The National Capital Region is vulnerable to multi-flood hazards, including high flows from annual inland precipitation events and surge-driven coastal inundation along the tidal Potomac River. Predicting flood levels in such tidal areas of the river-estuarine zone is extremely challenging. The main objective of this study is to develop the next generation of flood forecast systems capable of providing accurate and timely information to support emergency management and response in areas impacted by multi-flood hazards. This forecast system is capable of simulating flood levels in the Potomac and Anacostia Rivers, incorporating the effects of riverine flooding from the upstream basins, urban storm water and tidal oscillations from the Chesapeake Bay. Flood forecast models developed so far have used riverine data to simulate water levels for the Potomac River. Therefore, the idea is to use forecasted storm surge data from a coastal model as the boundary condition of this system. The final output of this validated model will capture the water behavior in the river-estuary transition zone far better than a model driven by riverine data only. The challenge for this iFLOOD forecast system is to understand the complex dynamics of multi-flood hazards caused by storm surges, riverine flow, tidal oscillation and urban storm water. Automated system simulations will help to develop a seamless integration with the boundary systems in the service-gap area, with new insights into our scientific understanding of such complex systems. A visualization system is being developed to allow stakeholders and the community to have access to the flood forecasts for their region with sufficient lead time.
Identification and delineation of flood hazard areas using high-accuracy DEM data
NASA Astrophysics Data System (ADS)
Riadi, B.; Barus, B.; Widiatmaka; Yanuar, M. J. P.; Pramudya, B.
2018-05-01
Flood incidents that often occur in Karawang regency need to be mitigated. Expectations rest on technologies that can predict, anticipate and reduce disaster risks. Flood modeling techniques using Digital Elevation Model (DEM) data can be applied in mitigation activities. High-accuracy DEM data used in modeling will result in better flood models. Processing high-accuracy DEM data yields information about surface morphology which can be used to identify indications of flood hazard areas. The purpose of this study was to identify and delineate flood hazard areas by identifying wetland areas using DEM data and Landsat-8 images. TerraSAR-X high-resolution data are used to detect wetlands within the landscape, while land cover is identified from Landsat image data. The Topographic Wetness Index (TWI) method is used to detect and identify wetland areas from the DEM data, while land cover is analyzed using the Tasseled Cap Transformation (TCT) method. TWI modeling yields information about land with flood potential. Overlaying the TWI map with the land cover map shows that, in Karawang regency, the areas most vulnerable to flooding are rice fields. The spatial accuracy of the flood hazard area in this study was 87%.
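The core quantity in the workflow above is the Topographic Wetness Index, TWI = ln(a / tan β), computed from a DEM-derived slope and a flow-accumulation (specific catchment area) grid. A rough numpy sketch follows; the synthetic DEM, cell size and the placeholder flow-accumulation array are assumptions, and a real application would use a D8/D-infinity routine for the accumulation.

```python
import numpy as np

dem = np.random.default_rng(1).normal(50, 5, (100, 100))   # synthetic elevations (m)
cell = 30.0                                                 # cell size (m), assumed

dzdy, dzdx = np.gradient(dem, cell)
slope = np.arctan(np.hypot(dzdx, dzdy))                     # slope beta in radians

flow_acc = np.full(dem.shape, 500.0)     # placeholder upslope area in cells (use a real D8/D-inf result)
spec_catchment = flow_acc * cell         # specific catchment area a (m)

twi = np.log(spec_catchment / np.tan(np.clip(slope, 1e-3, None)))
# High TWI values flag wet, flood-prone terrain that can be overlaid with land cover.
print(twi.mean(), twi.max())
```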
NASA Astrophysics Data System (ADS)
Zahmatkesh, Zahra; Karamouz, Mohammad; Nazif, Sara
2015-09-01
Simulation of rainfall-runoff process in urban areas is of great importance considering the consequences and damages of extreme runoff events and floods. The first issue in flood hazard analysis is rainfall simulation. Large scale climate signals have been proved to be effective in rainfall simulation and prediction. In this study, an integrated scheme is developed for rainfall-runoff modeling considering different sources of uncertainty. This scheme includes three main steps of rainfall forecasting, rainfall-runoff simulation and future runoff prediction. In the first step, data driven models are developed and used to forecast rainfall using large scale climate signals as rainfall predictors. Due to high effect of different sources of uncertainty on the output of hydrologic models, in the second step uncertainty associated with input data, model parameters and model structure is incorporated in rainfall-runoff modeling and simulation. Three rainfall-runoff simulation models are developed for consideration of model conceptual (structural) uncertainty in real time runoff forecasting. To analyze the uncertainty of the model structure, streamflows generated by alternative rainfall-runoff models are combined, through developing a weighting method based on K-means clustering. Model parameters and input uncertainty are investigated using an adaptive Markov Chain Monte Carlo method. Finally, calibrated rainfall-runoff models are driven using the forecasted rainfall to predict future runoff for the watershed. The proposed scheme is employed in the case study of the Bronx River watershed, New York City. Results of uncertainty analysis of rainfall-runoff modeling reveal that simultaneous estimation of model parameters and input uncertainty significantly changes the probability distribution of the model parameters. It is also observed that by combining the outputs of the hydrological models using the proposed clustering scheme, the accuracy of runoff simulation in the watershed is remarkably improved up to 50% in comparison to the simulations by the individual models. Results indicate that the developed methodology not only provides reliable tools for rainfall and runoff modeling, but also adequate time for incorporating required mitigation measures in dealing with potentially extreme runoff events and flood hazard. Results of this study can be used in identification of the main factors affecting flood hazard analysis.
Lin, Ting; Harmsen, Stephen C.; Baker, Jack W.; Luco, Nicolas
2013-01-01
The conditional spectrum (CS) is a target spectrum (with conditional mean and conditional standard deviation) that links seismic hazard information with ground-motion selection for nonlinear dynamic analysis. Probabilistic seismic hazard analysis (PSHA) estimates the ground-motion hazard by incorporating the aleatory uncertainties in all earthquake scenarios and resulting ground motions, as well as the epistemic uncertainties in ground-motion prediction models (GMPMs) and seismic source models. Typical CS calculations to date are produced for a single earthquake scenario using a single GMPM, but more precise use requires consideration of at least multiple causal earthquakes and multiple GMPMs that are often considered in a PSHA computation. This paper presents the mathematics underlying these more precise CS calculations. Despite requiring more effort to compute than approximate calculations using a single causal earthquake and GMPM, the proposed approach produces an exact output that has a theoretical basis. To demonstrate the results of this approach and compare the exact and approximate calculations, several example calculations are performed for real sites in the western United States. The results also provide some insights regarding the circumstances under which approximate results are likely to closely match more exact results. To facilitate these more precise calculations for real applications, the exact CS calculations can now be performed for real sites in the United States using new deaggregation features in the U.S. Geological Survey hazard mapping tools. Details regarding this implementation are discussed in this paper.
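The core of the CS calculation is the conditional mean (and standard deviation) of lnSa at each period given the spectral value at the conditioning period, weighted over the causal earthquakes and GMPMs from deaggregation. The sketch below shows only the weighted conditional mean with made-up numbers; the exact CS described in the paper also propagates the conditional standard deviation, including the variance between scenario/GMPM combinations.

```python
import numpy as np

periods = np.array([0.1, 0.2, 0.5, 1.0, 2.0])    # periods Ti of interest
eps_star = 1.8                                   # epsilon at the conditioning period T* (assumed)

# One row per (causal earthquake, GMPM) combination: mean lnSa(Ti), sigma lnSa(Ti),
# correlation rho(Ti, T*), and the deaggregation/logic-tree weight of that combination.
mu    = np.array([[-1.0, -1.2, -1.8, -2.4, -3.0],
                  [-0.9, -1.1, -1.7, -2.3, -2.9]])
sigma = np.array([[0.60, 0.62, 0.65, 0.68, 0.70],
                  [0.60, 0.62, 0.65, 0.68, 0.70]])
rho   = np.array([[0.70, 0.80, 0.95, 0.75, 0.55],
                  [0.70, 0.80, 0.95, 0.75, 0.55]])
w     = np.array([0.6, 0.4])                     # weights over combinations, sum to 1

cond_mu_each = mu + rho * eps_star * sigma            # conditional mean lnSa per combination
cs_mean = (w[:, None] * cond_mu_each).sum(axis=0)     # weighted conditional mean spectrum
print(np.exp(cs_mean))                                # conditional mean Sa in g at each period
```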
Chung, Su Jin; Lee, Yoonju; Oh, Jungsu S; Kim, Jae Seung; Lee, Phil Hyu; Sohn, Young H
2018-05-10
The present study aimed to investigate whether the level of presynaptic dopamine neuronal loss predicts future development of wearing-off in de novo Parkinson's disease. This retrospective cohort study included a total of 342 non-demented patients with de novo Parkinson's disease who underwent dopamine transporter positron emission tomography scans at their initial evaluation and received dopaminergic medications for 24 months or longer. Onset of wearing-off was determined based on patients' medical records at their outpatient clinic visits every 3-6 months. Predictive power of dopamine transporter activity in striatal subregions and other clinical factors for the development of wearing-off was evaluated by Cox proportional hazard models. During a median follow-up period of 50.2 ± 18.9 months, 69 patients (20.2%) developed wearing-off. Patients with wearing-off exhibited less dopamine transporter activity in the putamen, particularly the anterior and posterior putamens, compared to those without wearing-off. Multivariate Cox proportional hazard models revealed that dopamine transporter activities of the anterior (hazard ratio 0.556; p = 0.008) and whole putamens (hazard ratio 0.504; p = 0.025) were significant predictors of development of wearing-off. In addition, younger age at onset of Parkinson's disease, lower body weight, and a motor phenotype of postural instability/gait disturbance were also significant predictors for development of wearing-off. The present results provide in vivo evidence to support the hypothesis that presynaptic dopamine neuronal loss, particularly in the anterior putamen, leads to development of wearing-off in Parkinson's disease. Copyright © 2018. Published by Elsevier Ltd.
2006-07-01
Blue --) and NARAC (Red -) for two elevated releases (MvM 3 and MvM 15) considered in the model-to-model study [2]. MvM 3 was a gas release (SF6) ... carried out under stable conditions with a boundary layer height of 100 m and release height of 80 m, while MvM 15 was a particle release carried out ... release scenarios: MvM 3 at 30 and 60 minutes and MvM 15 at 120 and 180 minutes. Each release shows significant NARAC underpredictions with ...
Predicting coastal cliff erosion using a Bayesian probabilistic model
Hapke, Cheryl J.; Plant, Nathaniel G.
2010-01-01
Regional coastal cliff retreat is difficult to model due to the episodic nature of failures and the along-shore variability of retreat events. There is a growing demand, however, for predictive models that can be used to forecast areas vulnerable to coastal erosion hazards. Increasingly, probabilistic models are being employed that require data sets of high temporal density to define the joint probability density function that relates forcing variables (e.g. wave conditions) and initial conditions (e.g. cliff geometry) to erosion events. In this study we use a multi-parameter Bayesian network to investigate correlations between key variables that control and influence variations in cliff retreat processes. The network uses Bayesian statistical methods to estimate event probabilities using existing observations. Within this framework, we forecast the spatial distribution of cliff retreat along two stretches of cliffed coast in Southern California. The input parameters are the height and slope of the cliff, a descriptor of material strength based on the dominant cliff-forming lithology, and the long-term cliff erosion rate that represents prior behavior. The model is forced using predicted wave impact hours. Results demonstrate that the Bayesian approach is well-suited to the forward modeling of coastal cliff retreat, with the correct outcomes forecast in 70–90% of the modeled transects. The model also performs well in identifying specific locations of high cliff erosion, thus providing a foundation for hazard mapping. This approach can be employed to predict cliff erosion at time-scales ranging from storm events to the impacts of sea-level rise at the century-scale.
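A Bayesian-network forecast of this kind ultimately reduces to conditional probabilities of retreat given discretized predictor states estimated from past observations. The toy sketch below illustrates that idea with two binary predictors and synthetic data; the variables and probabilities are assumptions, not the study's network.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 1000
cliff_height = rng.choice(["low", "high"], n)
wave_hours = rng.choice(["few", "many"], n)
# Synthetic outcomes: retreat more likely for low cliffs hit by many wave-impact hours
p = 0.1 + 0.3 * (cliff_height == "low") + 0.3 * (wave_hours == "many")
retreat = rng.binomial(1, p)

obs = pd.DataFrame({"height": cliff_height, "waves": wave_hours, "retreat": retreat})
cpt = obs.groupby(["height", "waves"])["retreat"].mean()   # conditional probability table
print(cpt)   # a transect's forecast is the table entry matching its bin combination
```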
Rejali, Mehri; Mansourian, Marjan; Babaei, Zohre; Eshrati, Babak
2017-01-01
Background: In this study, we assessed factors associated with low birth weight (LBW) and used decision curve analysis (DCA) to define a scale to predict the probability of having a LBW newborn. Methods: This hospital-based case–control study was conducted in Arak Hospital in Iran. The study included 470 mothers with LBW neonates and 470 mothers with normal neonates. Information was gathered by interviewing mothers using a structured questionnaire and from hospital records. The estimated probabilities of detecting LBW were calculated using logistic regression, and DCA was used to quantify the clinical consequences and validate the model. Results: Factors significantly associated with LBW were premature membrane rupture (odds ratio [OR] = 3.18 [1.882–5.384]), previous LBW infants (OR = 2.99 [1.510–5.932]), premature labour pain (OR = 2.70 [1.659–4.415]), hypertension in pregnancy (OR = 2.39 [1.429–4.019]), bleeding in the last trimester of pregnancy (OR = 2.58 [1.018–6.583]) and maternal age >30 (OR = 2.17 [1.350–3.498]). With DCA, the prediction model built on these 15 variables, with a net benefit (NB) of 0.3110, was the best predictor, having the highest NB. The NB has a simple clinical interpretation: using the model is equivalent to a strategy that identified the equivalent of 31.1 LBW cases per 100 cases with no unnecessary identifications. Conclusions: It is possible to predict LBW using a prediction model based on significant risk factors associated with LBW. Most of the risk factors for LBW are preventable, and mothers can be referred during early pregnancy to a centre equipped with facilities for the management of high-risk pregnancy and LBW infants. PMID:28928911
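Decision curve analysis rests on the net benefit, NB(pt) = TP/n - (FP/n)·pt/(1-pt), evaluated across threshold probabilities pt. The sketch below computes it for synthetic predictions; the data are made up, and only the reported NB of 0.311 comes from the abstract.

```python
import numpy as np

def net_benefit(y, p_hat, threshold):
    """Net benefit of treating everyone with predicted probability >= threshold."""
    treat = p_hat >= threshold
    tp = np.sum(treat & (y == 1))
    fp = np.sum(treat & (y == 0))
    n = len(y)
    return tp / n - fp / n * threshold / (1 - threshold)

rng = np.random.default_rng(3)
y = rng.binomial(1, 0.5, 940)                                 # roughly the 470 + 470 case-control mix
p_hat = np.clip(y * 0.2 + rng.uniform(0, 0.8, 940), 0, 1)     # imperfect synthetic model scores

for pt in (0.1, 0.2, 0.3):
    print(pt, round(net_benefit(y, p_hat, pt), 3))
```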
NASA Astrophysics Data System (ADS)
Li, Deying; Yin, Kunlong; Gao, Huaxi; Liu, Changchun
2009-10-01
Although the Three Gorges Dam project across the Yangtze River in China harnesses a huge source of hydroelectric power and reduces the loss of life and damage caused by floods, it also causes environmental problems, such as geo-hazards, due to the large rise and fluctuation of the water level. In order to prevent and predict geo-hazards, the establishment of a geo-hazard prediction system is necessary. To implement the functions of regional and urban geo-hazard prediction, single geo-hazard prediction, prediction of landslide surge and risk evaluation, the logical layers of the system consist of a data capturing layer, a data manipulation and processing layer, an analysis and application layer, and an information publication layer. Because multi-source spatial data are involved, the transformation and fusion of multi-source data are also examined in the paper. The applicability of the system was demonstrated on the spatial prediction of landslide hazard through GIS spatial analysis, in which the information value method was applied to identify areas susceptible to future landslides on the basis of the historical record of past landslides, terrain parameters, geology, rainfall and anthropogenic activity. The spatial distribution characteristics of landslide hazard in the new town of Badong are discussed in detail. These results can be used for risk evaluation. The system can be implemented as an early-warning and emergency management tool by the relevant authorities of the Three Gorges Reservoir in the future.
Efthimiou, George C; Bartzis, John G; Berbekar, Eva; Hertwig, Denise; Harms, Frank; Leitl, Bernd
2015-06-26
The capability to predict short-term maximum individual exposure is very important for several applications including, for example, deliberate/accidental release of hazardous substances, odour fluctuations or material flammability level exceedance. Recently, authors have proposed a simple approach relating maximum individual exposure to parameters such as the fluctuation intensity and the concentration integral time scale. In the first part of this study (Part I), the methodology was validated against field measurements, which are governed by the natural variability of atmospheric boundary conditions. In Part II of this study, an in-depth validation of the approach is performed using reference data recorded under truly stationary and well documented flow conditions. For this reason, a boundary-layer wind-tunnel experiment was used. The experimental dataset includes 196 time-resolved concentration measurements which detect the dispersion from a continuous point source within an urban model of semi-idealized complexity. The data analysis allowed the improvement of an important model parameter. The model performed very well in predicting the maximum individual exposure, presenting a factor of two of observations equal to 95%. For large time intervals, an exponential correction term has been introduced in the model based on the experimental observations. The new model is capable of predicting all time intervals giving an overall factor of two of observations equal to 100%.
Objective rapid delineation of areas at risk from block-and-ash pyroclastic flows and surges
Widiwijayanti, C.; Voight, B.; Hidayat, D.; Schilling, S.P.
2009-01-01
Assessments of pyroclastic flow (PF) hazards are commonly based on mapping of PF and surge deposits and estimations of inundation limits, and/or computer models of varying degrees of sophistication. In volcanic crises a PF hazard map may be sorely needed, but limited time, exposures, or safety aspects may preclude fieldwork, and insufficient time or baseline data may be available for reliable dynamic simulations. We have developed a statistically constrained simulation model for block-and-ash type PFs to estimate potential areas of inundation by adapting methodology from Iverson et al. (Geol Soc America Bull 110:972-984, 1998) for lahars. The predictive equations for block-and-ash PFs are calibrated with data from several volcanoes and given by A = (0.05 to 0.1) V^(2/3), B = (35 to 40) V^(2/3), where A is cross-sectional area of inundation, B is planimetric area and V is deposit volume. The proportionality coefficients were obtained from regression analyses and comparison of simulations to mapped deposits. The method embeds the predictive equations in a GIS program coupled with DEM topography, using the LAHARZ program of Schilling (1998). Although the method is objective and reproducible, any PF hazard zone so computed should be considered as an approximate guide only, due to uncertainties on the coefficients applicable to individual PFs, the authenticity of DEM details, and the volume of future collapses. The statistical uncertainty of the predictive equations, which imply a factor of two or more in predicting A or B for a specified V, is superposed on the uncertainty of forecasting V for the next PF to descend a particular valley. Multiple inundation zones, produced by simulations using a selected range of volumes, partly accommodate these uncertainties. The resulting maps show graphically that PF inundation potentials are highest nearest volcano sources and along valley thalwegs, and diminish with distance from source and lateral distance from thalweg. The model does not explicitly consider dynamic behavior, which can be important. Ash-cloud surge impact limits must be extended beyond PF hazard zones and we provide several approaches to do this. The method has been used to supply PF and surge hazard maps in two crises: Merapi 2006; and Montserrat 2006-2007. © Springer-Verlag 2008.
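The calibrated scaling relations quoted above can be applied directly: given an assumed collapse volume V, the inundated cross-sectional and planimetric areas follow from A = c1·V^(2/3) and B = c2·V^(2/3). A small sketch with hypothetical volumes:

```python
import numpy as np

volumes = np.array([1e5, 5e5, 1e6, 5e6])     # m^3, hypothetical future PF volumes

for c1, c2 in [(0.05, 35.0), (0.1, 40.0)]:   # lower and upper coefficient bounds from the abstract
    A = c1 * volumes ** (2 / 3)              # cross-sectional inundation area (m^2)
    B = c2 * volumes ** (2 / 3)              # planimetric inundation area (m^2)
    print(c1, c2, A.round(0), B.round(0))
# In LAHARZ-type tools these areas are draped over DEM valleys to draw the hazard zones.
```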
Jones, Andrew S; Taktak, Azzam G F; Helliwell, Timothy R; Fenton, John E; Birchall, Martin A; Husband, David J; Fisher, Anthony C
2006-06-01
The accepted method of modelling and predicting failure/survival, Cox's proportional hazards model, is theoretically inferior to neural network derived models for analysing highly complex systems with large datasets. A blinded comparison of the neural network versus Cox's model in predicting survival was undertaken, utilising data from 873 treated patients with laryngeal cancer. These were divided randomly and equally into a training set and a study set, and the Cox and neural network models were applied in turn. Data were then divided into seven sets of binary covariates and the analysis repeated. Overall survival was not significantly different on Kaplan-Meier plots or with either test model. Although the network produced qualitatively similar results to Cox's model, it was significantly more sensitive to differences in survival curves for age and N stage. We propose that neural networks are capable of prediction in systems involving complex interactions between variables and non-linearity.
Evaluation of seismic hazard at the northwestern part of Egypt
NASA Astrophysics Data System (ADS)
Ezzelarab, M.; Shokry, M. M. F.; Mohamed, A. M. E.; Helal, A. M. A.; Mohamed, Abuoelela A.; El-Hadidy, M. S.
2016-01-01
The objective of this study is to evaluate the seismic hazard in northwestern Egypt using the probabilistic seismic hazard assessment approach. The probabilistic approach was carried out based on a recent data set taking into account the historic seismicity and updated instrumental seismicity. A homogeneous earthquake catalogue was compiled and a seismic source model was proposed. The doubly-truncated exponential model was adopted for calculation of the recurrence parameters. Ground-motion prediction equations that were recently recommended by experts, and that were developed from earthquake data obtained in tectonic environments similar to those in and around the study area, were weighted and used for the assessment of seismic hazard within a logic-tree framework. Considering a grid of 0.2° × 0.2° covering the study area, seismic hazard curves were calculated for every node. Hazard maps at bedrock conditions were produced for peak ground acceleration, in addition to six spectral periods (0.1, 0.2, 0.3, 1.0, 2.0 and 3.0 s), for return periods of 72, 475 and 2475 years. Uniform hazard spectra for two selected rock sites in the cities of Alexandria and Mersa Matruh were provided. Finally, the hazard curves were de-aggregated to determine the sources that contribute most to the hazard level with a 10% probability of exceedance in 50 years at the selected sites.
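A probabilistic seismic hazard curve of the kind described above combines an earthquake recurrence model with a ground-motion prediction equation: the annual rate of exceeding a ground-motion level is the sum over magnitude bins of the bin's occurrence rate times the probability that the predicted motion exceeds that level. The sketch below is a deliberately simplified single-source, single-distance version with made-up parameters, not the Egypt model.

```python
import numpy as np
from scipy.stats import norm

mags = np.arange(4.05, 7.0, 0.1)                  # magnitude bin centres
a, b = 3.0, 1.0                                   # Gutenberg-Richter parameters (assumed)
rate_m = 10 ** (a - b * (mags - 0.05)) - 10 ** (a - b * (mags + 0.05))  # annual rate per bin

R = 30.0                                          # source-to-site distance (km), assumed
def gmpe_ln_pga(m, r):                            # toy GMPE: mean ln(PGA in g) and sigma
    return -3.5 + 0.8 * m - 1.1 * np.log(r + 10.0), 0.6

pga_levels = np.logspace(-2, 0, 50)
lam = np.zeros_like(pga_levels)
for m, nu in zip(mags, rate_m):
    mu, sig = gmpe_ln_pga(m, R)
    lam += nu * norm.sf((np.log(pga_levels) - mu) / sig)   # P(PGA > x | m, R) weighted by rate

# 10% probability of exceedance in 50 years corresponds to an annual rate of ~1/475
pga_475 = np.interp(1 / 475, lam[::-1], pga_levels[::-1])
print(round(pga_475, 3))
```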
The number of chemicals with limited toxicological information for chemical safety decision-making has accelerated alternative model development, which often are evaluated via referencing animal toxicology studies. In vivo studies are generally considered the standard for hazard ...
Toxicity data from laboratory rodents are widely available and frequently used in human health assessments as an animal model. We explore the possibility of using single rodent acute toxicity values to predict chemical toxicity to a diversity of wildlife species and to estimate ...
Morphodynamic data assimilation used to understand changing coasts
Plant, Nathaniel G.; Long, Joseph W.
2015-01-01
Morphodynamic data assimilation blends observations with model predictions and comes in many forms, including linear regression, Kalman filter, brute-force parameter estimation, variational assimilation, and Bayesian analysis. Importantly, data assimilation can be used to identify sources of prediction errors that lead to improved fundamental understanding. Overall, models incorporating data assimilation yield better information to the people who must make decisions impacting safety and wellbeing in coastal regions that experience hazards due to storms, sea-level rise, and erosion. We present examples of data assimilation associated with morphologic change. We conclude that enough morphodynamic predictive capability is available now to be useful to people, and that we will increase our understanding and the level of detail of our predictions through assimilation of observations and numerical-statistical models.
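Of the assimilation forms listed above, the Kalman filter is the easiest to show compactly: a model forecast and its variance are corrected whenever an observation arrives. The scalar sketch below uses assumed error variances and a stand-in "morphodynamic model" that simply erodes the shoreline each step.

```python
import numpy as np

x_est, P = 0.0, 4.0          # initial shoreline position estimate (m) and its variance
Q, R = 0.5, 1.0              # model (process) and observation error variances, assumed

def model_step(x):           # stand-in for the morphodynamic model prediction
    return x - 0.3           # e.g. 0.3 m of predicted erosion per step

observations = [np.nan, -0.8, np.nan, -1.5, -2.6]   # NaN = no survey this step
for z in observations:
    x_est, P = model_step(x_est), P + Q              # forecast step
    if not np.isnan(z):                              # assimilation (update) step
        K = P / (P + R)                              # Kalman gain
        x_est, P = x_est + K * (z - x_est), (1 - K) * P
    print(round(x_est, 2), round(P, 2))
```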
Emura, Takeshi; Nakatochi, Masahiro; Matsui, Shigeyuki; Michimae, Hirofumi; Rondeau, Virginie
2017-01-01
Developing a personalized risk prediction model of death is fundamental for improving patient care and touches on the realm of personalized medicine. The increasing availability of genomic information and large-scale meta-analytic data sets for clinicians has motivated the extension of traditional survival prediction based on the Cox proportional hazards model. The aim of our paper is to develop a personalized risk prediction formula for death according to genetic factors and dynamic tumour progression status based on meta-analytic data. To this end, we extend the existing joint frailty-copula model to a model allowing for high-dimensional genetic factors. In addition, we propose a dynamic prediction formula to predict death given tumour progression events possibly occurring after treatment or surgery. For clinical use, we implement the computation software of the prediction formula in the joint.Cox R package. We also develop a tool to validate the performance of the prediction formula by assessing the prediction error. We illustrate the method with the meta-analysis of individual patient data on ovarian cancer patients.
Variable selection in subdistribution hazard frailty models with competing risks data
Do Ha, Il; Lee, Minjung; Oh, Seungyoung; Jeong, Jong-Hyeon; Sylvester, Richard; Lee, Youngjo
2014-01-01
The proportional subdistribution hazards model (i.e. Fine-Gray model) has been widely used for analyzing univariate competing risks data. Recently, this model has been extended to clustered competing risks data via frailty. To the best of our knowledge, however, there has been no literature on variable selection method for such competing risks frailty models. In this paper, we propose a simple but unified procedure via a penalized h-likelihood (HL) for variable selection of fixed effects in a general class of subdistribution hazard frailty models, in which random effects may be shared or correlated. We consider three penalty functions (LASSO, SCAD and HL) in our variable selection procedure. We show that the proposed method can be easily implemented using a slight modification to existing h-likelihood estimation approaches. Numerical studies demonstrate that the proposed procedure using the HL penalty performs well, providing a higher probability of choosing the true model than LASSO and SCAD methods without losing prediction accuracy. The usefulness of the new method is illustrated using two actual data sets from multi-center clinical trials. PMID:25042872
NASA Astrophysics Data System (ADS)
Grunthal, Gottfried; Stromeyer, Dietrich; Bosse, Christian; Cotton, Fabrice; Bindi, Dino
2017-04-01
The seismic load parameters for the upcoming National Annex to Eurocode 8 result from the reassessment of the seismic hazard supported by the German Institution for Civil Engineering. This 2016 version of the hazard assessment for Germany as a target area was based on a comprehensive incorporation of all accessible uncertainties in models and parameters into the approach and on the provision of a rational framework for treating the uncertainties in a transparent way. The developed seismic hazard model represents significant improvements; i.e. it is based on updated and extended databases, comprehensive ranges of models, robust methods and a selection of a set of ground motion prediction equations of the latest generation. The output specifications were designed according to user-oriented needs as suggested by two review teams supervising the entire project. In particular, seismic load parameters were calculated for rock conditions with a vS30 of 800 m/s for three hazard levels (10%, 5% and 2% probability of occurrence or exceedance within 50 years), in the form of, e.g., uniform hazard spectra (UHS) based on 19 spectral periods in the range 0.01-3 s, and seismic hazard maps of spectral response accelerations for different spectral periods or of macroseismic intensities. The developed hazard model consists of a logic tree with 4040 end branches and essential innovations employed to capture epistemic uncertainties and aleatory variabilities. The computation scheme enables the sound calculation of the mean and any quantile of the required seismic load parameters. Mean, median and 84th percentiles of the load parameters were provided together with the full calculation model to clearly illustrate the uncertainties of such a probabilistic assessment for a region of a low-to-moderate level of seismicity. The regional variations of these uncertainties (e.g. ratios between the mean and median hazard estimations) were analyzed and discussed.
Evaporation characteristics of ETBE-blended gasoline.
Okamoto, Katsuhiro; Hiramatsu, Muneyuki; Hino, Tomonori; Otake, Takuma; Okamoto, Takashi; Miyamoto, Hiroki; Honma, Masakatsu; Watanabe, Norimichi
2015-04-28
To reduce greenhouse gas emissions, which contribute to global warming, production of gasoline blended with ethyl tert-butyl ether (ETBE) is increasing annually. The flash point of ETBE is higher than that of gasoline, and blending ETBE into gasoline will change the flash point and the vapor pressure. Therefore, it is expected that the fire hazard caused by ETBE-blended gasoline would differ from that caused by normal gasoline. The aim of this study was to acquire the knowledge required for estimating the fire hazard of ETBE-blended gasoline. Treating ETBE-blended gasoline as a two-component mixture of gasoline and ETBE, we developed a prediction model that describes the vapor pressure and flash point of ETBE-blended gasoline at an arbitrary ETBE blending ratio. We chose an 8-component hydrocarbon mixture as a model gasoline and defined the relation between the molar mass of gasoline and the mass loss fraction. We measured the changes in the vapor pressure and flash point of gasoline caused by ETBE blending and by evaporation, and compared the predicted values with the measured values in order to verify the prediction model. The calculated values of vapor pressures and flash points corresponded well to the measured values. Thus, we confirmed that the change in the evaporation characteristics of ETBE-blended gasoline could be predicted by the proposed model. Furthermore, the vapor pressure constants of ETBE-blended gasoline were obtained by the model, and then the distillation curves were developed. Copyright © 2015 Elsevier B.V. All rights reserved.
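Treating the fuel as a two-component ideal solution, the mixture vapor pressure follows Raoult's law: the mole-fraction-weighted sum of the pure-component vapor pressures. The sketch below illustrates only that ideal-solution idea; the pressure values are placeholders, and the paper's model additionally tracks how the gasoline molar mass shifts with evaporation.

```python
def mixture_vapor_pressure(x_etbe, p_sat_etbe, p_sat_gasoline):
    """Raoult's-law estimate: x_etbe is the ETBE mole fraction; pressures in kPa
    are the pure-component saturation pressures at the same temperature (placeholders)."""
    return x_etbe * p_sat_etbe + (1.0 - x_etbe) * p_sat_gasoline

for x in (0.0, 0.1, 0.2, 0.4):
    # Placeholder saturation pressures, not measured values
    print(x, mixture_vapor_pressure(x, p_sat_etbe=16.0, p_sat_gasoline=55.0))
```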
NASA Technical Reports Server (NTRS)
2002-01-01
ENSCO, Inc., developed the Meteorological and Atmospheric Real-time Safety Support (MARSS) system for real-time assessment of meteorological data displays and toxic material spills. MARSS also provides mock scenarios to guide preparations for emergencies involving meteorological hazards and toxic substances. Developed under a Small Business Innovation Research (SBIR) contract with Kennedy Space Center, MARSS was designed to measure how safe NASA and Air Force range safety personnel are while performing weather sensitive operations around launch pads. The system augments a ground operations safety plan that limits certain work operations to very specific weather conditions. It also provides toxic hazard prediction models to assist safety managers in planning for and reacting to releases of hazardous materials. MARSS can be used in agricultural, industrial, and scientific applications that require weather forecasts and predictions of toxic smoke movement. MARSS is also designed to protect urban areas, seaports, rail facilities, and airports from airborne releases of hazardous chemical substances. The system can integrate with local facility protection units and provide instant threat detection and assessment data that is reportable for local and national distribution.
Assessment and Prediction of Natural Hazards from Satellite Imagery
Gillespie, Thomas W.; Chu, Jasmine; Frankenberg, Elizabeth; Thomas, Duncan
2013-01-01
Since 2000, there have been a number of spaceborne satellites that have changed the way we assess and predict natural hazards. These satellites are able to quantify physical geographic phenomena associated with the movements of the earth’s surface (earthquakes, mass movements), water (floods, tsunamis, storms), and fire (wildfires). Most of these satellites contain active or passive sensors that can be utilized by the scientific community for the remote sensing of natural hazards over a number of spatial and temporal scales. The most useful satellite imagery for the assessment of earthquake damage comes from high-resolution (0.6 m to 1 m pixel size) passive sensors and moderate resolution active sensors that can quantify the vertical and horizontal movement of the earth’s surface. High-resolution passive sensors have been used to successfully assess flood damage while predictive maps of flood vulnerability areas are possible based on physical variables collected from passive and active sensors. Recent moderate resolution sensors are able to provide near real time data on fires and provide quantitative data used in fire behavior models. Limitations currently exist due to atmospheric interference, pixel resolution, and revisit times. However, a number of new microsatellites and constellations of satellites will be launched in the next five years that contain increased resolution (0.5 m to 1 m pixel resolution for active sensors) and revisit times (daily ≤ 2.5 m resolution images from passive sensors) that will significantly improve our ability to assess and predict natural hazards from space. PMID:25170186
NASA Astrophysics Data System (ADS)
Stenzel, S.; Baumann-Stanzer, K.
2009-04-01
Dispersion modeling of accidental releases of toxic gases - comparison of the models and their utility for the fire brigades. In the case of an accidental release of hazardous gases into the atmosphere, emergency responders need a reliable and fast tool to assess the possible consequences and apply the optimal countermeasures. A number of air dispersion models are available for hazard prediction and simulation of hazard zones. Most model packages (commercial or free of charge) include a chemical database, an intuitive graphical user interface (GUI) and automated graphical output for displaying the results; they are easy to use and can operate quickly and effectively during stress situations. The models are designed especially for analyzing different accidental toxic release scenarios ("worst-case scenarios") and for preparing emergency response plans and optimal countermeasures, as well as for real-time risk assessment and management. Models can also be coupled directly to automatic meteorological stations in order to avoid uncertainties in the model output due to insufficient or incorrect meteorological data. Another key problem in coping with accidental toxic releases is the relatively wide spectrum of regulations and threshold values, such as IDLH, ERPG, AEGL and MAK, and the different criteria for their application. Since the individual emergency responders and organizations require different regulations and values for their purposes, it is quite difficult to delineate a single hazard area. Quite a number of research studies and investigations address the problem, but the final decision rests with the authorities. The research project RETOMOD (reference scenario calculations for toxic gas releases - model systems and their utility for the fire brigade) was conducted by the Central Institute for Meteorology and Geodynamics (ZAMG) in cooperation with the Vienna fire brigade, OMV Refining & Marketing GmbH and Synex Ries & Greßlehner GmbH. RETOMOD was funded by the KIRAS safety research program at the Austrian Ministry of Transport, Innovation and Technology (www.kiras.at). The main tasks of this project were 1. a sensitivity study and optimization of the meteorological input for modeling of the hazard areas (human exposure) during accidental toxic releases, and 2. a comparison of several model packages (based on reference scenarios) in order to estimate their utility for the fire brigades. This presentation introduces the models used in the project and presents the results of task 2. The results of task 1 are presented by Baumann-Stanzer and Stenzel in this session. For the purpose of this study the following models were tested and compared: ALOHA (Areal Location of Hazardous Atmospheres, EPA), MEMPLEX (Keudel av-Technik GmbH), Breeze (Trinity Consulting), SAFER System, SAM (Engineering office Lohmeyer) and COMPAS. A set of reference scenarios for chlorine, ammonia, butane and petrol was run in order to reliably predict and estimate the human exposure during such events. The models simulated the accidental release of the above-mentioned gases and estimated the potential toxic areas. Since the input requirements differ from model to model, and the outputs are based on different criteria for toxic areas and exposure, a high degree of caution is needed in the interpretation of the model results.
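The packages compared above automate dispersion calculations whose simplest textbook form is the Gaussian plume: ground-level centreline concentration C = Q / (π u σy σz) · exp(-H² / 2σz²). A minimal sketch follows; the linear σy, σz growth coefficients and all input values are assumptions for illustration, far cruder than what ALOHA and the other packages use.

```python
import numpy as np

def plume_centreline(Q, u, x, H, a=0.08, b=0.06):
    """Q: emission rate (g/s), u: wind speed (m/s), x: downwind distance (m),
    H: effective release height (m); a, b give crude sigma_y ~ a*x, sigma_z ~ b*x."""
    sy, sz = a * x, b * x
    return (Q / (np.pi * u * sy * sz)) * np.exp(-H**2 / (2 * sz**2))

x = np.array([100.0, 300.0, 1000.0])
print(plume_centreline(Q=500.0, u=3.0, x=x, H=2.0))   # ground-level concentration in g/m^3
```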
Predicting the spatial extent of liquefaction from geospatial and earthquake specific parameters
Zhu, Jing; Baise, Laurie G.; Thompson, Eric M.; Wald, David J.; Knudsen, Keith L.; Deodatis, George; Ellingwood, Bruce R.; Frangopol, Dan M.
2014-01-01
The spatially extensive damage from the 2010-2011 Christchurch, New Zealand earthquake events is a reminder of the need for liquefaction hazard maps for anticipating damage from future earthquakes. Liquefaction hazard mapping has traditionally relied on detailed geologic mapping and expensive site studies. These traditional techniques are difficult to apply globally for rapid response or loss estimation. We have developed a logistic regression model to predict the probability of liquefaction occurrence in coastal sedimentary areas as a function of simple and globally available geospatial features (e.g., derived from digital elevation models) and standard earthquake-specific intensity data (e.g., peak ground acceleration). Some of the geospatial explanatory variables that we consider are taken from the hydrology community, which has a long tradition of using remotely sensed data as proxies for subsurface parameters. As a result of using high resolution, remotely sensed, and spatially continuous data as a proxy for important subsurface parameters such as soil density and soil saturation, and by using a probabilistic modeling framework, our liquefaction model inherently includes the natural spatial variability of liquefaction occurrence and provides an estimate of the spatial extent of liquefaction for a given earthquake. To provide a quantitative check on how the predicted probabilities relate to the spatial extent of liquefaction, we report the frequency of observed liquefaction features within a range of predicted probabilities. The percentage of liquefaction is the areal extent of observed liquefaction within a given probability contour. The regional model and the results show that there is a strong relationship between the predicted probability and the observed percentage of liquefaction. Visual inspection of the probability contours for each event also indicates that the pattern of liquefaction is well represented by the model.
Big Data Toolsets to Pharmacometrics: Application of Machine Learning for Time‐to‐Event Analysis
Gong, Xiajing; Hu, Meng
2018-01-01
Abstract Additional value can be potentially created by applying big data tools to address pharmacometric problems. The performances of machine learning (ML) methods and the Cox regression model were evaluated based on simulated time‐to‐event data synthesized under various preset scenarios, i.e., with linear vs. nonlinear and dependent vs. independent predictors in the proportional hazard function, or with high‐dimensional data featured by a large number of predictor variables. Our results showed that ML‐based methods outperformed the Cox model in prediction performance as assessed by concordance index and in identifying the preset influential variables for high‐dimensional data. The prediction performances of ML‐based methods are also less sensitive to data size and censoring rates than the Cox regression model. In conclusion, ML‐based methods provide a powerful tool for time‐to‐event analysis, with a built‐in capacity for high‐dimensional data and better performance when the predictor variables assume nonlinear relationships in the hazard function. PMID:29536640
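A concrete version of the comparison described above is a random survival forest scored with the concordance index on right-censored synthetic data with a nonlinear hazard; the sketch below assumes the scikit-survival package is available and is illustrative rather than a reproduction of the paper's simulations.

```python
import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.metrics import concordance_index_censored
from sksurv.util import Surv

rng = np.random.default_rng(4)
n = 500
X = rng.normal(size=(n, 5))
# Nonlinear hazard: risk driven by x0^2 and an x1*x2 interaction
risk = 0.8 * X[:, 0] ** 2 + 0.6 * X[:, 1] * X[:, 2]
t_true = rng.exponential(np.exp(-risk))          # shorter event times for higher risk
c = rng.exponential(1.5, n)                      # independent censoring times
time = np.minimum(t_true, c)
event = t_true <= c                              # True if the event was observed
y = Surv.from_arrays(event=event, time=time)

rsf = RandomSurvivalForest(n_estimators=200, random_state=0).fit(X[:350], y[:350])
cindex = concordance_index_censored(event[350:], time[350:], rsf.predict(X[350:]))[0]
print("Random survival forest c-index:", round(cindex, 3))
```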
Building a risk-targeted regional seismic hazard model for South-East Asia
NASA Astrophysics Data System (ADS)
Woessner, J.; Nyst, M.; Seyhan, E.
2015-12-01
The last decade has tragically shown the social and economic vulnerability of countries in South-East Asia to earthquake hazard and risk. While many disaster mitigation programs and initiatives to improve societal earthquake resilience are under way with the focus on saving lives and livelihoods, the risk management sector is challenged to develop appropriate models to cope with the economic consequences and impact on the insurance business. We present the source model and ground motions model components suitable for a South-East Asia earthquake risk model covering Indonesia, Malaysia, the Philippines and Indochine countries. The source model builds upon refined modelling approaches to characterize 1) seismic activity from geologic and geodetic data on crustal faults and 2) along the interface of subduction zones and within the slabs and 3) earthquakes not occurring on mapped fault structures. We elaborate on building a self-consistent rate model for the hazardous crustal fault systems (e.g. Sumatra fault zone, Philippine fault zone) as well as the subduction zones, showcase some characteristics and sensitivities due to existing uncertainties in the rate and hazard space using a well selected suite of ground motion prediction equations. Finally, we analyze the source model by quantifying the contribution by source type (e.g., subduction zone, crustal fault) to typical risk metrics (e.g.,return period losses, average annual loss) and reviewing their relative impact on various lines of businesses.
Generation of a mixture model ground-motion prediction equation for Northern Chile
NASA Astrophysics Data System (ADS)
Haendel, A.; Kuehn, N. M.; Scherbaum, F.
2012-12-01
In probabilistic seismic hazard analysis (PSHA) empirically derived ground motion prediction equations (GMPEs) are usually applied to estimate the ground motion at a site of interest as a function of source, path and site related predictor variables. Because GMPEs are derived from limited datasets they are not expected to give entirely accurate estimates or to reflect the whole range of possible future ground motion, thus giving rise to epistemic uncertainty in the hazard estimates. This is especially true for regions without an indigenous GMPE where foreign models have to be applied. The choice of appropriate GMPEs can then dominate the overall uncertainty in hazard assessments. In order to quantify this uncertainty, the set of ground motion models used in a modern PSHA has to capture (in SSHAC language) the center, body, and range of the possible ground motion at the site of interest. This was traditionally done within a logic tree framework in which existing (or only slightly modified) GMPEs occupy the branches of the tree and the branch weights describe the degree-of-belief of the analyst in their applicability. This approach invites the problem of combining GMPEs of very different quality and hence of potentially overestimating epistemic uncertainty. Some recent hazard analyses have therefore resorted to using a small number of high quality GMPEs as backbone models, from which the full distribution of GMPEs for the logic tree (to capture the full range of possible ground motion uncertainty) was subsequently generated by scaling (in a general sense). In the present study, a new approach is proposed to determine an optimized backbone model as weighted components of a mixture model. In doing so, each GMPE is assumed to reflect the generation mechanism (e.g., in terms of stress drop, propagation properties, etc.) for at least a fraction of possible ground motions in the area of interest. The combination of different models into a mixture model (which is learned from observed ground motion data in the region of interest) then transfers information from other regions to the region where the observations have been produced in a data-driven way. The backbone model is learned by comparing the model predictions to observations of the target region. For each observation and each model, the likelihood of an observation given a certain GMPE is calculated. Mixture weights can then be assigned using the expectation maximization (EM) algorithm or Bayesian inference. The new method is used to generate a backbone reference model for Northern Chile, an area for which no dedicated GMPE exists. Strong motion recordings from the target area are used to learn the backbone model from a set of 10 GMPEs developed for different subduction zones of the world. The formation of mixture models is done individually for interface and intraslab type events. The ability of the resulting backbone models to describe ground motions in Northern Chile is then compared to the predictive performance of their constituent models.
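The weight-learning step described above can be sketched in a few lines: given a matrix of per-observation likelihoods under each candidate GMPE, the EM update for the mixture weights alternates between responsibilities and weight averages. The likelihood values below are synthetic placeholders; in practice each entry would be the (log)normal density of an observed residual under a specific GMPE.

    # Minimal EM sketch for mixture weights with fixed component densities.
    import numpy as np

    rng = np.random.default_rng(2)
    n_obs, n_gmpe = 200, 10
    L = rng.uniform(0.01, 1.0, size=(n_obs, n_gmpe))   # placeholder likelihoods L[i, k]

    w = np.full(n_gmpe, 1.0 / n_gmpe)                  # start from equal weights
    for _ in range(500):
        resp = w * L                                   # E-step: responsibilities
        resp /= resp.sum(axis=1, keepdims=True)
        w_new = resp.mean(axis=0)                      # M-step: new mixture weights
        if np.max(np.abs(w_new - w)) < 1e-10:
            break
        w = w_new

    print(np.round(w, 3))                              # weights of the backbone mixture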
Staley, Dennis M.
2013-01-01
Wildfire can significantly alter the hydrologic response of a watershed to the extent that even modest rainstorms can produce dangerous flash floods and debris flows. In this report, empirical models are used to predict the probability and magnitude of debris-flow occurrence in response to a 10-year rainstorm for the 2013 Rim fire in Yosemite National Park and the Stanislaus National Forest, California. Overall, the models predict a relatively high probability (60–80 percent) of debris flow for 28 of the 1,238 drainage basins in the burn area in response to a 10-year recurrence interval design storm. Predictions of debris-flow volume suggest that debris flows may entrain a significant volume of material, with 901 of the 1,238 basins identified as having potential debris-flow volumes greater than 10,000 cubic meters. The results of the relative combined hazard analysis suggest that there is a moderate likelihood of significant debris-flow hazard within and downstream of the burn area for nearby populations, infrastructure, wildlife, and water resources. Given these findings, we recommend that residents, emergency managers, and public works departments pay close attention to weather forecasts and National-Weather-Service-issued Debris Flow and Flash Flood Outlooks, Watches, and Warnings, and that residents adhere to any evacuation orders.
New techniques on oil spill modelling applied in the Eastern Mediterranean sea
NASA Astrophysics Data System (ADS)
Zodiatis, George; Kokinou, Eleni; Alves, Tiago; Lardner, Robin
2016-04-01
Small or large oil spills resulting from accidents on oil and gas platforms or due to maritime traffic comprise a major environmental threat for all marine and coastal systems, and they are responsible for huge economic losses to human infrastructure and tourism. This work aims to present the integration of oil-spill model, bathymetric, meteorological, oceanographic, geomorphological and geological data to assess the impact of oil spills in maritime regions such as bays, as well as in the open sea, carried out in the Eastern Mediterranean Sea within the frame of the NEREIDs, MEDESS-4MS and RAOP-Med EU projects. The MEDSLIK oil spill predictions are successfully combined with bathymetric analyses, shoreline susceptibility and hazard mapping to predict the oil slick trajectories and the extent of the coastal areas affected. Based on MEDSLIK results, oil spill spreading and dispersion scenarios are produced both for non-mitigated and mitigated oil spills. The MEDSLIK model considers three response methods for combating floating oil spills: a) mechanical recovery using skimmers or similar mechanisms; b) destruction by fire; c) use of dispersants or other bio-chemical means and deployment of booms. A shoreline susceptibility map can be compiled for the study areas based on the Environmental Susceptibility Index. The ESI classification considers a range of values between 1 and 9, with level 1 (ESI 1) representing areas of low susceptibility, impermeable to oil spilt during accidents, such as linear shorelines with rocky cliffs. In contrast, ESI 9 shores are highly vulnerable, and often coincide with natural reserves and special protected areas. Additionally, hazard maps of the maritime and coastal areas possibly exposed to the danger of an oil spill evaluate and categorize the hazard in levels from low to very high. This is important because a) prior to an oil spill accident, hazard and shoreline susceptibility maps are made available to design preparedness and prevention plans in an effective way, and b) after an oil spill accident, oil spill predictions can be combined with hazard maps to provide information on the oil spill dispersion and its impacts. This way, prevention plans can be directly modified at any time after the accident.
ScienceCast 121: The Effects of Space Weather on Aviation
2013-10-25
Ordinary air travelers can be exposed to significant doses of radiation during solar storms. A new computer model developed by NASA aims to help protect the public by predicting space weather hazards to aviation.
Probabilistic seismic hazard study based on active fault and finite element geodynamic models
NASA Astrophysics Data System (ADS)
Kastelic, Vanja; Carafa, Michele M. C.; Visini, Francesco
2016-04-01
We present a probabilistic seismic hazard analysis (PSHA) that is exclusively based on active faults and geodynamic finite element input models, whereas seismic catalogues were used only in a posterior comparison. We applied the developed model in the External Dinarides, a slow deforming thrust-and-fold belt at the contact between Adria and Eurasia. Our method consists of establishing two earthquake rupture forecast models: (i) a geological active fault input (GEO) model and (ii) a finite element (FEM) model. The GEO model is based on an active fault database that provides information on fault location and its geometric and kinematic parameters together with estimates of its slip rate. By default in this model all deformation is set to be released along the active faults. The FEM model is based on a numerical geodynamic model developed for the region of study. In this model the deformation is released not only along the active faults but also in the volumetric continuum elements. From both models we calculated the corresponding activity rates, earthquake rates and final expected peak ground accelerations. We investigated both the source model and the earthquake model uncertainties by varying the main active fault and earthquake rate calculation parameters through constructing corresponding branches of the seismic hazard logic tree. Hazard maps and UHS curves have been produced for horizontal ground motion on bedrock conditions (VS30 ≥ 800 m/s), thereby not considering local site amplification effects. The hazard was computed over a 0.2° spaced grid considering 648 branches of the logic tree and the mean value of the 10% probability of exceedance in 50 years hazard level, while the 5th and 95th percentiles were also computed to investigate the model limits. We conducted a sensitivity analysis to determine which of the input parameters influence the final hazard results, and to what degree. The results of this comparison show that the deformation model, with its internal variability, and the choice of the ground motion prediction equations (GMPEs) are the most influential parameters; both have a significant effect on the hazard results. Thus having good knowledge of the existence of active faults and their geometric and activity characteristics is of key importance. We also show that PSHA models based exclusively on active faults and geodynamic inputs, which are thus not dependent on past earthquake occurrences, provide a valid method for seismic hazard calculation.
Landslide Hazard Probability Derived from Inherent and Dynamic Determinants
NASA Astrophysics Data System (ADS)
Strauch, Ronda; Istanbulluoglu, Erkan
2016-04-01
Landslide hazard research has typically been conducted independently from hydroclimate research. We unify these two lines of research to provide regional scale landslide hazard information for risk assessments and resource management decision-making. Our approach combines an empirical inherent landslide probability with a numerical dynamic probability, generated by combining routed recharge from the Variable Infiltration Capacity (VIC) macro-scale land surface hydrologic model with a finer resolution probabilistic slope stability model run in a Monte Carlo simulation. Landslide hazard mapping is advanced by adjusting the dynamic model of stability with an empirically-based scalar representing the inherent stability of the landscape, creating a probabilistic quantitative measure of geohazard prediction at a 30-m resolution. Climatology, soil, and topography control the dynamic nature of hillslope stability and the empirical information further improves the discriminating ability of the integrated model. This work will aid resource management decision-making in current and future landscape and climatic conditions. The approach is applied as a case study in North Cascade National Park Complex, a rugged terrain with nearly 2,700 m (9,000 ft) of vertical relief, covering 2757 sq km (1064 sq mi) in northern Washington State, U.S.A.
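A minimal per-cell sketch of the dynamic part of such an approach, assuming the standard infinite-slope factor of safety and illustrative parameter distributions (these are not the study's calibrated inputs, and the routed VIC recharge is replaced here by a random relative wetness):

    # Monte Carlo probability of instability for a single grid cell.
    import numpy as np

    rng = np.random.default_rng(3)
    n_iter = 10_000

    slope = np.radians(35.0)                              # hillslope angle of the cell
    soil_depth = rng.uniform(0.5, 2.0, n_iter)            # m
    cohesion = rng.uniform(2e3, 10e3, n_iter)             # Pa, soil plus root cohesion
    phi = np.radians(rng.uniform(28, 40, n_iter))         # internal friction angle
    wetness = rng.uniform(0.0, 1.0, n_iter)               # relative saturation h_w / z (stand-in for routed recharge)
    gamma_s, gamma_w = 18e3, 9.81e3                       # unit weights, N/m^3

    # Infinite-slope factor of safety
    fs = (cohesion / (gamma_s * soil_depth * np.sin(slope) * np.cos(slope))
          + (1.0 - wetness * gamma_w / gamma_s) * np.tan(phi) / np.tan(slope))

    print("P(failure) =", round(np.mean(fs < 1.0), 3))    # dynamic probability of instability for the cell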
Hazard Regression Models of Early Mortality in Trauma Centers
Clark, David E; Qian, Jing; Winchell, Robert J; Betensky, Rebecca A
2013-01-01
Background Factors affecting early hospital deaths after trauma may be different from factors affecting later hospital deaths, and the distribution of short and long prehospital times may vary among hospitals. Hazard regression (HR) models may therefore be more useful than logistic regression (LR) models for analysis of trauma mortality, especially when treatment effects at different time points are of interest. Study Design We obtained data for trauma center patients from the 2008–9 National Trauma Data Bank (NTDB). Cases were included if they had complete data for prehospital times, hospital times, survival outcome, age, vital signs, and severity scores. Cases were excluded if pulseless on admission, transferred in or out, or ISS<9. Using covariates proposed for the Trauma Quality Improvement Program and an indicator for each hospital, we compared LR models predicting survival at 8 hours after injury to HR models with survival censored at 8 hours. HR models were then modified to allow time-varying hospital effects. Results 85,327 patients in 161 hospitals met inclusion criteria. Crude hazards peaked initially, then steadily declined. When hazard ratios were assumed constant in HR models, they were similar to odds ratios in LR models associating increased mortality with increased age, firearm mechanism, increased severity, more deranged physiology, and estimated hospital-specific effects. However, when hospital effects were allowed to vary by time, HR models demonstrated that hospital outliers were not the same at different times after injury. Conclusions HR models with time-varying hazard ratios reveal inconsistencies in treatment effects, data quality, and/or timing of early death among trauma centers. HR models are generally more flexible than LR models, can be adapted for censored data, and potentially offer a better tool for analysis of factors affecting early death after injury. PMID:23036828
High-Throughput Models for Exposure-Based Chemical ...
The United States Environmental Protection Agency (U.S. EPA) must characterize potential risks to human health and the environment associated with manufacture and use of thousands of chemicals. High-throughput screening (HTS) for biological activity allows the ToxCast research program to prioritize chemical inventories for potential hazard. Similar capabilities for estimating exposure potential would support rapid risk-based prioritization for chemicals with limited information; here, we propose a framework for high-throughput exposure assessment. To demonstrate application, an analysis was conducted that predicts human exposure potential for chemicals and estimates uncertainty in these predictions by comparison to biomonitoring data. We evaluated 1936 chemicals using far-field mass balance human exposure models (USEtox and RAIDAR) and an indicator for indoor and/or consumer use. These predictions were compared to exposures inferred by Bayesian analysis from urine concentrations for 82 chemicals reported in the National Health and Nutrition Examination Survey (NHANES). Joint regression on all factors provided a calibrated consensus prediction, the variance of which serves as an empirical determination of uncertainty for prioritization on absolute exposure potential. Information on use was found to be most predictive; generally, chemicals above the limit of detection in NHANES had consumer/indoor use. Coupled with hazard HTS, exposure HTS can place risk earlie
NASA Technical Reports Server (NTRS)
Dumbauld, R. K.; Bjorklund, J. R.; Bowers, J. F.
1973-01-01
The NASA/MSFC multilayer diffusion models are described which are used in applying meteorological information to the estimation of toxic fuel hazards resulting from the launch of rocket vehicles and from accidental cold spills and leaks of toxic fuels. Background information, definitions of terms, and a description of the multilayer concept are presented along with formulas for determining the buoyant rise of hot exhaust clouds or plumes from conflagrations, and descriptions of the multilayer diffusion models. A brief description of the computer program is given, and sample problems and their solutions are included. Derivations of the cloud rise formulas, user instructions, and computer program output lists are also included.
Risk score to predict the outcome of patients with cerebral vein and dural sinus thrombosis.
Ferro, José M; Bacelar-Nicolau, Helena; Rodrigues, Teresa; Bacelar-Nicolau, Leonor; Canhão, Patrícia; Crassard, Isabelle; Bousser, Marie-Germaine; Dutra, Aurélio Pimenta; Massaro, Ayrton; Mackowiack-Cordiolani, Marie-Anne; Leys, Didier; Fontes, João; Stam, Jan; Barinagarrementeria, Fernando
2009-01-01
Around 15% of patients die or become dependent after cerebral vein and dural sinus thrombosis (CVT). We used the International Study on Cerebral Vein and Dural Sinus Thrombosis (ISCVT) sample (624 patients, with a median follow-up time of 478 days) to develop a Cox proportional hazards regression model to predict outcome, dichotomised by a modified Rankin Scale score >2. From the model hazard ratios, a risk score was derived and a cut-off point selected. The model and the score were tested in 2 validation samples: (1) the prospective Cerebral Venous Thrombosis Portuguese Collaborative Study Group (VENOPORT) sample with 91 patients; (2) a sample of 169 consecutive CVT patients admitted to 5 ISCVT centres after the end of the ISCVT recruitment period. Sensitivity, specificity, c statistics and overall efficiency to predict outcome at 6 months were calculated. The model (hazard ratios: malignancy 4.53; coma 4.19; thrombosis of the deep venous system 3.03; mental status disturbance 2.18; male gender 1.60; intracranial haemorrhage 1.42) had overall efficiencies of 85.1, 84.4 and 90.0%, in the derivation sample and validation samples 1 and 2, respectively. Using the risk score (range from 0 to 9) with a cut-off of >or=3 points, overall efficiency was 85.4, 84.4 and 90.1% in the derivation sample and validation samples 1 and 2, respectively. Sensitivity and specificity in the combined samples were 96.1 and 13.6%, respectively. The CVT risk score has a good estimated overall rate of correct classifications in both validation samples, but its specificity is low. It can be used to avoid unnecessary or dangerous interventions in low-risk patients, and may help to identify high-risk CVT patients. (c) 2009 S. Karger AG, Basel.
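As a purely hypothetical illustration of the derivation step described above (hazard ratios converted to an integer score with a cutoff), the snippet below scales the quoted log hazard ratios by the smallest one and rounds to points; this rounding scheme and the resulting points are assumptions for illustration only, not the published CVT risk score.

    import numpy as np

    hazard_ratios = {
        "malignancy": 4.53, "coma": 4.19, "deep_venous_system_thrombosis": 3.03,
        "mental_status_disturbance": 2.18, "male_gender": 1.60,
        "intracranial_haemorrhage": 1.42,
    }
    # Hypothetical scheme: scale log hazard ratios by the smallest coefficient and round
    base = np.log(min(hazard_ratios.values()))
    points = {k: int(round(np.log(v) / base)) for k, v in hazard_ratios.items()}

    def risk_score(patient):
        """patient: dict of 0/1 indicators for each predictor."""
        return sum(points[k] * patient.get(k, 0) for k in points)

    example = {"coma": 1, "male_gender": 1}
    score = risk_score(example)
    print(points, score, "high risk" if score >= 3 else "low risk")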
Katzman, Jared L; Shaham, Uri; Cloninger, Alexander; Bates, Jonathan; Jiang, Tingting; Kluger, Yuval
2018-02-26
Medical practitioners use survival models to explore and understand the relationships between patients' covariates (e.g. clinical and genetic features) and the effectiveness of various treatment options. Standard survival models like the linear Cox proportional hazards model require extensive feature engineering or prior medical knowledge to model treatment interaction at an individual level. While nonlinear survival methods, such as neural networks and survival forests, can inherently model these high-level interaction terms, they have yet to be shown as effective treatment recommender systems. We introduce DeepSurv, a Cox proportional hazards deep neural network and state-of-the-art survival method for modeling interactions between a patient's covariates and treatment effectiveness in order to provide personalized treatment recommendations. We perform a number of experiments training DeepSurv on simulated and real survival data. We demonstrate that DeepSurv performs as well as or better than other state-of-the-art survival models and validate that DeepSurv successfully models increasingly complex relationships between a patient's covariates and their risk of failure. We then show how DeepSurv models the relationship between a patient's features and effectiveness of different treatment options to show how DeepSurv can be used to provide individual treatment recommendations. Finally, we train DeepSurv on real clinical studies to demonstrate how its personalized treatment recommendations would increase the survival time of a set of patients. The predictive and modeling capabilities of DeepSurv will enable medical researchers to use deep neural networks as a tool in their exploration, understanding, and prediction of the effects of a patient's characteristics on their risk of failure.
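The core of a DeepSurv-style network is the loss it minimizes: the negative Cox partial log-likelihood evaluated on the network outputs. A plain-NumPy sketch (not the authors' implementation, and ignoring tied event times) is:

    import numpy as np

    def neg_partial_log_likelihood(risk, time, event):
        """risk: network outputs g(x); time: follow-up times; event: 1 if death observed."""
        order = np.argsort(-time)                       # sort subjects by descending time
        risk, event = risk[order], event[order]
        log_risk_set = np.log(np.cumsum(np.exp(risk)))  # log sum of exp(risk) over each subject's risk set
        ll = np.sum((risk - log_risk_set)[event == 1])  # only observed events contribute
        return -ll / max(event.sum(), 1)

    rng = np.random.default_rng(4)
    risk = rng.normal(size=8)                           # stand-in for network outputs
    time = rng.exponential(size=8)
    event = rng.binomial(1, 0.7, size=8)
    print(neg_partial_log_likelihood(risk, time, event))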
Pfeiffer, Ruth M.; Miglioretti, Diana L.; Kerlikowske, Karla; Tice, Jeffery; Vacek, Pamela M.; Gierach, Gretchen L.
2016-01-01
Purpose Breast cancer risk prediction models are used to plan clinical trials and counsel women; however, relationships of predicted risks of breast cancer incidence and prognosis after breast cancer diagnosis are unknown. Methods Using largely pre-diagnostic information from the Breast Cancer Surveillance Consortium (BCSC) for 37,939 invasive breast cancers (1996–2007), we estimated 5-year breast cancer risk (<1%; 1–1.66%; ≥1.67%) with three models: BCSC 1-year risk model (BCSC-1; adapted to 5-year predictions); Breast Cancer Risk Assessment Tool (BCRAT); and BCSC 5-year risk model (BCSC-5). Breast cancer-specific mortality post-diagnosis (range: 1–13 years; median: 5.4–5.6 years) was related to predicted risk of developing breast cancer using unadjusted Cox proportional hazards models, and in age-stratified (35–44; 45–54; 55–69; 70–89 years) models adjusted for continuous age, BCSC registry, calendar period, income, mode of presentation, stage and treatment. Mean age at diagnosis was 60 years. Results Of 6,021 deaths, 2,993 (49.7%) were ascribed to breast cancer. In unadjusted case-only analyses, predicted breast cancer risk ≥1.67% versus <1.0% was associated with lower risk of breast cancer death; BCSC-1: hazard ratio (HR) = 0.82 (95% CI = 0.75–0.90); BCRAT: HR = 0.72 (95% CI = 0.65–0.81) and BCSC-5: HR = 0.84 (95% CI = 0.75–0.94). Age-stratified, adjusted models showed similar, although mostly non-significant HRs. Among women ages 55–69 years, HRs approximated 1.0. Generally, higher predicted risk was inversely related to percentages of cancers with unfavorable prognostic characteristics, especially among women 35–44 years. Conclusions Among cases assessed with three models, higher predicted risk of developing breast cancer was not associated with greater risk of breast cancer death; thus, these models would have limited utility in planning studies to evaluate breast cancer mortality reduction strategies. Further, when offering women counseling, it may be useful to note that high predicted risk of developing breast cancer does not imply that if cancer develops it will behave aggressively. PMID:27560501
NASA Astrophysics Data System (ADS)
Lundquist, J. K.; Sugiyama, G.; Nasstrom, J.
2007-12-01
This presentation describes the tools and services provided by the National Atmospheric Release Advisory Center (NARAC) at Lawrence Livermore National Laboratory (LLNL) for modeling the impacts of airborne hazardous materials. NARAC provides atmospheric plume modeling tools and services for chemical, biological, radiological, and nuclear airborne hazards. NARAC can simulate downwind effects from a variety of scenarios, including fires, industrial and transportation accidents, radiation dispersal device explosions, hazardous material spills, sprayers, nuclear power plant accidents, and nuclear detonations. NARAC collaborates on radiological dispersion source terms and effects models with Sandia National Laboratories and the U.S. Nuclear Regulatory Commission. NARAC was designated the interim provider of capabilities for the Department of Homeland Security's Interagency Modeling and Atmospheric Assessment Center by the Homeland Security Council in April 2004. The NARAC suite of software tools includes simple stand-alone, local-scale plume modeling tools for end users' computers, and Web- and Internet-based software to access advanced modeling tools and expert analyses from the national center at LLNL. Initial automated, 3-D predictions of plume exposure limits and protective action guidelines for emergency responders and managers are available from the center in 5-10 minutes. These can be followed immediately by quality-assured, refined analyses by 24 x 7 on-duty or on-call NARAC staff. NARAC continues to refine calculations using updated on-scene information, including measurements, until all airborne releases have stopped and the hazardous threats are mapped and impacts assessed. Model predictions include the 3-D spatial and time-varying effects of weather, land use, and terrain, on scales from the local to regional to global. Real-time meteorological data and forecasts are provided by redundant communications links to the U.S. National Oceanic and Atmospheric Administration (NOAA), U.S. Navy, and U.S. Air Force, as well as an in-house mesoscale numerical weather prediction model. NARAC provides an easy-to-use Geographical Information System (GIS) for display of plume predictions with affected population counts and detailed maps, and the ability to export plume predictions to other standard GIS capabilities. Data collection and product distribution are provided through a variety of communication methods, including dial-up, satellite, and wired and wireless networks. Ongoing research and development activities will be highlighted. The NARAC scientific support team is developing urban parameterizations for use in a regional dispersion model (see companion paper by Delle Monache). Modifications to the numerical weather prediction model WRF to account for characteristics of urban dynamics are also in progress, as is boundary-layer turbulence model development for simulations with resolutions greater than 1 km. The NARAC building-resolving computational fluid dynamics capability, FEM3MP, enjoys ongoing development activities such as the expansion of its ability to model releases of dense gases. Other research activities include sensor-data fusion, such as the reconstruction of unknown source terms from sparse and disparate observations. This work was performed under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48.
The Department of Homeland Security sponsored the production of this material under the Department of Energy contract for the management and operation of Lawrence Livermore National Laboratory. UCRL-PROC-234355
Spatially explicit shallow landslide susceptibility mapping over large areas
Bellugi, Dino; Dietrich, William E.; Stock, Jonathan D.; McKean, Jim; Kazian, Brian; Hargrove, Paul
2011-01-01
Recent advances in downscaling climate model precipitation predictions now yield spatially explicit patterns of rainfall that could be used to estimate shallow landslide susceptibility over large areas. In California, the United States Geological Survey is exploring community emergency response to the possible effects of a very large simulated storm event, and to do so it has generated downscaled precipitation maps for the storm. To predict the corresponding pattern of shallow landslide susceptibility across the state, we have used the model Shalstab (a coupled steady state runoff and infinite slope stability model), which provides spatially explicit estimates of relative potential instability. Such slope stability models that include the effects of subsurface runoff on potentially destabilizing pore pressure evolution require water routing and hence the definition of the upslope drainage area of each cell. To calculate drainage area efficiently over a large area we developed a parallel framework to scale up Shalstab, and specifically introduce a new efficient parallel drainage area algorithm which produces seamless results. The single seamless shallow landslide susceptibility map for all of California was accomplished in a short run time, and indicates that much larger areas can be efficiently modelled. Landslide maps generally overpredict the extent of instability for any given storm; local empirical data on the fraction of predicted unstable cells that failed for observed rainfall intensity can be used to specify the likely extent of hazard for a given storm. This suggests that campaigns to collect local precipitation data and detailed shallow landslide location maps after major storms could be used to calibrate models and improve their use in hazard assessment for individual storms.
Modeling and mitigating natural hazards: Stationarity is immortal!
NASA Astrophysics Data System (ADS)
Montanari, Alberto; Koutsoyiannis, Demetris
2014-12-01
Environmental change is a cause of serious concern, as it is occurring at an unprecedented pace and might increase natural hazards. Moreover, it is deemed to imply a reduced representativeness of past experience and data on extreme hydroclimatic events. The latter concern has been epitomized by the statement that "stationarity is dead." Setting up policies for mitigating natural hazards, including those triggered by floods and droughts, is an urgent priority in many countries, which implies practical activities of management, engineering design, and construction. These activities necessarily need to be properly informed, and therefore the research question on the value of past data is extremely important. We herein argue that there are mechanisms in hydrological systems that are time invariant, which may need to be interpreted through data inference. In particular, hydrological predictions are based on assumptions which should include stationarity. In fact, any hydrological model, including deterministic and nonstationary approaches, is affected by uncertainty and therefore should include a random component that is stationary. Given that an unnecessary resort to nonstationarity may imply a reduction of predictive capabilities, a pragmatic approach based on the exploitation of past experience and data is a necessary prerequisite for setting up mitigation policies for environmental risk.
NASA Astrophysics Data System (ADS)
Brooks, E. M.; Stein, S.; Spencer, B. D.; Salditch, L.; Petersen, M. D.; McNamara, D. E.
2017-12-01
Seismicity in the central United States has dramatically increased since 2008 due to the injection of wastewater produced by oil and gas extraction. In response, the USGS created a one-year probabilistic hazard model and map for 2016 to describe the increased hazard posed to the central and eastern United States. Using the intensity of shaking reported to the "Did You Feel It?" system during 2016, we assess the performance of this model. Assessing the performance of earthquake hazard maps for natural and induced seismicity is conceptually similar but has practical differences. Maps that have return periods of hundreds or thousands of years, as commonly used for natural seismicity, can be assessed using historical intensity data that also span hundreds or thousands of years. Several different features stand out when assessing the USGS 2016 seismic hazard model for the central and eastern United States from induced and natural earthquakes. First, the model can be assessed as a forecast in one year, because event rates are sufficiently high to permit evaluation with one year of data. Second, because these models are projections from the previous year, thus implicitly assuming that fluid injection rates remain the same, misfit may reflect changes in human activity. Our results suggest that the model was very successful by the metric implicit in probabilistic seismic hazard assessment: namely, that the fraction of sites at which the maximum shaking exceeded the mapped value is comparable to that expected. The model also did well by a misfit metric that compares the spatial patterns of predicted and maximum observed shaking. This was true for both the central and eastern United States as a whole, and for the region within it with the highest amount of seismicity, Oklahoma and its surrounding area. The model performed least well in northern Texas, overstating hazard, presumably because lower oil and gas prices and regulatory action reduced the water injection volume relative to the previous year. These results imply that such hazard maps have the potential to be valuable tools for policy makers and regulators in managing the seismic risks associated with unconventional oil and gas production.
Bevilacqua, Antonio; Speranza, Barbara; Sinigaglia, Milena; Corbo, Maria Rosaria
2015-01-01
Predictive Microbiology (PM) deals with the mathematical modeling of microorganisms in foods for different applications (challenge test, evaluation of microbiological shelf life, prediction of the microbiological hazards connected with foods, etc.). An interesting and important part of PM focuses on the use of primary functions to fit data of death kinetics of spoilage, pathogenic, and useful microorganisms following thermal or non-conventional treatments and can also be used to model survivors throughout storage. The main topic of this review is a focus on the most important death models (negative Gompertz, log-linear, shoulder/tail, Weibull, Weibull+tail, re-parameterized Weibull, biphasic approach, etc.) to pinpoint the benefits and the limits of each model; in addition, the last section addresses the most important tools for the use of death kinetics and predictive microbiology in a user-friendly way. PMID:28231222
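As an example of the primary death models discussed, the Weibull (Mafart) model log10(N/N0) = -(t/delta)^p can be fit to survivor data in a few lines; the data points below are synthetic and purely illustrative.

    import numpy as np
    from scipy.optimize import curve_fit

    def weibull_log_survivors(t, delta, p):
        # Mafart parameterization: log10 survivor ratio as a function of time
        return -(t / delta) ** p

    t = np.array([1, 2, 4, 6, 8, 10, 15], dtype=float)                  # treatment time, minutes
    log_reduction = np.array([-0.4, -0.9, -1.8, -2.5, -3.1, -3.6, -4.5])  # log10(N/N0), synthetic

    (delta, p), _ = curve_fit(weibull_log_survivors, t, log_reduction, p0=[2.0, 1.0])
    print(f"delta = {delta:.2f} min (time to first log reduction), p = {p:.2f}")
    # p > 1 indicates a shoulder-like downward concavity, p < 1 a tailing curve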
O'Brien, Catherine; Blanchard, Laurie A; Cadarette, Bruce S; Endrusick, Thomas L; Xu, Xiaojiang; Berglund, Larry G; Sawka, Michael N; Hoyt, Reed W
2011-10-01
Personal protective equipment (PPE) refers to clothing and equipment designed to protect individuals from chemical, biological, radiological, nuclear, and explosive hazards. The materials used to provide this protection may exacerbate thermal strain by limiting heat and water vapor transfer. Any new PPE must therefore be evaluated to ensure that it poses no greater thermal strain than the current standard for the same level of hazard protection. This review describes how such evaluations are typically conducted. Comprehensive evaluation of PPE begins with a biophysical assessment of materials using a guarded hot plate to determine the thermal characteristics (thermal resistance and water vapor permeability). These characteristics are then evaluated on a thermal manikin wearing the PPE, since thermal properties may change once the materials have been constructed into a garment. These data may be used in biomedical models to predict thermal strain under a variety of environmental and work conditions. When the biophysical data indicate that the evaporative resistance (ratio of permeability to insulation) is significantly better than the current standard, the PPE is evaluated through human testing in controlled laboratory conditions appropriate for the conditions under which the PPE would be used if fielded. Data from each phase of PPE evaluation are used in predictive models to determine user guidelines, such as maximal work time, work/rest cycles, and fluid intake requirements. By considering thermal stress early in the development process, health hazards related to temperature extremes can be mitigated while maintaining or improving the effectiveness of the PPE for protection from external hazards.
Prior nonhip limb fracture predicts subsequent hip fracture in institutionalized elderly people.
Nakamura, K; Takahashi, S; Oyama, M; Oshiki, R; Kobayashi, R; Saito, T; Yoshizawa, Y; Tsuchiya, Y
2010-08-01
This 1-year cohort study of nursing home residents revealed that historical fractures of upper limbs or nonhip lower limbs were associated with hip fracture (hazard ratio = 2.14), independent of activities of daily living (ADL), mobility, dementia, weight, and type of nursing home. Prior nonhip fractures are useful for predicting hip fracture in institutional settings. The aim of this study was to evaluate the utility of fracture history for the prediction of hip fracture in nursing home residents. This was a cohort study with a 1-year follow-up. Subjects were 8,905 residents of nursing homes in Niigata, Japan (mean age, 84.3 years). Fracture histories were obtained from nursing home medical records. ADL levels were assessed by caregivers. Hip fracture diagnosis was based on hospital medical records. Subjects had fracture histories of upper limbs (5.0%), hip (14.0%), and nonhip lower limbs (4.6%). Among historical single fractures, only a prior fracture of the nonhip lower limbs significantly predicted subsequent fracture (adjusted hazard ratio, 2.43; 95% confidence interval (CI), 1.30-4.57). The stepwise method selected the best model, in which a combined historical fracture of the upper limbs or nonhip lower limbs (adjusted hazard ratio, 2.14; 95% CI, 1.30-3.52), dependence, ADL levels, mobility, dementia, weight, and type of nursing home independently predicted subsequent hip fracture. A fracture history at upper or nonhip lower limbs, in combination with other known risk factors, is useful for the prediction of future hip fracture in institutional settings.
Estimation of Wild Fire Risk Area based on Climate and Maximum Entropy in Korean Peninsular
NASA Astrophysics Data System (ADS)
Kim, T.; Lim, C. H.; Song, C.; Lee, W. K.
2015-12-01
The number of forest fires, and the accompanying human injuries and physical damage, has increased with frequent drought. In this study, forest fire danger zones in Korea are estimated to predict and prepare for future forest fire hazards. The MaxEnt (Maximum Entropy) model, which estimates a probability distribution from presence data, is used to delineate the forest fire hazard regions. The MaxEnt model is primarily used for the analysis of species distribution, but its applicability to various natural disasters is increasingly recognized. Detailed forest fire occurrence data collected by MODIS over the past 5 years (2010-2014) are used as occurrence data for the model. Meteorology, topography, and vegetation data are used as environmental variables. In particular, various meteorological variables are used to assess the impact of climate, such as annual average temperature, annual precipitation, dry-season precipitation, annual effective humidity, dry-season effective humidity, and aridity index. The result was valid based on the AUC (Area Under the Curve) value of 0.805, which is used to assess predictive accuracy in the MaxEnt model. Predicted forest fire locations also corresponded well with the actual forest fire distribution map. Meteorological variables such as effective humidity showed the greatest contribution, and topographic variables such as TWI (Topographic Wetness Index) and slope also contributed to forest fire occurrence. As a result, the east coast and the southern part of the Korean peninsula were predicted to have high forest fire risk. In contrast, high-altitude mountain areas and the west coast appeared to be at low risk. The results of this study are similar to those of former studies, indicating high forest fire risk in accessible areas and reflecting the dry-season climatic characteristics of the eastern and southern parts of the country. In summary, we estimated forest fire hazard zones from existing forest fire locations and environmental variables and obtained meaningful results reflecting both anthropogenic and natural effects. This approach is expected to predict future forest fire risk using future climate variables as the climate changes.
Khosravi, Khabat; Pham, Binh Thai; Chapi, Kamran; Shirzadi, Ataollah; Shahabi, Himan; Revhaug, Inge; Prakash, Indra; Tien Bui, Dieu
2018-06-15
Floods are among the most damaging natural hazards, causing huge losses of property, infrastructure and lives. Predicting the locations of flash flood occurrence is very difficult due to sudden changes in climatic conditions and man-made factors. However, prior identification of flood-susceptible areas can be done with the help of machine learning techniques for proper, timely management of flood hazards. In this study, we tested four decision-tree-based machine learning models, namely Logistic Model Trees (LMT), Reduced Error Pruning Trees (REPT), Naïve Bayes Trees (NBT), and Alternating Decision Trees (ADT), for flash flood susceptibility mapping at the Haraz Watershed in the northern part of Iran. For this, a spatial database was constructed with 201 present and past flood locations and eleven flood-influencing factors, namely ground slope, altitude, curvature, Stream Power Index (SPI), Topographic Wetness Index (TWI), land use, rainfall, river density, distance from river, lithology, and Normalized Difference Vegetation Index (NDVI). Statistical evaluation measures, the Receiver Operating Characteristic (ROC) curve, and Friedman and Wilcoxon signed-rank tests were used to validate and compare the prediction capability of the models. Results show that the ADT model has the highest prediction capability for flash flood susceptibility assessment, followed by the NBT, the LMT, and the REPT, respectively. These techniques have proven successful in quickly determining flood susceptible areas. Copyright © 2018 Elsevier B.V. All rights reserved.
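A general-workflow sketch of this kind of susceptibility modelling, with scikit-learn's DecisionTreeClassifier standing in for the specific tree variants evaluated (LMT, REPT, NBT, ADT) and synthetic conditioning factors in place of the real spatial database:

    import numpy as np
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(5)
    n = 402                                   # e.g., flood plus non-flood sample points
    X = pd.DataFrame({
        "slope": rng.uniform(0, 40, n),
        "altitude": rng.uniform(100, 3000, n),
        "twi": rng.normal(8, 2, n),
        "dist_to_river": rng.exponential(500, n),
        "rainfall": rng.normal(800, 150, n),
    })
    # Synthetic labels: flooding more likely on flat, wet, near-river cells
    p = 1 / (1 + np.exp(-(2 - 0.1 * X["slope"] + 0.3 * X["twi"] - 0.002 * X["dist_to_river"])))
    y = rng.binomial(1, p)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=5)
    clf = DecisionTreeClassifier(max_depth=5, random_state=5).fit(X_tr, y_tr)
    print("ROC AUC =", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))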
Lin, Lei; Wang, Qian; Sadek, Adel W
2016-06-01
The duration of freeway traffic accidents is an important factor, affecting traffic congestion, environmental pollution, and secondary accidents. Among previous studies, the M5P algorithm has been shown to be an effective tool for predicting incident duration. M5P builds a tree-based model, like the traditional classification and regression tree (CART) method, but with multiple linear regression models as its leaves. The problem with M5P for accident duration prediction, however, is that whereas linear regression assumes that the conditional distribution of accident durations is normally distributed, the distribution for a "time-to-an-event" is almost certainly nonsymmetrical. A hazard-based duration model (HBDM) is a better choice for this kind of "time-to-event" modeling scenario, and given this, HBDMs have been previously applied to analyze and predict traffic accident durations. Previous research, however, has not yet applied HBDMs for accident duration prediction in association with clustering or classification of the dataset to minimize data heterogeneity. The current paper proposes a novel approach for accident duration prediction, which improves on the original M5P tree algorithm through the construction of a M5P-HBDM model, in which the leaves of the M5P tree model are HBDMs instead of linear regression models. Such a model offers the advantage of minimizing data heterogeneity through dataset classification, and avoids the need for the incorrect assumption of normality for traffic accident durations. The proposed model was then tested on two freeway accident datasets. For each dataset, the first 500 records were used to train the following three models: (1) an M5P tree; (2) a HBDM; and (3) the proposed M5P-HBDM, and the remainder of the data were used for testing. The results show that the proposed M5P-HBDM managed to identify more significant and meaningful variables than either M5P or HBDMs. Moreover, the M5P-HBDM had the lowest overall mean absolute percentage error (MAPE). Copyright © 2016 Elsevier Ltd. All rights reserved.
Improving Gastric Cancer Outcome Prediction Using Single Time-Point Artificial Neural Network Models
Nilsaz-Dezfouli, Hamid; Abu-Bakar, Mohd Rizam; Arasan, Jayanthi; Adam, Mohd Bakri; Pourhoseingholi, Mohamad Amin
2017-01-01
In cancer studies, the prediction of cancer outcome based on a set of prognostic variables has been a long-standing topic of interest. Current statistical methods for survival analysis offer the possibility of modelling cancer survivability but require unrealistic assumptions about the survival time distribution or proportionality of hazard. Therefore, attention must be paid in developing nonlinear models with less restrictive assumptions. Artificial neural network (ANN) models are primarily useful in prediction when nonlinear approaches are required to sift through the plethora of available information. The applications of ANN models for prognostic and diagnostic classification in medicine have attracted a lot of interest. The applications of ANN models in modelling the survival of patients with gastric cancer have been discussed in some studies without completely considering the censored data. This study proposes an ANN model for predicting gastric cancer survivability, considering the censored data. Five separate single time-point ANN models were developed to predict the outcome of patients after 1, 2, 3, 4, and 5 years. The performance of ANN model in predicting the probabilities of death is consistently high for all time points according to the accuracy and the area under the receiver operating characteristic curve. PMID:28469384
NASA Astrophysics Data System (ADS)
Chapman, Martin Colby
1998-12-01
The design earthquake selection problem is fundamentally probabilistic. Disaggregation of a probabilistic model of the seismic hazard offers a rational and objective approach that can identify the most likely earthquake scenario(s) contributing to hazard. An ensemble of time series can be selected on the basis of the modal earthquakes derived from the disaggregation. This gives a useful time-domain realization of the seismic hazard, to the extent that a single motion parameter captures the important time-domain characteristics. A possible limitation to this approach arises because most currently available motion prediction models for peak ground motion or oscillator response are essentially independent of duration, and modal events derived using the peak motions for the analysis may not represent the optimal characterization of the hazard. The elastic input energy spectrum is an alternative to the elastic response spectrum for these types of analyses. The input energy combines the elements of amplitude and duration into a single parameter description of the ground motion that can be readily incorporated into standard probabilistic seismic hazard analysis methodology. This use of the elastic input energy spectrum is examined. Regression analysis is performed using strong motion data from Western North America and consistent data processing procedures for both the absolute input energy equivalent velocity (V_ea) and the elastic pseudo-relative velocity response (PSV) in the frequency range 0.5 to 10 Hz. The results show that the two parameters can be successfully fit with identical functional forms. The dependence of V_ea and PSV upon (NEHRP) site classification is virtually identical. The variance of V_ea is uniformly less than that of PSV, indicating that V_ea can be predicted with slightly less uncertainty as a function of magnitude, distance and site classification. The effects of site class are important at frequencies less than a few Hertz. The regression modeling does not resolve significant effects due to site class at frequencies greater than approximately 5 Hz. Disaggregation of general seismic hazard models using V_ea indicates that the modal magnitudes for the higher frequency oscillators tend to be larger, and vary less with oscillator frequency, than those derived using PSV. Insofar as the elastic input energy may be a better parameter for quantifying the damage potential of ground motion, its use in probabilistic seismic hazard analysis could provide an improved means for selecting earthquake scenarios and establishing design earthquakes for many types of engineering analyses.
Modeling population exposures to outdoor sources of hazardous air pollutants.
Ozkaynak, Halûk; Palma, Ted; Touma, Jawad S; Thurman, James
2008-01-01
Accurate assessment of human exposures is an important part of environmental health effects research. However, most air pollution epidemiology studies rely upon imperfect surrogates of personal exposures, such as information based on available central-site outdoor concentration monitoring or modeling data. In this paper, we examine the limitations of using outdoor concentration predictions instead of modeled personal exposures for over 30 gaseous and particulate hazardous air pollutants (HAPs) in the US. The analysis uses the results from an air quality dispersion model (the ASPEN or Assessment System for Population Exposure Nationwide model) and an inhalation exposure model (the HAPEM or Hazardous Air Pollutant Exposure Model, Version 5), applied by the U.S. Environmental Protection Agency during the 1999 National Air Toxics Assessment (NATA). Our results show that the total predicted chronic exposure concentrations of outdoor HAPs from all sources are lower than the modeled ambient concentrations by about 20% on average for most gaseous HAPs and by about 60% on average for most particulate HAPs (mainly due to the exclusion of indoor sources from our modeling analysis and lower infiltration of particles indoors). On the other hand, the HAPEM/ASPEN concentration ratio averages for onroad mobile source exposures were found to be greater than 1 (around 1.20) for most mobile-source related HAPs (e.g., 1,3-butadiene, acetaldehyde, benzene, formaldehyde), reflecting the importance of near-roadway and commuting environments on personal exposures to HAPs. The distribution of the ratios of personal to ambient concentrations was found to be skewed for a number of the VOCs and reactive HAPs associated with major source emissions, indicating the importance of personal mobility factors. We conclude that the increase in personal exposures from the corresponding predicted ambient levels tends to occur near locations where there are either major emission sources of HAPs or when individuals are exposed to either on- or nonroad sources of HAPs during their daily activities. These findings underscore the importance of applying exposure-modeling methods, which incorporate information on time-activity, commuting, and exposure factors data, for the purposes of assigning exposures in air pollution health studies.
Xu, Haoming; Moni, Mohammad Ali; Liò, Pietro
2015-12-01
In cancer genomics, gene expression levels provide important molecular signatures for all types of cancer, and this could be very useful for predicting the survival of cancer patients. However, the main challenge of gene expression data analysis is high dimensionality, and microarray is characterised by few number of samples with large number of genes. To overcome this problem, a variety of penalised Cox proportional hazard models have been proposed. We introduce a novel network regularised Cox proportional hazard model and a novel multiplex network model to measure the disease comorbidities and to predict survival of the cancer patient. Our methods are applied to analyse seven microarray cancer gene expression datasets: breast cancer, ovarian cancer, lung cancer, liver cancer, renal cancer and osteosarcoma. Firstly, we applied a principal component analysis to reduce the dimensionality of original gene expression data. Secondly, we applied a network regularised Cox regression model on the reduced gene expression datasets. By using normalised mutual information method and multiplex network model, we predict the comorbidities for the liver cancer based on the integration of diverse set of omics and clinical data, and we find the diseasome associations (disease-gene association) among different cancers based on the identified common significant genes. Finally, we evaluated the precision of the approach with respect to the accuracy of survival prediction using ROC curves. We report that colon cancer, liver cancer and renal cancer share the CXCL5 gene, and breast cancer, ovarian cancer and renal cancer share the CCND2 gene. Our methods are useful to predict survival of the patient and disease comorbidities more accurately and helpful for improvement of the care of patients with comorbidity. Software in Matlab and R is available on our GitHub page: https://github.com/ssnhcom/NetworkRegularisedCox.git. Copyright © 2015. Published by Elsevier Ltd.
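A minimal sketch of the first two steps described (dimensionality reduction followed by a penalized Cox fit), with lifelines' ridge penalty standing in for the paper's network regularisation and synthetic placeholder data:

    import numpy as np
    import pandas as pd
    from sklearn.decomposition import PCA
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(6)
    n_samples, n_genes = 120, 5000
    expr = rng.normal(size=(n_samples, n_genes))             # microarray-like expression matrix

    components = PCA(n_components=10, random_state=6).fit_transform(expr)
    df = pd.DataFrame(components, columns=[f"pc{i}" for i in range(10)])
    df["time"] = rng.exponential(scale=24, size=n_samples)   # follow-up, months
    df["event"] = rng.binomial(1, 0.6, size=n_samples)

    cph = CoxPHFitter(penalizer=0.1)                         # ridge-style stand-in for network regularisation
    cph.fit(df, duration_col="time", event_col="event")
    print("c-index =", round(cph.concordance_index_, 3))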
NASA Technical Reports Server (NTRS)
Tai, H.; Wilson, J. W.; Maiden, D. L.
2003-01-01
The atmospheric ionizing radiation (AIR) ER-2 preflight analysis, one of the first attempts to obtain a relatively complete measurement set of the high-altitude radiation level environment, is described in this paper. The primary thrust is to characterize the atmospheric radiation and to define dose levels at high-altitude flight. A secondary thrust is to develop and validate dosimetric techniques and monitoring devices for protecting aircrews. With a few chosen routes, we can measure the experimental results and validate the AIR model predictions. Eventually, as more measurements are made, we gain more understanding about the hazardous radiation environment and acquire more confidence in the prediction models.
A New Seismic Hazard Model for Mainland China
NASA Astrophysics Data System (ADS)
Rong, Y.; Xu, X.; Chen, G.; Cheng, J.; Magistrale, H.; Shen, Z. K.
2017-12-01
We are developing a new seismic hazard model for Mainland China by integrating historical earthquake catalogs, geological faults, geodetic GPS data, and geology maps. To build the model, we construct an Mw-based homogeneous historical earthquake catalog spanning from 780 B.C. to present, create fault models from active fault data, and derive a strain rate model based on the most complete GPS measurements and a new strain derivation algorithm. We divide China and the surrounding regions into about 20 large seismic source zones. For each zone, a tapered Gutenberg-Richter (TGR) magnitude-frequency distribution is used to model the seismic activity rates. The a- and b-values of the TGR distribution are calculated using observed earthquake data, while the corner magnitude is constrained independently using the seismic moment rate inferred from the geodetically-based strain rate model. Small and medium sized earthquakes are distributed within the source zones following the location and magnitude patterns of historical earthquakes. Some of the larger earthquakes are distributed onto active faults, based on their geological characteristics such as slip rate, fault length, down-dip width, and various paleoseismic data. The remaining larger earthquakes are then placed into the background. A new set of magnitude-rupture scaling relationships is developed based on earthquake data from China and vicinity. We evaluate and select appropriate ground motion prediction equations by comparing them with observed ground motion data and performing residual analysis. To implement the modeling workflow, we develop a tool that builds upon the functionalities of GEM's Hazard Modeler's Toolkit. The GEM OpenQuake software is used to calculate seismic hazard at various ground motion periods and various return periods. To account for site amplification, we construct a site condition map based on geology. The resulting new seismic hazard maps can be used for seismic risk analysis and management.
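The tapered Gutenberg-Richter (TGR) rate model mentioned above can be sketched directly from its survivor function in seismic moment space; the zone parameters below are illustrative only, and in the workflow described the corner magnitude would be constrained by the geodetic moment rate rather than chosen by hand.

    import numpy as np

    def moment(mw):
        """Seismic moment (N*m) from moment magnitude."""
        return 10.0 ** (1.5 * mw + 9.05)

    def tgr_annual_rate(mw, rate_mt, mw_t, mw_corner, beta):
        """Annual rate of events with magnitude >= mw under a TGR distribution."""
        m, m_t, m_c = moment(mw), moment(mw_t), moment(mw_corner)
        return rate_mt * (m_t / m) ** beta * np.exp((m_t - m) / m_c)

    # Illustrative zone: 5 events/yr above Mw 5, beta ~ 2/3 (b ~ 1), corner magnitude 8.0
    for mw in (5.0, 6.0, 7.0, 8.0, 8.5):
        print(mw, tgr_annual_rate(mw, rate_mt=5.0, mw_t=5.0, mw_corner=8.0, beta=2.0 / 3.0))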
Correlates of AUDIT risk status for male and female college students.
Demartini, Kelly S; Carey, Kate B
2009-01-01
The current study identified gender-specific correlates of hazardous drinker status as defined by the AUDIT. A total of 462 college student volunteers completed the study in 2006. The sample was predominantly Caucasian (75%) and female (55%). Participants completed a survey assessing demographics, alcohol use patterns, and health indices. Scores of 8 or more on the AUDIT defined the at-risk subsample. Logistic regression models determined which variables predicted AUDIT risk status for men and women. The at-risk participants reported higher alcohol use and related problems, elevated sleep problems and lower health ratings. High typical blood alcohol concentration (BAC), lifetime drug use, and psychosocial problems predicted risk status for males. Binge frequency and psychosocial problems predicted risk status for females. Different behavioral profiles emerged for men and women identified as hazardous drinkers on the AUDIT. The efficacy of brief alcohol interventions could be enhanced by addressing these behavioral correlates.
Gartner, Joseph E.; Cannon, Susan H.; Santi, Paul M
2014-01-01
Debris flows and sediment-laden floods in the Transverse Ranges of southern California pose severe hazards to nearby communities and infrastructure. Frequent wildfires denude hillslopes and increase the likelihood of these hazardous events. Debris-retention basins protect communities and infrastructure from the impacts of debris flows and sediment-laden floods and also provide critical data for volumes of sediment deposited at watershed outlets. In this study, we supplement existing data for the volumes of sediment deposited at watershed outlets with newly acquired data to develop new empirical models for predicting volumes of sediment produced by watersheds located in the Transverse Ranges of southern California. The sediment volume data represent a broad sample of conditions found in Ventura, Los Angeles and San Bernardino Counties, California. The measured volumes of sediment, watershed morphology, distributions of burn severity within each watershed, the time since the most recent fire, triggering storm rainfall conditions, and engineering soil properties were analyzed using multiple linear regressions to develop two models. A “long-term model” was developed for predicting volumes of sediment deposited by both debris flows and floods at various times since the most recent fire from a database of volumes of sediment deposited by a combination of debris flows and sediment-laden floods with no time limit since the most recent fire (n = 344). A subset of this database was used to develop an “emergency assessment model” for predicting volumes of sediment deposited by debris flows within two years of a fire (n = 92). Prior to developing the models, 32 volumes of sediment, and related parameters for watershed morphology, burn severity and rainfall conditions were retained to independently validate the long-term model. Ten of these volumes of sediment were deposited by debris flows within two years of a fire and were used to validate the emergency assessment model. The models were validated by comparing predicted and measured volumes of sediment. These validations were also performed for previously developed models and identify that the models developed here best predict volumes of sediment for burned watersheds in comparison to previously developed models.
Multivariate Models for Prediction of Human Skin Sensitization Hazard.
One of the Interagency Coordinating Committee on the Validation of Alternative Methods' (ICCVAM) top priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary to produce skin sensiti...
Evaluation of methods for predicting rail-highway crossing hazards.
DOT National Transportation Integrated Search
1986-01-01
The need for improvement at a rail/highway crossing typically is based on the Expected Accident Rate (EAR) in conjunction with other criteria carrying lesser weight. In recent years new models for assessing the need for improvements have been develop...
Scrutinio, Domenico; Lanzillo, Bernardo; Guida, Pietro; Mastropasqua, Filippo; Monitillo, Vincenzo; Pusineri, Monica; Formica, Roberto; Russo, Giovanna; Guarnaschelli, Caterina; Ferretti, Chiara; Calabrese, Gianluigi
2017-12-01
Prediction of outcome after stroke rehabilitation may help clinicians in decision-making and planning rehabilitation care. We developed and validated a predictive tool to estimate the probability of achieving improvement in physical functioning (model 1) and a level of independence requiring no more than supervision (model 2) after stroke rehabilitation. The models were derived from 717 patients admitted for stroke rehabilitation. We used multivariable logistic regression analysis to build each model. Then, each model was prospectively validated in 875 patients. Model 1 included age, time from stroke occurrence to rehabilitation admission, admission motor and cognitive Functional Independence Measure scores, and neglect. Model 2 included age, male gender, time since stroke onset, and admission motor and cognitive Functional Independence Measure scores. Both models demonstrated excellent discrimination. In the derivation cohort, the area under the curve was 0.883 (95% confidence intervals, 0.858-0.910) for model 1 and 0.913 (95% confidence intervals, 0.884-0.942) for model 2. The Hosmer-Lemeshow χ² was 4.12 (P=0.249) and 1.20 (P=0.754), respectively. In the validation cohort, the area under the curve was 0.866 (95% confidence intervals, 0.840-0.892) for model 1 and 0.850 (95% confidence intervals, 0.815-0.885) for model 2. The Hosmer-Lemeshow χ² was 8.86 (P=0.115) and 34.50 (P=0.001), respectively. Both improvement in physical functioning (hazard ratio, 0.43; 0.25-0.71; P=0.001) and a level of independence requiring no more than supervision (hazard ratio, 0.32; 0.14-0.68; P=0.004) were independently associated with improved 4-year survival. A calculator is freely available for download at https://goo.gl/fEAp81. This study provides researchers and clinicians with an easy-to-use, accurate, and validated predictive tool for potential application in rehabilitation research and stroke management. © 2017 American Heart Association, Inc.
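A minimal sketch of how a multivariable logistic model of this kind turns admission variables into a probability, and how discrimination is summarized by the area under the ROC curve; the admission variables, coefficients and data below are invented for illustration and are not the published models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 717
# Hypothetical admission variables: age, onset-to-admission time, motor FIM, cognitive FIM
X = np.column_stack([
    rng.normal(70, 10, n),    # age (years)
    rng.uniform(5, 60, n),    # onset-to-admission time (days)
    rng.uniform(13, 91, n),   # motor Functional Independence Measure score
    rng.uniform(5, 35, n),    # cognitive Functional Independence Measure score
])
# Synthetic outcome loosely favoring younger patients with higher FIM scores
logit = -2.0 - 0.03 * (X[:, 0] - 70) + 0.04 * (X[:, 2] - 50) + 0.05 * (X[:, 3] - 20)
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = LogisticRegression(max_iter=1000).fit(X, y)
prob = model.predict_proba(X)[:, 1]             # predicted probability of a good outcome
print("apparent AUC:", roc_auc_score(y, prob))  # discrimination in the derivation data
```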
Boorjian, Stephen
2014-08-01
Although the kidney is a primary organ for vitamin D metabolism, the association between vitamin D and renal cell cancer (RCC) remains unclear. We prospectively evaluated the association between predicted plasma 25-hydroxyvitamin D [25(OH)D] and RCC risk among 72,051 women and 46,380 men in the period from 1986 to 2008. Predicted plasma 25(OH)D scores were computed using validated regression models that included major determinants of vitamin D status (race, ultraviolet B flux, physical activity, body mass index, estimated vitamin D intake, alcohol consumption, and postmenopausal hormone use in women). Hazard ratios (HRs) and 95% confidence intervals (CIs) were calculated using Cox proportional hazards models. All statistical tests were two-sided. During 22 years of follow-up, we documented 201 cases of incident RCC in women and 207 cases in men. The multivariable hazard ratios between extreme quintiles of predicted 25(OH)D score were 0.50 (95% CI = 0.32 to 0.80) in women, 0.59 (95% CI = 0.37 to 0.94) in men, and 0.54 (95% CI = 0.39 to 0.75; P trend<.001) in the pooled cohorts. An increment of 10 ng/mL in predicted 25(OH)D score was associated with a 44% lower incidence of RCC (pooled HR = 0.56, 95% CI = 0.42 to 0.74). We found no statistically significant association between vitamin D intake estimated from food-frequency questionnaires and RCC incidence. Higher predicted plasma 25(OH)D levels were associated with a statistically significantly lower risk of RCC in men and women. Our findings need to be confirmed by other prospective studies using valid markers of long-term vitamin D status. Copyright © 2014 Elsevier Inc. All rights reserved.
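A minimal sketch of estimating a hazard ratio per 10-unit increment of a continuous score with a Cox proportional hazards model, here using the lifelines package on simulated data; the variable name, effect size and censoring time are illustrative, not the study's data.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 2000
score = rng.normal(25, 5, n)          # hypothetical predicted 25(OH)D score (ng/mL)
baseline_hazard = 0.01
# Simulate event times so a 10 ng/mL higher score roughly halves the hazard
rate = baseline_hazard * np.exp(-0.06 * (score - 25))
time = rng.exponential(1 / rate)
observed = time < 22.0                # administrative censoring at 22 years of follow-up
df = pd.DataFrame({
    "score": score,
    "time": np.minimum(time, 22.0),
    "event": observed.astype(int),
})

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
cph.print_summary()
# Hazard ratio per 10 ng/mL increment of the score
print(np.exp(10 * cph.params_["score"]))
```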
Long aftershock sequences within continents and implications for earthquake hazard assessment.
Stein, Seth; Liu, Mian
2009-11-05
One of the most powerful features of plate tectonics is that the known plate motions give insight into both the locations and average recurrence interval of future large earthquakes on plate boundaries. Plate tectonics gives no insight, however, into where and when earthquakes will occur within plates, because the interiors of ideal plates should not deform. As a result, within plate interiors, assessments of earthquake hazards rely heavily on the assumption that the locations of small earthquakes shown by the short historical record reflect continuing deformation that will cause future large earthquakes. Here, however, we show that many of these recent earthquakes are probably aftershocks of large earthquakes that occurred hundreds of years ago. We present a simple model predicting that the length of aftershock sequences varies inversely with the rate at which faults are loaded. Aftershock sequences within the slowly deforming continents are predicted to be significantly longer than the decade typically observed at rapidly loaded plate boundaries. These predictions are in accord with observations. So the common practice of treating continental earthquakes as steady-state seismicity overestimates the hazard in presently active areas and underestimates it elsewhere.
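A minimal sketch of the scaling argument, assuming aftershock sequence duration varies inversely with the fault loading (stressing) rate; the loading rates and the decade-long plate-boundary duration below are illustrative round numbers, not values from the paper.

```python
# Aftershock duration scaling: duration ~ 1 / loading rate (illustrative values only)
plate_boundary_loading_mm_yr = 40.0     # fast plate-boundary fault
intraplate_loading_mm_yr = 0.4          # slowly deforming continental interior
plate_boundary_aftershock_years = 10.0  # typical decade-long sequence at a plate boundary

intraplate_aftershock_years = (
    plate_boundary_aftershock_years
    * plate_boundary_loading_mm_yr / intraplate_loading_mm_yr
)
print(f"predicted intraplate aftershock duration: ~{intraplate_aftershock_years:.0f} years")
# -> on the order of a thousand years, i.e. centuries-long sequences within continents
```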
Evaluating the influence of gully erosion on landslide hazard analysis triggered by heavy rainfall
NASA Astrophysics Data System (ADS)
Ruljigaljig, Tjuku; Tsai, Ching-Jun; Peng, Wen-Fei; Yu, Teng-To
2017-04-01
During rainstorm periods, such as typhoons or heavy rain, the development of gullies can induce large-scale landslides. The purpose of this study is to assess and quantify the existence and development of gullies, and their role in triggering landslides, through landslide hazard analysis. First, based on multi-scale DEM data, this study uses the wavelet transform to construct an automatic detection algorithm. The 1-meter DEM is used to evaluate the location and type of gullies, and to establish an evaluation model for predicting erosion development. In this study, routes in the Chai-Yi area were studied to clarify the damage potential to roadways from local gullies. The location of gullies is treated as a parameter that reduces the strength parameters. The distribution of the factor of safety (F.S.) is compared with the landslide inventory map. The results of this research could be used to increase the prediction accuracy of landslide hazard analysis for heavy rainfall.
Gartner, J.E.; Cannon, S.H.; Santi, P.M.; deWolfe, V.G.
2008-01-01
Recently burned basins frequently produce debris flows in response to moderate-to-severe rainfall. Post-fire hazard assessments of debris flows are most useful when they predict the volume of material that may flow out of a burned basin. This study develops a set of empirically-based models that predict potential volumes of wildfire-related debris flows in different regions and geologic settings. The models were developed using data from 53 recently burned basins in Colorado, Utah and California. The volumes of debris flows in these basins were determined by either measuring the volume of material eroded from the channels, or by estimating the amount of material removed from debris retention basins. For each basin, independent variables thought to affect the volume of the debris flow were determined. These variables include measures of basin morphology, basin areas burned at different severities, soil material properties, rock type, and rainfall amounts and intensities for storms triggering debris flows. Using these data, multiple regression analyses were used to create separate predictive models for volumes of debris flows generated by burned basins in six separate regions or settings, including the western U.S., southern California, the Rocky Mountain region, and basins underlain by sedimentary, metamorphic and granitic rocks. An evaluation of these models indicated that the best model (the Western U.S. model) explains 83% of the variability in the volumes of the debris flows, and includes variables that describe the basin area with slopes greater than or equal to 30%, the basin area burned at moderate and high severity, and total storm rainfall. This model was independently validated by comparing volumes of debris flows reported in the literature, to volumes estimated using the model. Eighty-seven percent of the reported volumes were within two residual standard errors of the volumes predicted using the model. This model is an improvement over previous models in that it includes a measure of burn severity and an estimate of modeling errors. The application of this model, in conjunction with models for the probability of debris flows, will enable more complete and rapid assessments of debris flow hazards following wildfire.
An early warning system for marine storm hazard mitigation
NASA Astrophysics Data System (ADS)
Vousdoukas, M. I.; Almeida, L. P.; Pacheco, A.; Ferreira, O.
2012-04-01
This contribution presents efforts towards the development of an operational Early Warning System for storm hazard prediction and mitigation. The system consists of a nested model train of specially calibrated Wave Watch III, SWAN and XBeach models. The numerical simulations provide daily forecasts of the hydrodynamic conditions, morphological change and overtopping risk at the area of interest. The model predictions are processed by a 'translation' module which is based on site-specific Storm Impact Indicators (SIIs) (Ciavola et al., 2011, Storm impacts along European coastlines. Part 2: lessons learned from the MICORE project, Environmental Science & Policy, Vol 14), and warnings are issued when pre-defined threshold values are exceeded. For the present site the selected SIIs were (i) the maximum wave run-up height during the simulations; and (ii) the dune-foot horizontal retreat at the end of the simulations. Both SIIs and pre-defined thresholds were carefully selected on the grounds of existing experience and field data. Four risk levels were considered, each associated with an intervention approach recommended to the responsible coastal protection authority. Regular updating of the topography/bathymetry is critical for the performance of the storm impact forecasting, especially when there are significant morphological changes. The system can be extended to other critical problems, like implications of global warming and adaptive management strategies, while the approach presently followed, from model calibration to the early warning system for storm hazard mitigation, can be applied to other sites worldwide, with minor adaptations.
USDA-ARS?s Scientific Manuscript database
Flash floods are an important component of the semi-arid hydrological cycle, and provide the potential for groundwater recharge as well as posing a dangerous natural hazard. A number of catchment models have been applied to flash flood prediction; however, in general they perform poorly. This study ...
2013-03-08
Liu, Jiechao; Jayakumar, Paramsothy; Overholt, James; Stein, Jeffrey; Ersal, Tulga
NASA Astrophysics Data System (ADS)
Tao, J.; Barros, A. P.
2013-07-01
Debris flows associated with rainstorms are a frequent and devastating hazard in the Southern Appalachians in the United States. Whereas warm-season events are clearly associated with heavy rainfall intensity, the same cannot be said for cold-season events. Instead, debris flows are related to large (cumulative) rainfall events independently of season, and thus of hydrometeorological regime. This suggests that the dynamics of subsurface hydrologic processes play an important role as a trigger mechanism, specifically through soil moisture redistribution by interflow. The first objective of this study is to investigate this hypothesis. The second objective is to assess the physical basis for a regional coupled flood prediction and debris flow warning system. For this purpose, uncalibrated model simulations of well-documented debris flows in headwater catchments of the Southern Appalachians using a 3-D surface-groundwater hydrologic model coupled with slope stability models are examined in detail. Specifically, we focus on two vulnerable headwater catchments that experience frequent debris flows, the Big Creek and the Jonathan Creek in the Upper Pigeon River Basin, North Carolina, and three distinct weather systems: an extremely heavy summertime convective storm in 2011; a persistent winter storm lasting several days; and a severe winter storm in 2009. These events were selected due to the availability of rainfall observations, the availability of detailed field surveys of the landslides shortly after they occurred, which can be used to evaluate model predictions, and because they are representative of events that cause major economic losses in the region. The model results substantiate that interflow is a useful prognostic of conditions necessary for the initiation of slope instability, and should therefore be considered explicitly in landslide hazard assessments. Moreover, the relationships between slope stability and interflow are strongly modulated by the topography and catchment-specific geomorphologic features that determine subsurface flow convergence zones. The three case studies demonstrate the value of coupled prediction of flood response and debris flow initiation potential in the context of developing a regional hazard warning system.
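The abstract pairs a hydrologic model with slope stability models; a common form of the stability side is the infinite-slope factor of safety with pore pressure set by the saturated fraction of the soil column (for example, as driven by interflow convergence). The sketch below uses that textbook formulation with assumed soil parameters; it is not the authors' model.

```python
import numpy as np

def infinite_slope_fs(slope_deg, wetness, cohesion=5e3, phi_deg=30.0,
                      soil_depth=1.5, gamma_soil=18e3, gamma_water=9.81e3):
    """Factor of safety for an infinite slope with partial saturation.

    wetness : fraction of the soil column that is saturated (0-1), e.g. set by
              interflow convergence in the hydrologic model
    cohesion in Pa, unit weights in N/m^3, depth in m (assumed illustrative values).
    """
    slope = np.radians(slope_deg)
    phi = np.radians(phi_deg)
    normal = gamma_soil * soil_depth * np.cos(slope) ** 2
    pore = gamma_water * wetness * soil_depth * np.cos(slope) ** 2
    resisting = cohesion + (normal - pore) * np.tan(phi)
    driving = gamma_soil * soil_depth * np.sin(slope) * np.cos(slope)
    return resisting / driving

# A steep hollow becomes unstable (FS < 1) as interflow saturates the soil column
for w in (0.2, 0.6, 1.0):
    print(w, round(float(infinite_slope_fs(38.0, w)), 2))
```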
Earthquake Forecasting System in Italy
NASA Astrophysics Data System (ADS)
Falcone, G.; Marzocchi, W.; Murru, M.; Taroni, M.; Faenza, L.
2017-12-01
In Italy, after the 2009 L'Aquila earthquake, a procedure was developed for gathering and disseminating authoritative information about the time dependence of seismic hazard to help communities prepare for a potentially destructive earthquake. The most striking time dependency of the earthquake occurrence process is time clustering, which is particularly pronounced in time windows of days and weeks. The Operational Earthquake Forecasting (OEF) system developed at the Seismic Hazard Center (Centro di Pericolosità Sismica, CPS) of the Istituto Nazionale di Geofisica e Vulcanologia (INGV) is the authoritative source of seismic hazard information for Italian Civil Protection. The philosophy of the system rests on a few basic concepts: transparency, reproducibility, and testability. In particular, the transparent, reproducible, and testable earthquake forecasting system developed at CPS is based on ensemble modeling and on a rigorous testing phase. This phase is carried out according to the guidance proposed by the Collaboratory for the Study of Earthquake Predictability (CSEP, an international infrastructure aimed at quantitatively evaluating earthquake prediction and forecast models through purely prospective and reproducible experiments). In the OEF system, the two most popular short-term models were used: the Epidemic-Type Aftershock Sequence (ETAS) model and the Short-Term Earthquake Probabilities (STEP) model. Here, we report the results of OEF's 24-hour earthquake forecasts during the main phases of the 2016-2017 sequence that occurred in the Central Apennines (Italy).
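A minimal sketch of the temporal ETAS conditional intensity, one of the two short-term model classes named above; the parameter values and the toy two-event catalog are illustrative, not those of the operational system.

```python
import numpy as np

def etas_rate(t, event_times, event_mags, mu=0.2, K=0.02, alpha=1.0, c=0.01, p=1.1, m0=3.0):
    """Temporal ETAS conditional intensity (events/day) at time t (days),
    given past events; background rate mu plus Omori-type triggered rates.
    Parameter values are illustrative, not those of the OEF system."""
    past = event_times < t
    triggered = K * np.exp(alpha * (event_mags[past] - m0)) \
        / (t - event_times[past] + c) ** p
    return mu + triggered.sum()

# A magnitude 6.0 mainshock at t = 0 followed by a magnitude 5.4 event one day later
times = np.array([0.0, 1.0])
mags = np.array([6.0, 5.4])
for t in (0.5, 1.5, 7.0, 30.0):
    print(f"t = {t:5.1f} d  rate = {etas_rate(t, times, mags):.2f} events/day")
```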
Flood hydrology and dam-breach hydraulic analyses of five reservoirs in Colorado
Stevens, Michael R.; Hoogestraat, Galen K.
2013-01-01
The U.S. Department of Agriculture Forest Service has identified hazard concerns for areas downstream from five Colorado dams on Forest Service land. In 2009, the U.S. Geological Survey, in cooperation with the Forest Service, initiated a flood hydrology analysis to estimate the areal extent of potential downstream flood inundation and hazard to downstream life, property, and infrastructure if dam breach occurs. Readily available information was used for dam-breach assessments of five small Colorado reservoirs (Balman Reservoir, Crystal Lake, Manitou Park Lake, McGinnis Lake, and Million Reservoir) that are impounded by an earthen dam, and no new data were collected for hydraulic modeling. For each reservoir, two dam-breach scenarios were modeled: (1) the dam is overtopped but does not fail (break), and (2) the dam is overtopped and dam-break occurs. The dam-breach scenarios were modeled in response to the 100-year recurrence, 500-year recurrence, and the probable maximum precipitation, 24-hour duration rainstorms to predict downstream flooding. For each dam-breach and storm scenario, a flood inundation map was constructed to estimate the extent of flooding in areas of concern downstream from each dam. Simulation results of the dam-break scenarios were used to determine the hazard classification of the dam structure (high, significant, or low), which is primarily based on the potential for loss of life and property damage resulting from the predicted downstream flooding.
NASA Astrophysics Data System (ADS)
Dalguer, Luis A.; Fukushima, Yoshimitsu; Irikura, Kojiro; Wu, Changjiang
2017-09-01
Inspired by the first workshop on Best Practices in Physics-Based Fault Rupture Models for Seismic Hazard Assessment of Nuclear Installations (BestPSHANI) conducted by the International Atomic Energy Agency (IAEA) on 18-20 November, 2015 in Vienna (http://www-pub.iaea.org/iaeameetings/50896/BestPSHANI), this PAGEOPH topical volume collects several extended articles from this workshop as well as several new contributions. A total of 17 papers have been selected on topics ranging from the seismological aspects of earthquake cycle simulations for source-scaling evaluation, seismic source characterization, source inversion and ground motion modeling (based on finite fault rupture using dynamic, kinematic, stochastic and empirical Green's functions approaches) to the engineering application of simulated ground motion for the analysis of seismic response of structures. These contributions include applications to real earthquakes and descriptions of current practice to assess seismic hazard in terms of nuclear safety in low seismicity areas, as well as proposals for physics-based hazard assessment for critical structures near large earthquakes. Collectively, the papers of this volume highlight the usefulness of physics-based models to evaluate and understand the physical causes of observed and empirical data, as well as to predict ground motion beyond the range of recorded data. Particular importance is given to the validation and verification of the models by comparing synthetic results with observed data and empirical models.
Binquet, C; Abrahamowicz, M; Mahboubi, A; Jooste, V; Faivre, J; Bonithon-Kopp, C; Quantin, C
2008-12-30
Flexible survival models, which avoid assumptions about hazard proportionality (PH) or linearity of continuous covariate effects, bring the issues of model selection to a new level of complexity. Each 'candidate covariate' requires inter-dependent decisions regarding (i) its inclusion in the model, and representation of its effects on the log hazard as (ii) either constant over time or time-dependent (TD) and, for continuous covariates, (iii) either loglinear or non-loglinear (NL). Moreover, 'optimal' decisions for one covariate depend on the decisions regarding others. Thus, some efficient model-building strategy is necessary. We carried out an empirical study of the impact of the model selection strategy on the estimates obtained in flexible multivariable survival analyses of prognostic factors for mortality in 273 gastric cancer patients. We used 10 different strategies to select alternative multivariable parametric as well as spline-based models, allowing flexible modeling of non-parametric (TD and/or NL) effects. We employed 5-fold cross-validation to compare the predictive ability of alternative models. All flexible models indicated significant non-linearity and changes over time in the effect of age at diagnosis. Conventional 'parametric' models suggested the lack of a period effect, whereas more flexible strategies indicated a significant NL effect. Cross-validation confirmed that flexible models better predicted mortality. The resulting differences in the 'final model' selected by various strategies also had an impact on the risk prediction for individual subjects. Overall, our analyses underline (a) the importance of accounting for significant non-parametric effects of covariates and (b) the need for developing accurate model selection strategies for flexible survival analyses. Copyright 2008 John Wiley & Sons, Ltd.
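A minimal sketch of the kind of comparison described here: a Cox model with a loglinear age effect versus one with a crude nonlinear (quadratic) stand-in for a spline term, compared by cross-validated concordance on simulated data. This is not the authors' spline-based formulation; the data, sample size and effect shape are invented.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(3)
n = 273
age = rng.uniform(40, 90, n)
# Simulate a U-shaped (non-loglinear) effect of age on the log hazard
log_hr = 0.002 * (age - 65) ** 2
time = rng.exponential(1 / (0.05 * np.exp(log_hr)))
df = pd.DataFrame({"age": age, "age_sq": (age - 65) ** 2,
                   "time": np.minimum(time, 10.0),
                   "event": (time < 10.0).astype(int)})

def cv_concordance(cols, k=5):
    """5-fold cross-validated concordance index for a Cox model using `cols`."""
    folds = np.array_split(rng.permutation(n), k)
    scores = []
    for test_idx in folds:
        test = df.iloc[test_idx]
        train = df.drop(df.index[test_idx])
        cph = CoxPHFitter().fit(train[cols + ["time", "event"]], "time", "event")
        risk = cph.predict_partial_hazard(test[cols])
        # higher partial hazard should pair with shorter survival, hence the minus sign
        scores.append(concordance_index(test["time"], -risk, test["event"]))
    return np.mean(scores)

print("loglinear age only:   ", round(cv_concordance(["age"]), 3))
print("+ quadratic age term: ", round(cv_concordance(["age", "age_sq"]), 3))
```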
Prospective Changes in Alcohol Use Among Hazardous Drinkers in the Absence of Treatment
Dearing, Ronda L.; Witkiewitz, Katie; Connors, Gerard J.; Walitzer, Kimberly S.
2012-01-01
Gaining a better understanding of the natural course of hazardous alcohol consumption could inform the development of brief interventions to encourage self-change. In the current study, hazardous drinkers (based on Alcohol Use Disorders Identification Test score) were recruited using advertisements to participate in a 2-year multi-wave prospective study. Participants (N = 206) provided self-reports every six months during the study, including reports of daily alcohol consumption. The current investigation focuses on self-initiated change in participants’ frequency of heavy drinking days (i.e., ≥ 5/4 drinks per day for men/women), as predicted by a number of demographic (e.g., age) and psychosocial (e.g., guilt-proneness) variables. Latent growth curve models of the change in percent heavy drinking days over the 2-year period provided an excellent fit to the observed data and indicated a significant decline in percent heavy drinking days over time. Reductions in heavy drinking frequency were predicted by younger age and higher guilt-proneness. The identification of these predictors of reductions in heavy drinking frequency provides information to guide future work investigating self-change among hazardous drinkers. PMID:22612252
Prospective changes in alcohol use among hazardous drinkers in the absence of treatment.
Dearing, Ronda L; Witkiewitz, Katie; Connors, Gerard J; Walitzer, Kimberly S
2013-03-01
Gaining a better understanding of the natural course of hazardous alcohol consumption could inform the development of brief interventions to encourage self-change. In the current study, hazardous drinkers (based on Alcohol Use Disorders Identification Test score) were recruited using advertisements to participate in a 2-year multiwave prospective study. Participants (n = 206) provided self-reports every six months during the study, including reports of daily alcohol consumption. The current investigation focuses on self-initiated change in participants' frequency of heavy drinking days (i.e., ≥ 5/4 drinks per day for men/women), as predicted by a number of demographic (e.g., age) and psychosocial (e.g., guilt-proneness) variables. Latent growth curve models of the change in percent heavy drinking days over the 2-year period provided an excellent fit to the observed data and indicated a significant decline in percent heavy drinking days over time. Reductions in heavy drinking frequency were predicted by younger age and higher guilt-proneness. The identification of these predictors of reductions in heavy drinking frequency provides information to guide future work investigating self-change among hazardous drinkers. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
A Bayesian-based system to assess wave-driven flooding hazards on coral reef-lined coasts
Pearson, S. G.; Storlazzi, Curt; van Dongeren, A. R.; Tissier, M. F. S.; Reniers, A. J. H. M.
2017-01-01
Many low-elevation, coral reef-lined, tropical coasts are vulnerable to the effects of climate change, sea level rise, and wave-induced flooding. The considerable morphological diversity of these coasts and the variability of the hydrodynamic forcing that they are exposed to make predicting wave-induced flooding a challenge. A process-based wave-resolving hydrodynamic model (XBeach Non-Hydrostatic, “XBNH”) was used to create a large synthetic database for use in a “Bayesian Estimator for Wave Attack in Reef Environments” (BEWARE), relating incident hydrodynamics and coral reef geomorphology to coastal flooding hazards on reef-lined coasts. Building on previous work, BEWARE improves system understanding of reef hydrodynamics by examining the intrinsic reef and extrinsic forcing factors controlling runup and flooding on reef-lined coasts. The Bayesian estimator has high predictive skill for the XBNH model outputs that are flooding indicators, and was validated for a number of available field cases. It was found that, in order to accurately predict flooding hazards, water depth over the reef flat, incident wave conditions, and reef flat width are the most essential factors, whereas other factors such as beach slope and bed friction due to the presence or absence of corals are less important. BEWARE is a potentially powerful tool for use in early warning systems or risk assessment studies, and can be used to make projections about how wave-induced flooding on coral reef-lined coasts may change due to climate change.
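A minimal sketch of the lookup idea behind such an estimator: a pre-computed synthetic database relating reef and forcing parameters to simulated runup is queried to return a best estimate and spread for a new condition. The database below is random stand-in data rather than XBNH output, and the kernel-weighted average is a simplification of the Bayesian network used in BEWARE.

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical synthetic database: offshore wave height (m), water level over the
# reef flat (m), reef-flat width (m) -> simulated runup (m). Stand-in for XBNH output.
n = 5000
hs = rng.uniform(1.0, 6.0, n)
level = rng.uniform(0.0, 2.0, n)
width = rng.uniform(50.0, 500.0, n)
runup = 0.3 * hs + 0.8 * level + 60.0 / np.sqrt(width) + rng.normal(0, 0.1, n)

def estimate_runup(hs_q, level_q, width_q, bandwidth=(0.3, 0.1, 25.0)):
    """Kernel-weighted estimate of runup (mean and spread) for a query condition,
    a crude stand-in for the probabilistic lookup over the synthetic database."""
    w = np.exp(-0.5 * (((hs - hs_q) / bandwidth[0]) ** 2
                       + ((level - level_q) / bandwidth[1]) ** 2
                       + ((width - width_q) / bandwidth[2]) ** 2))
    mean = np.average(runup, weights=w)
    spread = np.sqrt(np.average((runup - mean) ** 2, weights=w))
    return mean, spread

mean, spread = estimate_runup(4.0, 1.2, 150.0)
print(f"estimated runup: {mean:.2f} m +/- {spread:.2f} m")
```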
Water, ice and mud: Lahars and lahar hazards at ice- and snow-clad volcanoes
Waythomas, Christopher F.
2014-01-01
Large-volume lahars are significant hazards at ice and snow covered volcanoes. Hot eruptive products produced during explosive eruptions can generate a substantial volume of melt water that quickly evolves into highly mobile flows of ice, sediment and water. At present it is difficult to predict the size of lahars that can form at ice and snow covered volcanoes due to their complex flow character and behaviour. However, advances in experiments and numerical approaches are producing new conceptual models and new methods for hazard assessment. Eruption triggered lahars that are ice-dominated leave behind thin, almost unrecognizable sedimentary deposits, making them likely to be under-represented in the geological record.
Vulnerability Assessment Using LIDAR Data in Silang-Sta Rosa Subwatershed, Philippines
NASA Astrophysics Data System (ADS)
Bragais, M. A.; Magcale-Macandog, D. B.; Arizapa, J. L.; Manalo, K. M.
2016-10-01
Silang-Sta. Rosa Subwatershed is experiencing rapid urbanization. Its downstream area is already urbanized and development is moving quickly upstream. With the rapid conversion of pervious to impervious areas and the increasing frequency of intense rainfall events, the downstream part of the watershed is at risk of flood hazard. The widely used freeware HEC-RAS (Hydrologic Engineering Center-River Analysis System) model was used to implement a 2D unsteady flow analysis and develop a flood hazard map. The LiDAR-derived digital elevation model (DEM) with 1 m resolution provided the detailed terrain that is vital for producing a reliable flood extent map that can be used in an early warning system. With the detailed information from the simulation, such as the areas to be flooded and the predicted depth and duration, specific flood forecasting and mitigation plans can now be provided even at the community level. The methodology of using 2D unsteady flow modelling and a high-resolution DEM in a watershed can be replicated in other neighbouring watersheds, especially those that are not yet urbanized, so that their development can be guided to be flood-hazard resilient. LGUs all over the country will benefit from having a high-resolution flood hazard map.
Automatic Hazard Detection for Landers
NASA Technical Reports Server (NTRS)
Huertas, Andres; Cheng, Yang; Matthies, Larry H.
2008-01-01
Unmanned planetary landers to date have landed 'blind'; that is, without the benefit of onboard landing hazard detection and avoidance systems. This constrains landing site selection to very benign terrain, which in turn constrains the scientific agenda of missions. State-of-the-art Entry, Descent, and Landing (EDL) technology can land a spacecraft on Mars somewhere within a 20-100 km landing ellipse. Landing ellipses are very likely to contain hazards such as craters, discontinuities, steep slopes, and large rocks that can cause mission-fatal damage. We briefly review sensor options for landing hazard detection and identify a perception approach based on stereo vision and shadow analysis that addresses the broadest set of missions. Our approach fuses stereo vision and monocular shadow-based rock detection to maximize spacecraft safety. We summarize performance models for slope estimation and rock detection within this approach and validate those models experimentally. Instantiating our model of rock detection reliability for Mars predicts that this approach can reduce the probability of a failed landing by at least a factor of 4 in any given terrain. We also describe a rock detector/mapper applied to large, high-resolution images from the Mars Reconnaissance Orbiter (MRO) for landing site characterization and selection for Mars missions.
NASA Astrophysics Data System (ADS)
Avital, Matan; Kamai, Ronnie; Davis, Michael; Dor, Ory
2018-02-01
We present a full probabilistic seismic hazard analysis (PSHA) sensitivity analysis for two sites in southern Israel - one in the near field of a major fault system and one farther away. The PSHA is conducted for alternative source representations, using alternative model parameters for the main seismic sources, such as slip rate and Mmax, among others. The analysis also considers the effect of the ground motion prediction equation (GMPE) on the hazard results. In this way, the two types of epistemic uncertainty - modelling uncertainty and parametric uncertainty - are treated and addressed. We quantify the uncertainty propagation by testing its influence on the final calculated hazard, such that the controlling knowledge gaps are identified and can be treated in future studies. We find that current practice in Israel, as represented by the current version of the building code, grossly underestimates the hazard, by approximately 40% at short return periods (e.g. 10% in 50 years) and by as much as 150% at long return periods (e.g. an annual exceedance probability of 10⁻⁵). The analysis shows that this underestimation is most probably due to a combination of factors, including the source definitions as well as the GMPE used for the analysis.
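A minimal sketch of the classical PSHA calculation that such a sensitivity analysis perturbs: the annual exceedance rate at a site combines the source activity rate, the magnitude distribution, and the GMPE's exceedance probability. The single source, toy GMPE and parameter values below are illustrative only, not the study's models.

```python
import numpy as np
from scipy.stats import norm

# Toy single-source PSHA: annual rate of exceeding a PGA level, integrating a
# truncated Gutenberg-Richter magnitude distribution against a stand-in GMPE.
mags = np.linspace(5.0, 7.5, 26)
rate_m5 = 0.05                        # assumed annual rate of M >= 5 on the source
b = 1.0
pdf_m = np.log(10) * b * 10 ** (-b * (mags - 5.0))
pdf_m /= np.trapz(pdf_m, mags)        # normalize over the truncated magnitude range
dist_km = 20.0                        # assumed site-to-source distance

def gmpe_ln_pga(m, r_km):
    """Very simple stand-in GMPE: median ln PGA (g) as a function of magnitude and distance."""
    return -4.0 + 1.0 * m - 1.3 * np.log(r_km + 10.0)

sigma_ln = 0.6                        # assumed aleatory variability of the GMPE
for x in (0.05, 0.1, 0.2, 0.4):
    p_exceed = norm.sf((np.log(x) - gmpe_ln_pga(mags, dist_km)) / sigma_ln)
    annual_rate = rate_m5 * np.trapz(p_exceed * pdf_m, mags)
    print(f"PGA > {x:.2f} g: {annual_rate:.2e} /yr  (return period ~{1/annual_rate:,.0f} yr)")
```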
Connors, Kristin A; Voutchkova-Kostal, Adelina M; Kostal, Jakub; Anastas, Paul; Zimmerman, Julie B; Brooks, Bryan W
2014-08-01
Basic toxicological information is lacking for the majority of industrial chemicals. In addition to increasing empirical toxicity data through additional testing, prospective computational approaches to drug development aim to serve as a rational basis for the design of chemicals with reduced toxicity. Recent work has resulted in the derivation of a "rule of 2," wherein chemicals with an octanol-water partition coefficient (log P) less than 2 and a difference between the lowest unoccupied molecular orbital and the highest occupied molecular orbital (ΔE) greater than 9 (log P<2 and ΔE >9 eV) are predicted to be 4 to 5 times less likely to elicit acute or chronic toxicity to model aquatic organisms. The present study examines potential reduction of aquatic toxicity hazards from industrial chemicals if these 2 molecular design guidelines were employed. Probabilistic hazard assessment approaches were used to model the likelihood of encountering industrial chemicals exceeding toxicological categories of concern both with and without the rule of 2. Modeling predicted that utilization of these molecular design guidelines for log P and ΔE would appreciably decrease the number of chemicals that would be designated to be of "high" and "very high" concern for acute and chronic toxicity to standard model aquatic organisms and end points as defined by the US Environmental Protection Agency. For example, 14.5% of chemicals were categorized as having high and very high acute toxicity to the fathead minnow model, whereas only 3.3% of chemicals conforming to the design guidelines were predicted to be in these categories. Considerations of specific chemical classes (e.g., aldehydes), chemical attributes (e.g., ionization), and adverse outcome pathways in representative species (e.g., receptor-mediated responses) could be used to derive future property guidelines for broader classes of contaminants. © 2014 SETAC.
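The "rule of 2" reduces to a simple two-property screen, as in the sketch below; the candidate names and property values are hypothetical.

```python
def passes_rule_of_2(log_p, delta_e_ev):
    """'Rule of 2' screen: log P < 2 and a HOMO-LUMO gap (delta E) > 9 eV flag a
    chemical as less likely to elicit acute or chronic aquatic toxicity."""
    return log_p < 2.0 and delta_e_ev > 9.0

# Hypothetical property values for a handful of candidate structures
candidates = {"A": (1.2, 10.1), "B": (3.5, 9.8), "C": (0.4, 7.9)}
for name, (log_p, gap) in candidates.items():
    verdict = "lower-hazard design space" if passes_rule_of_2(log_p, gap) else "flag for review"
    print(name, "->", verdict)
```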
Arrhythmic hazard map for a 3D whole-ventricles model under multiple ion channel block.
Okada, Jun-Ichi; Yoshinaga, Takashi; Kurokawa, Junko; Washio, Takumi; Furukawa, Tetsushi; Sawada, Kohei; Sugiura, Seiryo; Hisada, Toshiaki
2018-05-10
To date, proposed in silico models for preclinical cardiac safety testing are limited in their predictability and usability. We previously reported a multi-scale heart simulation that accurately predicts arrhythmogenic risk for benchmark drugs. We extend this approach and report the first comprehensive hazard map of drug-induced arrhythmia based on the exhaustive in silico electrocardiogram (ECG) database of drug effects, developed using a petaflop computer. A total of 9075 electrocardiograms constitute the five-dimensional hazard map, with coordinates representing the extent of the block of each of the five ionic currents (rapid delayed rectifier potassium current (IKr), fast (INa) and late (INa,L) components of the sodium current, L-type calcium current (ICa,L) and slow delayed rectifier current (IKs)), involved in arrhythmogenesis. Results of the evaluation of arrhythmogenic risk based on this hazard map agreed well with the risk assessments reported in three references. ECG database also suggested that the interval between the J-point and the T-wave peak is a superior index of arrhythmogenicity compared to other ECG biomarkers including the QT interval. Because concentration-dependent effects on electrocardiograms of any drug can be traced on this map based on in vitro current assay data, its arrhythmogenic risk can be evaluated without performing costly and potentially risky human electrophysiological assays. Hence, the map serves as a novel tool for use in pharmaceutical research and development. This article is protected by copyright. All rights reserved.
Multivariate Models for Prediction of Skin Sensitization Hazard in Humans
One of ICCVAM’s highest priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary for a substance to elicit a skin sensitization reaction suggests that no single alternative me...
Multivariate Models for Prediction of Human Skin Sensitization Hazard
One of ICCVAM’s top priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary for a substance to elicit a skin sensitization reaction suggests that no single alternative method...
Hazards posed by distal ash transport and sedimentation from extreme volcanic eruptions
NASA Astrophysics Data System (ADS)
Sahagian, D. L.; Proussevitch, A. A.; White, C. M.; Klewicki, J.
2016-12-01
Volcanic ash injected into the upper troposphere and lower stratosphere by extreme, explosive eruptions poses a significant hazard to aviation and human security. Such eruptions have occurred in the recent geologic past and are expected to occur again, now that modern society and its infrastructure are far more vulnerable than ever before. Atmospheric transport, dispersion, and sedimentation of ash particles are controlled by fundamentally different processes than those controlling other particles normally transported in the atmosphere, owing to their complex internal and external morphology. It is thus necessary to elucidate the fundamental processes of particle-fluid interaction in the upper troposphere and lower stratosphere, where most air traffic resides, and thereby enhance the capability of volcanic ash transport models to predict ash concentrations in the distal regions that pose aviation and other hazards. Current Volcanic Ash Transport and Dispersion (VATD) models use simplistic Stokes settling velocities for larger ash particles and treat smaller ash particles (which are a large part of the hazard) merely as passive tracers. Incorporating the dynamics of fine ash particle-atmosphere interactions into existing VATD models provides the foundation for a much more accurate assessment framework for the hazard posed by specific future extreme eruptions, and could thus dramatically reduce both the risk to air traffic and the cost of airport and flight closures, in addition to hazards to human health, water quality, agriculture, and infrastructure, as well as ice cap albedo and short-term climate impacts.
Sickness absence, moral hazard, and the business cycle.
Pichler, Stefan
2015-06-01
The procyclical nature of sickness absence has been documented by many scholars in the literature. So far, explanations have been based on labor force composition and reduced moral hazard caused by fear of job loss during recessions. In this paper, we propose and test a third mechanism caused by reduced moral hazard during booms and infections. We suggest that the workload is higher during economic booms and thus employees have to go to work despite being sick. In a theoretical model focusing on infectious diseases, we show that this will provoke infections of coworkers leading to overall higher sickness absence during economic upturns. Using state-level aggregated data from 112 German public health insurance funds (out of 145 in total), we find that sickness absence due to infectious diseases shows the largest procyclical pattern, as predicted by our theoretical model. Copyright © 2014 John Wiley & Sons, Ltd.
Forbang, Nketi I; Michos, Erin D; McClelland, Robyn L; Remigio-Baker, Rosemay A; Allison, Matthew A; Sandfort, Veit; Ix, Joachim H; Thomas, Isac; Rifkin, Dena E; Criqui, Michael H
2016-11-01
Abdominal aortic calcium (AAC) and coronary artery calcium (CAC) independently and similarly predict cardiovascular disease (CVD) events. The standard AAC and CAC score, the Agatston method, upweights for greater calcium density, thereby modeling higher calcium density as a CVD hazard. Computed tomography scans were used to measure AAC and CAC volume and density in a multiethnic cohort of community-dwelling individuals, and Cox proportional hazards models were used to determine their independent association with incident coronary heart disease (CHD, defined as myocardial infarction, resuscitated cardiac arrest, or CHD death), cardiovascular disease (CVD, defined as CHD plus stroke and stroke death), and all-cause mortality. In 997 participants with Agatston AAC and CAC scores >0, the mean age was 66±9 years, and 58% were men. During an average follow-up of 9 years, there were 77 CHD, 118 CVD, and 169 all-cause mortality events. In mutually adjusted models, additionally adjusted for CVD risk factors, an increase in ln(AAC volume) per standard deviation was significantly associated with increased all-cause mortality (hazard ratio=1.20; 95% confidence interval, 1.08-1.33; P<0.01) and an increase in ln(CAC volume) per standard deviation was significantly associated with CHD (hazard ratio=1.17; 95% confidence interval, 1.04-1.59; P=0.02) and CVD (hazard ratio=1.20; 95% confidence interval, 1.05-1.36; P<0.01). In contrast, both AAC and CAC density were not significantly associated with CVD events. The Agatston method of upweighting calcium scores for greater density may be inappropriate for CVD risk prediction in both the abdominal aorta and coronary arteries. © 2016 American Heart Association, Inc.
Rail-highway crossing hazard prediction : research results
DOT National Transportation Integrated Search
1979-12-01
This document presents techniques for constructing and evaluating railroad grade : crossing hazard indexes. Hazard indexes are objective formulas for comparing or ranking : crossings according to relative hazard or for calculating absolute hazard (co...
Verdin, Kristine L.; Dupree, Jean A.; Stevens, Michael R.
2013-01-01
This report presents a preliminary emergency assessment of the debris-flow hazards from drainage basins burned by the 2013 West Fork Fire Complex near South Fork in southwestern Colorado. Empirical models derived from statistical evaluation of data collected from recently burned basins throughout the intermountain western United States were used to estimate the probability of debris-flow occurrence, potential volume of debris flows, and the combined debris-flow hazard ranking along the drainage network within and just downstream from the burned area, and to estimate the same for 54 drainage basins of interest within the perimeter of the burned area. Input data for the debris-flow models included topographic variables, soil characteristics, burn severity, and rainfall totals and intensities for a (1) 2-year-recurrence, 1-hour-duration rainfall, referred to as a 2-year storm; (2) 10-year-recurrence, 1-hour-duration rainfall, referred to as a 10-year storm; and (3) 25-year-recurrence, 1-hour-duration rainfall, referred to as a 25-year storm. Estimated debris-flow probabilities at the pour points of the 54 drainage basins of interest ranged from less than 1 to 65 percent in response to the 2-year storm; from 1 to 77 percent in response to the 10-year storm; and from 1 to 83 percent in response to the 25-year storm. Twelve of the 54 drainage basins of interest have a 30-percent probability or greater of producing a debris flow in response to the 25-year storm. Estimated debris-flow volumes for all rainfalls modeled range from a low of 2,400 cubic meters to a high of greater than 100,000 cubic meters. Estimated debris-flow volumes increase with basin size and distance along the drainage network, but some smaller drainages also were predicted to produce substantial debris flows. One of the 54 drainage basins of interest had the highest combined hazard ranking, while 9 other basins had the second highest combined hazard ranking. Of these 10 basins with the 2 highest combined hazard rankings, 7 basins had predicted debris-flow volumes exceeding 100,000 cubic meters, while 3 had predicted probabilities of debris flows exceeding 60 percent. The 10 basins with high combined hazard ranking include 3 tributaries in the headwaters of Trout Creek, four tributaries to the West Fork San Juan River, Hope Creek draining toward a county road on the eastern edge of the burn, Lake Fork draining to U.S. Highway 160, and Leopard Creek on the northern edge of the burn. The probabilities and volumes for the modeled storms indicate a potential for debris-flow impacts on structures, reservoirs, roads, bridges, and culverts located within and immediately downstream from the burned area. U.S. Highway 160, on the eastern edge of the burn area, also is susceptible to impacts from debris flows.
SU-F-R-24: Identifying Prognostic Imaging Biomarkers in Early Stage Lung Cancer Using Radiomics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zeng, X; Wu, J; Cui, Y
2016-06-15
Purpose: Patients diagnosed with early stage lung cancer have favorable outcomes when treated with surgery or stereotactic radiotherapy. However, a significant proportion (∼20%) of patients will develop metastatic disease and eventually die of the disease. The purpose of this work is to identify quantitative imaging biomarkers from CT for predicting overall survival in early stage lung cancer. Methods: In this institutional review board-approved, HIPAA-compliant retrospective study, we analyzed the diagnostic CT scans of 110 patients with early stage lung cancer. Data from 70 patients were used for training/discovery purposes, while those of the remaining 40 patients were used for independent validation. We extracted 191 radiomic features, including statistical, histogram, morphological, and texture features. A Cox proportional hazards regression model, coupled with the least absolute shrinkage and selection operator (LASSO), was used to predict overall survival based on the radiomic features. Results: The optimal prognostic model included three image features from the Laws and wavelet texture features. In the discovery cohort, this model achieved a concordance index (CI) of 0.67, and it separated the low-risk from high-risk groups in predicting overall survival (hazard ratio=2.72, log-rank p=0.007). In the independent validation cohort, this radiomic signature achieved a CI of 0.62 and significantly stratified the low-risk and high-risk groups in terms of overall survival (hazard ratio=2.20, log-rank p=0.042). Conclusion: We identified CT imaging characteristics associated with overall survival in early stage lung cancer. If prospectively validated, this could potentially help identify high-risk patients who might benefit from adjuvant systemic therapy.
Koenecke, Christian; Göhring, Gudrun; de Wreede, Liesbeth C.; van Biezen, Anja; Scheid, Christof; Volin, Liisa; Maertens, Johan; Finke, Jürgen; Schaap, Nicolaas; Robin, Marie; Passweg, Jakob; Cornelissen, Jan; Beelen, Dietrich; Heuser, Michael; de Witte, Theo; Kröger, Nicolaus
2015-01-01
The aim of this study was to determine the impact of the revised 5-group International Prognostic Scoring System cytogenetic classification on outcome after allogeneic stem cell transplantation in patients with myelodysplastic syndromes or secondary acute myeloid leukemia who were reported to the European Society for Blood and Marrow Transplantation database. A total of 903 patients had sufficient cytogenetic information available at stem cell transplantation to be classified according to the 5-group classification. Poor and very poor risk according to this classification was an independent predictor of shorter relapse-free survival (hazard ratio 1.40 and 2.14), overall survival (hazard ratio 1.38 and 2.14), and significantly higher cumulative incidence of relapse (hazard ratio 1.64 and 2.76), compared to patients with very good, good or intermediate risk. When comparing the predictive performance of a series of Cox models both for relapse-free survival and for overall survival, a model with simplified 5-group cytogenetics (merging very good, good and intermediate cytogenetics) performed best. Furthermore, monosomal karyotype is an additional negative predictor for outcome within patients of the poor, but not the very poor risk group of the 5-group classification. The revised International Prognostic Scoring System cytogenetic classification allows patients with myelodysplastic syndromes to be separated into three groups with clearly different outcomes after stem cell transplantation. Poor and very poor risk cytogenetics were strong predictors of poor patient outcome. The new cytogenetic classification added value to prediction of patient outcome compared to prediction models using only traditional risk factors or the 3-group International Prognostic Scoring System cytogenetic classification. PMID:25552702
Elevated Plasma CXCL12α Is Associated with a Poorer Prognosis in Pulmonary Arterial Hypertension
Li, Lili; O’Connell, Caroline; Codd, Mary; Lawrie, Allan; Morton, Allison; Kiely, David G.; Condliffe, Robin; Elliot, Charles; McLoughlin, Paul; Gaine, Sean
2015-01-01
Rationale: Recent work in preclinical models suggests that signalling via the pro-angiogenic and pro-inflammatory cytokine, CXCL12 (SDF-1), plays an important pathogenic role in pulmonary hypertension (PH). The objective of this study was to establish whether circulating concentrations of CXCL12α were elevated in patients with PAH and related to mortality. Methods: Plasma samples were collected from patients with idiopathic pulmonary arterial hypertension (IPAH) and PAH associated with connective tissue diseases (CTD-PAH) attending two pulmonary hypertension referral centres (n = 95) and from age and gender matched healthy controls (n = 44). Patients were subsequently monitored throughout a period of five years. Results: CXCL12α concentrations were elevated in PAH groups compared to controls (P<0.05) and receiver-operating-characteristic analysis showed that plasma CXCL12α concentrations discriminated patients from healthy controls (AUC 0.80, 95% confidence interval 0.73-0.88). Kaplan Meier analysis indicated that elevated plasma CXCL12α concentration was associated with reduced survival (P<0.01). Multivariate Cox proportional hazards model showed that elevated CXCL12α independently predicted (P<0.05) earlier death in PAH with a hazard ratio (95% confidence interval) of 2.25 (1.01-5.00). In the largest subset by WHO functional class (Class 3, 65% of patients) elevated CXCL12α independently predicted (P<0.05) earlier death, hazard ratio 2.27 (1.05-4.89). Conclusions: Our data show that elevated concentrations of circulating CXCL12α in PAH predicted poorer survival. Furthermore, elevated circulating CXCL12α was an independent risk factor for death that could potentially be included in a prognostic model and guide therapy. PMID:25856504
Elevated plasma CXCL12α is associated with a poorer prognosis in pulmonary arterial hypertension.
McCullagh, Brian N; Costello, Christine M; Li, Lili; O'Connell, Caroline; Codd, Mary; Lawrie, Allan; Morton, Allison; Kiely, David G; Condliffe, Robin; Elliot, Charles; McLoughlin, Paul; Gaine, Sean
2015-01-01
Recent work in preclinical models suggests that signalling via the pro-angiogenic and pro-inflammatory cytokine, CXCL12 (SDF-1), plays an important pathogenic role in pulmonary hypertension (PH). The objective of this study was to establish whether circulating concentrations of CXCL12α were elevated in patients with PAH and related to mortality. Plasma samples were collected from patients with idiopathic pulmonary arterial hypertension (IPAH) and PAH associated with connective tissue diseases (CTD-PAH) attending two pulmonary hypertension referral centres (n = 95) and from age and gender matched healthy controls (n = 44). Patients were subsequently monitored throughout a period of five years. CXCL12α concentrations were elevated in PAH groups compared to controls (P<0.05) and receiver-operating-characteristic analysis showed that plasma CXCL12α concentrations discriminated patients from healthy controls (AUC 0.80, 95% confidence interval 0.73-0.88). Kaplan Meier analysis indicated that elevated plasma CXCL12α concentration was associated with reduced survival (P<0.01). Multivariate Cox proportional hazards model showed that elevated CXCL12α independently predicted (P<0.05) earlier death in PAH with a hazard ratio (95% confidence interval) of 2.25 (1.01-5.00). In the largest subset by WHO functional class (Class 3, 65% of patients) elevated CXCL12α independently predicted (P<0.05) earlier death, hazard ratio 2.27 (1.05-4.89). Our data show that elevated concentrations of circulating CXCL12α in PAH predicted poorer survival. Furthermore, elevated circulating CXCL12α was an independent risk factor for death that could potentially be included in a prognostic model and guide therapy.
Big Data Toolsets to Pharmacometrics: Application of Machine Learning for Time-to-Event Analysis.
Gong, Xiajing; Hu, Meng; Zhao, Liang
2018-05-01
Additional value can be potentially created by applying big data tools to address pharmacometric problems. The performances of machine learning (ML) methods and the Cox regression model were evaluated based on simulated time-to-event data synthesized under various preset scenarios, i.e., with linear vs. nonlinear and dependent vs. independent predictors in the proportional hazard function, or with high-dimensional data featured by a large number of predictor variables. Our results showed that ML-based methods outperformed the Cox model in prediction performance as assessed by concordance index and in identifying the preset influential variables for high-dimensional data. The prediction performances of ML-based methods are also less sensitive to data size and censoring rates than the Cox regression model. In conclusion, ML-based methods provide a powerful tool for time-to-event analysis, with a built-in capacity for high-dimensional data and better performance when the predictor variables assume nonlinear relationships in the hazard function. © 2018 The Authors. Clinical and Translational Science published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
J Waves for Predicting Cardiac Events in Hypertrophic Cardiomyopathy.
Tsuda, Toyonobu; Hayashi, Kenshi; Konno, Tetsuo; Sakata, Kenji; Fujita, Takashi; Hodatsu, Akihiko; Nagata, Yoji; Teramoto, Ryota; Nomura, Akihiro; Tanaka, Yoshihiro; Furusho, Hiroshi; Takamura, Masayuki; Kawashiri, Masa-Aki; Fujino, Noboru; Yamagishi, Masakazu
2017-10-01
This study sought to investigate whether the presence of J waves was associated with cardiac events in patients with hypertrophic cardiomyopathy (HCM). It has been uncertain whether the presence of J waves predicts life-threatening cardiac events in patients with HCM. This study evaluated 338 consecutive patients with HCM (207 men; age 61 ± 17 years). A J wave was defined as J-point elevation >0.1 mV in at least 2 contiguous inferior and/or lateral leads. Cardiac events were defined as sudden cardiac death, ventricular fibrillation or sustained ventricular tachycardia, or appropriate implantable cardiac defibrillator therapy. The study also investigated whether adding the J wave to a conventional risk model improved the prediction of cardiac events. J waves were seen in 46 (13.6%) patients at registration. Cardiac events occurred in 31 patients (9.2%) during a median follow-up of 4.9 years (interquartile range: 2.6 to 7.1 years). In a Cox proportional hazards model, the presence of J waves was significantly associated with cardiac events (adjusted hazard ratio: 4.01; 95% confidence interval [CI]: 1.78 to 9.05; p = 0.001). Compared with the conventional risk model, the model using J waves in addition to conventional risks better predicted cardiac events (net reclassification improvement, 0.55; 95% CI: 0.20 to 0.90; p = 0.002). The presence of J waves was significantly associated with cardiac events in HCM. Adding J waves to conventional cardiac risk factors improved prediction of cardiac events. Further confirmatory studies are needed before considering J-point elevation as a marker of risk for use in making management decisions regarding risk in patients with HCM. Copyright © 2017 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
Garcia, D.; Mah, R.T.; Johnson, K.L.; Hearne, M.G.; Marano, K.D.; Lin, K.-W.; Wald, D.J.
2012-01-01
We introduce the second version of the U.S. Geological Survey ShakeMap Atlas, which is an openly available compilation of nearly 8,000 ShakeMaps of the most significant global earthquakes between 1973 and 2011. This revision of the Atlas includes: (1) a new version of the ShakeMap software that improves data usage and uncertainty estimations; (2) an updated earthquake source catalogue that includes regional locations and finite fault models; (3) a refined strategy to select prediction and conversion equations based on a new seismotectonic regionalization scheme; and (4) vastly more macroseismic intensity and ground-motion data from regional agencies. All these changes make the new Atlas a self-consistent, calibrated ShakeMap catalogue that constitutes an invaluable resource for investigating near-source strong ground motion, as well as for seismic hazard, scenario, risk, and loss-model development. To this end, the Atlas will provide a hazard base layer for PAGER loss calibration and for the Earthquake Consequences Database within the Global Earthquake Model initiative.
Separating spatial search and efficiency rates as components of predation risk
DeCesare, Nicholas J.
2012-01-01
Predation risk is an important driver of ecosystems, and local spatial variation in risk can have population-level consequences by affecting multiple components of the predation process. I use resource selection and proportional hazard time-to-event modelling to assess the spatial drivers of two key components of risk—the search rate (i.e. aggregative response) and predation efficiency rate (i.e. functional response)—imposed by wolves (Canis lupus) in a multi-prey system. In my study area, both components of risk increased according to topographic variation, but anthropogenic features affected only the search rate. Predicted models of the cumulative hazard, or risk of a kill, underlying wolf search paths validated well with broad-scale variation in kill rates, suggesting that spatial hazard models provide a means of scaling up from local heterogeneity in predation risk to population-level dynamics in predator–prey systems. Additionally, I estimated an integrated model of relative spatial predation risk as the product of the search and efficiency rates, combining the distinct contributions of spatial heterogeneity to each component of risk. PMID:22977145
Role of beach morphology in wave overtopping hazard assessment
NASA Astrophysics Data System (ADS)
Phillips, Benjamin; Brown, Jennifer; Bidlot, Jean-Raymond; Plater, Andrew
2017-04-01
Understanding the role of beach morphology in controlling wave overtopping volume will further minimise uncertainties in flood risk assessments at coastal locations defended by engineered structures worldwide. XBeach is used to model wave overtopping volume for a 1:200 yr joint probability distribution of waves and water levels with measured, pre- and post-storm beach profiles. The simulation with measured bathymetry is repeated with and without morphological evolution enabled during the modelled storm event. This research assesses the role of morphology in controlling wave overtopping volumes for hazardous events that meet the typical design level of coastal defence structures. Results show that disabling storm-driven morphology under-represents modelled wave overtopping volumes by up to 39% under high Hs conditions, and has a greater impact on the wave overtopping rate than the variability applied within the boundary conditions due to the range of wave-water level combinations that meet the 1:200 yr joint probability criterion. Accounting for morphology in flood modelling is therefore critical for accurately predicting wave overtopping volumes and the resulting flood hazard, and for assessing economic losses.
Statistical Maps of Ground Magnetic Disturbance Derived from Global Geospace Models
NASA Astrophysics Data System (ADS)
Rigler, E. J.; Wiltberger, M. J.; Love, J. J.
2017-12-01
Electric currents in space are the principal driver of magnetic variations measured at Earth's surface. These in turn induce geoelectric fields that present a natural hazard for technological systems like high-voltage power distribution networks. Modern global geospace models can reasonably simulate large-scale geomagnetic response to solar wind variations, but they are less successful at deterministic predictions of intense localized geomagnetic activity that most impacts technological systems on the ground. Still, recent studies have shown that these models can accurately reproduce the spatial statistical distributions of geomagnetic activity, suggesting that their physics are largely correct. Since the magnetosphere is a largely externally driven system, most model-measurement discrepancies probably arise from uncertain boundary conditions. So, with realistic distributions of solar wind parameters to establish its boundary conditions, we use the Lyon-Fedder-Mobarry (LFM) geospace model to build a synthetic multivariate statistical model of gridded ground magnetic disturbance. From this, we analyze the spatial modes of geomagnetic response, regress on available measurements to fill in unsampled locations on the grid, and estimate the global probability distribution of extreme magnetic disturbance. The latter offers a prototype geomagnetic "hazard map", similar to those used to characterize better-known geophysical hazards like earthquakes and floods.
Roldan-Valadez, Ernesto; Rios, Camilo; Motola-Kuba, Daniel; Matus-Santos, Juan; Villa, Antonio R; Moreno-Jimenez, Sergio
2016-11-01
A long-standing concern has prevailed regarding the identification of predictive biomarkers for high-grade gliomas (HGGs) using MRI. However, a consensus on which imaging parameters assemble a significant survival model is still missing in the literature; we investigated the significant positive or negative contribution of several MR biomarkers to the prognosis of these tumours. A retrospective cohort of supratentorial HGGs [11 glioblastoma multiforme (GBM) and 17 anaplastic astrocytomas] included 28 patients (9 females and 19 males, with a mean age of 50.4 years, standard deviation: 16.28 years; range: 13-85 years). Oedema and viable tumour measurements were acquired using regions of interest in T1 weighted, T2 weighted, fluid-attenuated inversion recovery, apparent diffusion coefficient (ADC) and MR spectroscopy (MRS) images. We calculated Kaplan-Meier curves and obtained Cox's proportional hazards. During the follow-up period (3-98 months), 17 deaths were recorded. The median survival time was 1.73 years (range, 0.287-8.947 years). Only 3 out of 20 covariates (choline-to-N-acetyl aspartate and lipids-lactate-to-creatine ratios and age) showed significance in explaining the variability in the survival hazards model; score test: χ2(3) = 9.098, p = 0.028. MRS metabolites outperformed volumetric parameters of peritumoral oedema and viable tumour, as well as tumour region ADC measurements. Specific MRS ratios (Cho/Naa, L-L/Cr) might be considered in a regular follow-up for these tumours. Advances in knowledge: The Cho/Naa ratio is the strongest survival predictor, with a log-hazard function of 2.672 in GBM. Low levels of the lipids-lactate/Cr ratio represent up to a 41.6% reduction in the risk of death in GBM.
Kendler, Kenneth S.; Lönn, Sara Larsson; Sundquist, Jan; Sundquist, Kristina
2015-01-01
Objective The purpose of this study was to clarify the causes of the smoking-schizophrenia association. Method Using Cox proportional hazard and co-relative control models, the authors predicted future risk for a diagnosis of schizophrenia or nonaffective psychosis from the smoking status of 1,413,849 women and 233,879 men from, respectively, the Swedish birth and conscript registries. Results Smoking was assessed in women at a mean age of 27 and in men at a mean age of 18. The mean age at end of follow-up was 46 for women and 26 for men. Hazard ratios for first-onset schizophrenia were elevated both for light smoking (2.21 [95% CI=1.90–2.56] for women and 2.15 [95% CI=1.25–3.44] for men) and heavy smoking (3.45 [95% CI=2.95–4.03] for women and 3.80 [95% CI=1.19–6.60] for men). These associations did not decline when schizophrenia onsets 3–5 years after smoking assessment were censored. When age, socioeconomic status, and drug abuse were controlled for, hazard ratios declined only modestly in both samples. Women who smoked into late pregnancy had a much higher risk for schizophrenia than those who quit early. Hazard ratios predicting nonaffective psychosis in the general population, in cousins, in half siblings, and in full siblings discordant for heavy smoking were, respectively, 2.67, 2.71, 2.54, and 2.18. A model utilizing all relative pairs predicted a hazard ratio of 1.69 (95% CI=1.17–2.44) for nonaffective psychosis in the heavy-smoking member of discordant monozygotic twin pairs. Conclusions Smoking prospectively predicts risk for schizophrenia. This association does not arise from smoking onset during a schizophrenic prodrome and demonstrates a clear dose-response relationship. While little of this association is explained by epidemiological confounders, a portion arises from common familial/genetic risk factors. However, in full siblings and especially monozygotic twins discordant for smoking, risk for nonaffective psychosis is appreciably higher in the smoking member. These results can help in evaluating the plausibility of various etiological hypotheses for the smoking-schizophrenia association. PMID:26046339
Contribution of physical modelling to climate-driven landslide hazard mapping: an alpine test site
NASA Astrophysics Data System (ADS)
Vandromme, R.; Desramaut, N.; Baills, A.; Hohmann, A.; Grandjean, G.; Sedan, O.; Mallet, J. P.
2012-04-01
The aim of this work is to develop a methodology for integrating climate change scenarios, and especially their precipitation component, into quantitative hazard assessment. The effects of climate change will be different depending on both the location of the site and the type of landslide considered. Indeed, mass movements can be triggered by different factors. This paper describes a methodology to address this issue and shows an application on an alpine test site. Mechanical approaches represent a solution for quantitative landslide susceptibility and hazard modeling. However, as the quantity and quality of data are generally very heterogeneous at a regional scale, it is necessary to take into account the uncertainty in the analysis. In this perspective, a new hazard modeling method is developed and integrated in a program named ALICE. This program integrates mechanical stability analysis through GIS software, taking data uncertainty into account. The method proposes a quantitative classification of landslide hazard and offers a useful tool to gain time and efficiency in hazard mapping. However, an expert-based approach is still necessary to finalize the maps, as it is the only way to account for some influential factors in slope stability, such as the heterogeneity of geological formations or the effects of anthropogenic interventions. To go further, the alpine test site (Barcelonnette area, France) is being used to integrate climate change scenarios, and especially their precipitation component, into the ALICE program with the help of a hydrological model (GARDENIA) and the regional climate model REMO (Jacob, 2001). From a DEM, a land-cover map, geology, geotechnical data and so forth, the program classifies hazard zones depending on geotechnical properties and hydrological contexts varying in time. This communication, realized within the framework of the SafeLand project, is supported by the European Commission under the 7th Framework Programme for Research and Technological Development, Area "Environment", Activity 1.3.3.1 "Prediction of triggering and risk assessment for landslides".
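Regional mechanical approaches of this kind are typically built on a slope-stability index such as the infinite-slope factor of safety. The sketch below is a minimal, generic factor-of-safety computation, not the actual ALICE formulation; all parameter values are placeholders.

```python
# Minimal infinite-slope factor-of-safety sketch (a common basis for regional
# landslide stability indices; not the ALICE implementation).
import math

def factor_of_safety(c_eff, phi_eff_deg, gamma, gamma_w, z, m, beta_deg):
    """c_eff: effective cohesion (kPa); phi_eff_deg: friction angle (deg);
    gamma: soil unit weight (kN/m^3); gamma_w: water unit weight (kN/m^3);
    z: failure-plane depth (m); m: saturated fraction of z (0-1);
    beta_deg: slope angle (deg)."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_eff_deg)
    resisting = c_eff + (gamma - m * gamma_w) * z * math.cos(beta) ** 2 * math.tan(phi)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

# Wetter conditions (higher m) lower the factor of safety:
print(factor_of_safety(5.0, 30.0, 19.0, 9.81, 2.0, 0.2, 35.0))
print(factor_of_safety(5.0, 30.0, 19.0, 9.81, 2.0, 0.9, 35.0))
```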
Murray, Nigel P; Aedo, Socrates; Fuentealba, Cynthia; Jacob, Omar; Reyes, Eduardo; Novoa, Camilo; Orellana, Sebastian; Orellana, Nelson
2016-10-01
To establish a prediction model for early biochemical failure based on the Cancer of the Prostate Risk Assessment (CAPRA) score, the presence or absence of primary circulating prostate cells (CPCs), and the number of primary CPCs (nCPC) per 8 ml blood sample detected before surgery. A prospective single-center study of men who underwent radical prostatectomy as monotherapy for prostate cancer. Clinical-pathological findings were used to calculate the CAPRA score. Before surgery, blood was taken for CPC detection; mononuclear cells were obtained using differential gel centrifugation, and CPCs were identified using immunocytochemistry. A CPC was defined as a cell expressing prostate-specific antigen and P504S, and the presence or absence of CPCs and the number of cells detected per 8 ml blood sample were registered. Patients were followed up for up to 5 years; biochemical failure was defined as a prostate-specific antigen >0.2 ng/ml. The validity of the CAPRA score was calibrated using partial validation, and fractional polynomial Cox proportional hazards regression was used to build 3 models, which underwent decision curve analysis to determine their predictive value with respect to biochemical failure. A total of 267 men participated, with a mean age of 65.80 years, and after 5 years of follow-up the biochemical failure-free survival was 67.42%. The model using the CAPRA score showed a hazard ratio (HR) of 5.76 between low- and high-risk groups; the CPC model showed an HR of 26.84 between positive and negative groups; and the combined model showed an HR of 4.16 for the CAPRA score and 19.93 for CPCs. Using the continuous variable nCPC, there was no improvement in the predictive value of the model compared with the model using a positive-negative result of CPC detection. The combined CAPRA-nCPC model showed an improvement in predictive performance for biochemical failure using Harrell's C concordance test and a net benefit on decision curve analysis in comparison with either model used separately. Although the combined CAPRA-nCPC model improves the prediction of biochemical failure in patients undergoing radical prostatectomy for prostate cancer, this improvement is minimal. The use of the presence or absence of primary CPCs alone did not predict aggressive disease or biochemical failure. Copyright © 2016 Elsevier Inc. All rights reserved.
Lee, Saro; Park, Inhye
2013-09-30
Subsidence of ground caused by underground mines poses hazards to human life and property. This study analyzed the ground-subsidence hazard using factors that can affect ground subsidence and a decision tree approach in a geographic information system (GIS). The study area was Taebaek, Gangwon-do, Korea, where many abandoned underground coal mines exist. Spatial data, topography, geology, and various ground-engineering data for the subsidence area were collected and compiled in a database for mapping ground-subsidence hazard (GSH). The subsidence area was randomly split 50/50 for training and validation of the models. A data-mining classification technique was applied to the GSH mapping, and decision trees were constructed using the chi-squared automatic interaction detector (CHAID) and the quick, unbiased, and efficient statistical tree (QUEST) algorithms. The frequency ratio model was also applied to the GSH mapping for comparison with a probabilistic model. The resulting GSH maps were validated using area-under-the-curve (AUC) analysis with the subsidence area data that had not been used for training the model. The highest accuracy was achieved by the decision tree model using the CHAID algorithm (94.01%), compared with the QUEST algorithm (90.37%) and the frequency ratio model (86.70%). These accuracies are higher than previously reported results for decision tree models. Decision tree methods can therefore be used efficiently for GSH analysis and might be widely used for the prediction of various spatial events. Copyright © 2013. Published by Elsevier Ltd.
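The train/validate/AUC workflow described above can be sketched with standard tools. The snippet below is a hedged illustration on synthetic data: scikit-learn's CART tree is used as a stand-in because the CHAID and QUEST algorithms used in the study are not available in scikit-learn; factor and label arrays are placeholders.

```python
# Workflow sketch for GSH-style mapping: train a decision tree on half of the
# data and validate with AUC on the held-out half. CART stands in for CHAID/QUEST.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 6))                                   # hypothetical raster-derived factors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=2000) > 1).astype(int)  # subsidence label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_train, y_train)
auc = roc_auc_score(y_test, tree.predict_proba(X_test)[:, 1])
print(f"validation AUC: {auc:.3f}")
```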
Toward Building a New Seismic Hazard Model for Mainland China
NASA Astrophysics Data System (ADS)
Rong, Y.; Xu, X.; Chen, G.; Cheng, J.; Magistrale, H.; Shen, Z.
2015-12-01
At present, the only publicly available seismic hazard model for mainland China was generated by the Global Seismic Hazard Assessment Program in 1999. We are building a new seismic hazard model by integrating historical earthquake catalogs, geological faults, geodetic GPS data, and geology maps. To build the model, we construct an Mw-based homogeneous historical earthquake catalog spanning from 780 B.C. to the present, create fault models from active fault data using the methodology recommended by the Global Earthquake Model (GEM), and derive a strain rate map based on the most complete GPS measurements and a new strain derivation algorithm. We divide China and the surrounding regions into about 20 large seismic source zones based on seismotectonics. For each zone, we use the tapered Gutenberg-Richter (TGR) relationship to model the seismicity rates. We estimate the TGR a- and b-values from the historical earthquake data, and constrain the corner magnitude using the seismic moment rate derived from the strain rate. From the TGR distributions, 10,000 to 100,000 years of synthetic earthquakes are simulated. Then, we distribute small and medium earthquakes according to the locations and magnitudes of historical earthquakes. Some large earthquakes are distributed on active faults based on characteristics of the faults, including slip rate, fault length and width, and paleoseismic data; the rest are distributed to the background based on the distributions of historical earthquakes and strain rate. We evaluate available ground motion prediction equations (GMPEs) by comparison to observed ground motions. To apply appropriate GMPEs, we divide the region into active and stable tectonic regimes. The seismic hazard will be calculated using the OpenQuake software developed by GEM. To account for site amplifications, we construct a site condition map based on geology maps. The resulting new seismic hazard map can be used for seismic risk analysis and management, and business and land-use planning.
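As an illustration of the TGR relationship mentioned above, the sketch below evaluates the tapered Gutenberg-Richter cumulative rate in seismic moment space (Kagan's form), assuming the Hanks-Kanamori moment-magnitude relation. The parameter values are placeholders, not the study's estimates.

```python
# Sketch of a tapered Gutenberg-Richter (TGR) cumulative rate model in seismic
# moment space; parameter values below are illustrative only.
import numpy as np

def moment_from_mw(mw):
    """Seismic moment in N*m from moment magnitude (Hanks & Kanamori relation)."""
    return 10.0 ** (1.5 * mw + 9.1)

def tgr_cumulative_rate(mw, rate_at_mt, mt, beta, corner_mw):
    """Annual rate of events with magnitude >= mw.
    rate_at_mt: rate at threshold magnitude mt; beta: TGR index (about 2/3 of the
    GR b-value); corner_mw: corner magnitude controlling the exponential taper."""
    m, m_t, m_c = moment_from_mw(mw), moment_from_mw(mt), moment_from_mw(corner_mw)
    return rate_at_mt * (m_t / m) ** beta * np.exp((m_t - m) / m_c)

mags = np.arange(5.0, 8.6, 0.5)
rates = tgr_cumulative_rate(mags, rate_at_mt=10.0, mt=5.0, beta=0.65, corner_mw=8.0)
for mw, r in zip(mags, rates):
    print(f"Mw >= {mw:.1f}: {r:.4f} events/yr")
```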
Advancing adverse outcome pathways for integrated toxicology and regulatory applications
Recent regulatory efforts in many countries have focused on a toxicological pathway-based vision for human health assessments relying on in vitro systems and predictive models to generate the toxicological data needed to evaluate chemical hazard. A pathway-based vision is equally...
Fun with High Throughput Toxicokinetics (CalEPA webinar)
Thousands of chemicals have been profiled by high-throughput screening (HTS) programs such as ToxCast and Tox21. These chemicals are tested in part because there are limited or no data on hazard, exposure, or toxicokinetics (TK). TK models aid in predicting tissue concentrations ...
HTTK: R Package for High-Throughput Toxicokinetics
Thousands of chemicals have been profiled by high-throughput screening programs such as ToxCast and Tox21; these chemicals are tested in part because most of them have limited or no data on hazard, exposure, or toxicokinetics. Toxicokinetic models aid in predicting tissue concent...
Validation of ISS Floating Potential Measurement Unit Electron Densities and Temperatures
NASA Technical Reports Server (NTRS)
Coffey, Victoria N.; Minow, Joseph I.; Parker, Linda N.; Bui, Them; Wright, Kenneth, Jr.; Koontz, Steven L.; Schneider, T.; Vaughn, J.; Craven, P.
2007-01-01
Validation of the Floating Potential Measurement Unit (FPMU) electron density and temperature measurements is an important step in the process of evaluating International Space Station spacecraft charging issues, including vehicle arcing and hazards to crew during extravehicular activities. The highest potentials observed on Space Station are due to the combined V×B effects on a large spacecraft and the collection of ionospheric electron and ion currents by the 160 V US solar array modules. Ionospheric electron environments are needed as input to the ISS spacecraft charging models used to predict the severity and frequency of occurrence of ISS charging hazards. Validation of these charging models requires comparing their predictions with measured FPMU values. Of course, the FPMU measurements themselves must also be validated independently for use in manned flight safety work. This presentation compares electron density and temperatures derived from the FPMU Langmuir probes and Plasma Impedance Probe against independent density and temperature measurements from ultraviolet imagers, ground-based incoherent scatter radar, and ionosonde sites.
Topography and geology site effects from the intensity prediction model (ShakeMap) for Austria
NASA Astrophysics Data System (ADS)
del Puy Papí Isaba, María; Jia, Yan; Weginger, Stefan
2017-04-01
The seismicity in Austria can be categorized as moderate. Despite the fact that the hazard seems to be rather low, earthquakes can cause great damage and losses, especially in densely populated and industrialized areas. It is well known that equations which predict intensity as a function of magnitude and distance, among other parameters, are a useful tool for hazard and risk assessment. Therefore, this study aims to determine an empirical model of the ground shaking intensities (ShakeMap) of a series of earthquakes that occurred in Austria between 1000 and 2014. Furthermore, the obtained empirical model will support further interpretation of both contemporary and historical earthquakes. A total of 285 events whose epicenters were located in Austria, and a total of 22,739 reported macroseismic data points from Austria and adjoining countries, were used. These events span the period 1000-2014 and have local magnitudes greater than 3. In the first stage of the model development, the data were carefully selected; e.g., only intensities equal to or greater than III were used. In a second stage, the data were fitted to the selected empirical model. Finally, geology and topography corrections were obtained by means of the model residuals in order to derive intensity-based site amplification effects.
NASA Astrophysics Data System (ADS)
Visini, F.; Meletti, C.; D'Amico, V.; Rovida, A.; Stucchi, M.
2014-12-01
The recent release of the probabilistic seismic hazard assessment (PSHA) model for Europe by the SHARE project (Giardini et al., 2013, www.share-eu.org) raises questions about the comparison between its results for Italy and the official Italian seismic hazard model (MPS04; Stucchi et al., 2011) adopted by the building code. The goal of such a comparison is to identify the main input elements that produce the differences between the two models. It is worthwhile to remark that each PSHA is realized with the data and knowledge available at the time of its release. Therefore, even if a new model provides estimates significantly different from previous ones, that does not mean that the old models are wrong, but rather that the current knowledge has strongly changed and improved. Looking at the hazard maps with 10% probability of exceedance in 50 years (adopted as the standard input in the Italian building code), the SHARE model shows increased expected values with respect to the MPS04 model, up to 70% for PGA. However, looking in detail at all output parameters of both models, we observe a different behaviour for other spectral accelerations. In fact, for spectral periods greater than 0.3 s, the current reference PSHA for Italy proposes higher values than the SHARE model over many large areas. This observation suggests that this behaviour may not be due to a different definition of seismic sources and the relevant seismicity rates; it mainly seems to be the result of the adoption of recent ground-motion prediction equations (GMPEs) that estimate higher values for PGA and for accelerations with periods below 0.3 s, and lower values for longer periods, with respect to older GMPEs. Another important set of tests consisted of analysing separately the PSHA results obtained with the three source models adopted in SHARE (i.e., area sources, fault sources with background, and a refined smoothed seismicity model), whereas MPS04 only uses area sources. Results seem to confirm the strong impact of the new-generation GMPEs on the seismic hazard estimates. Giardini D. et al., 2013. Seismic Hazard Harmonization in Europe (SHARE): Online Data Resource, doi:10.12686/SED-00000001-SHARE. Stucchi M. et al., 2011. Seismic Hazard Assessment (2003-2009) for the Italian Building Code. Bull. Seismol. Soc. Am. 101, 1885-1911.
Giri, Veda N.; Egleston, Brian; Ruth, Karen; Uzzo, Robert G.; Chen, David Y.T.; Buyyounouski, Mark; Raysor, Susan; Hooker, Stanley; Torres, Jada Benn; Ramike, Teniel; Mastalski, Kathleen; Kim, Taylor Y.; Kittles, Rick
2008-01-01
Introduction “Race-specific” PSA needs evaluation in men at high-risk for prostate cancer (PCA) for optimizing early detection. Baseline PSA and longitudinal prediction for PCA was examined by self-reported race and genetic West African (WA) ancestry in the Prostate Cancer Risk Assessment Program, a prospective high-risk cohort. Materials and Methods Eligibility criteria are age 35–69 years, FH of PCA, African American (AA) race, or BRCA1/2 mutations. Biopsies have been performed at low PSA values (<4.0 ng/mL). WA ancestry was discerned by genotyping 100 ancestry informative markers. Cox proportional hazards models evaluated baseline PSA, self-reported race, and genetic WA ancestry. Cox models were used for 3-year predictions for PCA. Results 646 men (63% AA) were analyzed. Individual WA ancestry estimates varied widely among self-reported AA men. “Race-specific” differences in baseline PSA were not found by self-reported race or genetic WA ancestry. Among men with ≥ 1 follow-up visit (405 total, 54% AA), three-year prediction for PCA with a PSA of 1.5–4.0 ng/mL was higher in AA men with age in the model (p=0.025) compared to EA men. Hazard ratios of PSA for PCA were also higher by self-reported race (1.59 for AA vs. 1.32 for EA, p=0.04). There was a trend for increasing prediction for PCA with increasing genetic WA ancestry. Conclusions “Race-specific” PSA may need to be redefined as higher prediction for PCA at any given PSA in AA men. Large-scale studies are needed to confirm if genetic WA ancestry explains these findings to make progress in personalizing PCA early detection. PMID:19240249
A Hybrid Ground-Motion Prediction Equation for Earthquakes in Western Alberta
NASA Astrophysics Data System (ADS)
Spriggs, N.; Yenier, E.; Law, A.; Moores, A. O.
2015-12-01
Estimation of ground-motion amplitudes that may be produced by future earthquakes constitutes the foundation of seismic hazard assessment and earthquake-resistant structural design. This is typically done by using a prediction equation that quantifies amplitudes as a function of key seismological variables such as magnitude, distance and site condition. In this study, we develop a hybrid empirical prediction equation for earthquakes in western Alberta, where evaluation of seismic hazard associated with induced seismicity is of particular interest. We use peak ground motions and response spectra from recorded seismic events to model the regional source and attenuation attributes. The available empirical data are limited in the magnitude range of engineering interest (M>4). Therefore, we combine empirical data with a simulation-based model in order to obtain seismologically informed predictions for moderate-to-large magnitude events. The methodology is two-fold. First, we investigate the shape of geometrical spreading in Alberta. We supplement the seismic data with ground motions obtained from mining/quarry blasts, in order to gain insights into the regional attenuation over a wide distance range. A comparison of ground-motion amplitudes for earthquakes and mining/quarry blasts shows that both event types decay at similar rates with distance and demonstrate a significant Moho-bounce effect. In the second stage, we calibrate the source and attenuation parameters of a simulation-based prediction equation to match the available amplitude data from seismic events. We model the geometrical spreading using a trilinear function with attenuation rates obtained from the first stage, and calculate coefficients of anelastic attenuation and site amplification via regression analysis. This provides a hybrid ground-motion prediction equation that is calibrated against observed motions in western Alberta and is applicable to moderate-to-large magnitude events.
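The trilinear geometrical spreading mentioned above can be written as a piecewise log-linear decay with two hinge distances. The sketch below is generic and illustrative; the hinge distances and slopes are placeholders, not the calibrated Alberta values.

```python
# Illustrative trilinear geometrical-spreading function: log-amplitude decays with
# distance at three different rates, with hinge distances r1 and r2. A positive
# intermediate slope mimics a Moho-bounce-like flattening. Values are placeholders.
import numpy as np

def trilinear_spreading(r, r1=70.0, r2=140.0, b1=-1.3, b2=0.2, b3=-0.5):
    """Return log10 geometric attenuation at hypocentral distance r (km)."""
    r = np.asarray(r, dtype=float)
    return np.where(
        r <= r1, b1 * np.log10(r),
        np.where(
            r <= r2, b1 * np.log10(r1) + b2 * np.log10(r / r1),
            b1 * np.log10(r1) + b2 * np.log10(r2 / r1) + b3 * np.log10(r / r2),
        ),
    )

for dist in (10, 50, 100, 200, 400):
    print(dist, round(float(trilinear_spreading(dist)), 3))
```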
Using Claims Data to Predict Dependency in Activities of Daily Living as a Proxy for Frailty
Faurot, Keturah R.; Funk, Michele Jonsson; Pate, Virginia; Brookhart, M. Alan; Patrick, Amanda; Hanson, Laura C.; Castillo, Wendy Camelo; Stürmer, Til
2014-01-01
Purpose Estimating drug effectiveness and safety among older adults in population-based studies using administrative healthcare claims can be hampered by unmeasured confounding due to frailty. A claims-based algorithm that identifies patients likely to be dependent, a proxy for frailty, may improve confounding control. Our objective was to develop an algorithm to predict dependency in activities of daily living (ADL) in a sample of Medicare beneficiaries. Methods Community-dwelling respondents to the 2006 Medicare Current Beneficiary Survey, >65 years old, with Medicare Part A, B, home health, and hospice claims were included. ADL dependency was defined as needing help with bathing, eating, walking, dressing, toileting, or transferring. Potential predictors were demographics and ICD-9 diagnosis/procedure and durable medical equipment codes for frailty-associated conditions. Multivariable logistic regression was used to predict ADL dependency. Cox models estimated hazard ratios for death as a function of observed and predicted ADL dependency. Results Of 6391 respondents, 57% were female, 88% were white, and 38% were ≥80 years old. The prevalence of ADL dependency was 9.5%. Strong predictors of ADL dependency were charges for a home hospital bed (OR=5.44, 95% CI=3.28–9.03) and wheelchair (OR=3.91, 95% CI=2.78–5.51). The c-statistic of the final model was 0.845. Model-predicted ADL dependency of 20% or greater was associated with a hazard ratio for death of 3.19 (95% CI: 2.78, 3.68). Conclusions An algorithm for predicting ADL dependency using healthcare claims was developed to measure some aspects of frailty. Accounting for variation in frailty among older adults could lead to more valid conclusions about treatment use, safety, and effectiveness. PMID:25335470
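The core prediction step, a multivariable logistic model with a c-statistic, can be sketched as follows. This is an illustration on synthetic claims-like indicator variables, not the authors' algorithm; all coefficients and names are placeholders.

```python
# Sketch of the general workflow: multivariable logistic regression predicting a
# binary dependency outcome from claims-derived indicators, with the c-statistic
# computed as the area under the ROC curve. Data below are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 6391
X = rng.binomial(1, 0.1, size=(n, 8))                       # hypothetical claims indicators
logit = -2.5 + X @ np.array([1.7, 1.4, 0.8, 0.6, 0.5, 0.3, 0.2, 0.1])
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))               # ADL-dependency proxy outcome

model = LogisticRegression(max_iter=1000).fit(X, y)
pred = model.predict_proba(X)[:, 1]
print(f"c-statistic (in-sample): {roc_auc_score(y, pred):.3f}")
```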
Outcomes-Balanced Framework for Emergency Management: A Predictive Model for Preparedness
2013-09-01
Total Quality Management (TQM) was developed by W. Edwards Deming in the post-World War II reconstruction period in Japan. [Remainder of this record is front-matter residue (list of figures, acronym list, acknowledgments) and is not reproduced.]
Siberry, George K; Harris, D. Robert; Oliveira, Ricardo Hugo; Krauss, Margot R.; Hofer, Cristina B.; Tiraboschi, Adriana Aparecida; Marques, Heloisa; Succi, Regina C.; Abreu, Thalita; Negra, Marinella Della; Mofenson, Lynne M.; Hazra, Rohan
2012-01-01
Background This study evaluated a wide range of viral load (VL) thresholds to identify a cut-point that best predicts new clinical events in children on stable highly-active antiretroviral therapy (HAART). Methods Cox proportional hazards modeling was used to assess the adjusted risk of World Health Organization stage 3 or 4 clinical events (WHO events) as a function of time-varying CD4, VL, and hemoglobin values in a cohort study of Latin American children on HAART ≥ 6 months. Models were fit using different VL cut-points between 400 and 50,000 copies/mL, with model fit evaluated on the basis of the minimum Akaike Information Criterion (AIC) value, a standard model fit statistic. Results Models were based on 67 subjects with WHO events out of 550 subjects on study. The VL cutpoints of > 2600 copies/mL and > 32,000 copies/mL corresponded to the lowest AIC values and were associated with the highest hazard ratios [2.0 (p = 0.015) and 2.1 (p = 0.0058), respectively] for WHO events. Conclusions In HIV-infected Latin American children on stable HAART, two distinct VL thresholds (> 2,600 copies/mL and > 32,000 copies/mL) were identified for predicting children at significantly increased risk of HIV-related clinical illness, after accounting for CD4 level, hemoglobin level, and other significant factors. PMID:22343177
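The cut-point search described above, comparing Cox models fitted at different viral load thresholds by AIC, can be sketched as below. This is a hedged illustration on synthetic data using lifelines; the AIC is computed directly from the partial log-likelihood and the number of fitted parameters, and the candidate cut-points simply echo values mentioned in the abstract.

```python
# Sketch of threshold selection by AIC: dichotomize viral load at each candidate
# cut-point, fit a Cox model, and compare AIC = -2*logL + 2*k. Synthetic data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 550
vl = 10 ** rng.uniform(1.5, 5.0, n)          # synthetic viral loads (copies/mL)
time = rng.exponential(3, n)                 # synthetic follow-up times (years)
event = rng.binomial(1, 0.12, n)             # WHO-event indicator

best = None
for cut in (400, 1000, 2600, 10000, 32000, 50000):
    df = pd.DataFrame({"high_vl": (vl > cut).astype(int), "T": time, "E": event})
    cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
    aic = -2 * cph.log_likelihood_ + 2 * len(cph.params_)
    print(f"cut-point {cut:>6} copies/mL: AIC = {aic:.1f}")
    best = min(best, (aic, cut)) if best else (aic, cut)
print("lowest-AIC cut-point:", best[1])
```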
Shea, Cristina A; Ward, Rachel E; Welch, Sarah A; Kiely, Dan K; Goldstein, Richard; Bean, Jonathan F
2018-06-01
The aim of the study was to examine whether the chair stand component of the Short Physical Performance Battery predicts fall-related injury among older adult primary care patients. A 2-yr longitudinal cohort study of 430 Boston-area primary care patients aged ≥65 yrs screened to be at risk for mobility decline was conducted. The three components of the Short Physical Performance Battery (balance time, gait speed, and chair stand time) were measured at baseline. Participants reported incidence of fall-related injuries quarterly for 2 yrs. Complementary log-log discrete time hazard models were constructed to examine the hazard of fall-related injury across Short Physical Performance Battery scores, adjusting for age, sex, race, Digit Symbol Substitution Test score, and fall history. Participants were 68% female and 83% white, with a mean (SD) age of 76.6 (7.0). A total of 137 (32%) reported a fall-related injury during the follow-up period. Overall, inability to perform the chair stand task was a significant predictor of fall-related injury (hazard ratio = 2.11, 95% confidence interval = 1.23-3.62, P = 0.01). Total Short Physical Performance Battery score, gait component score, and balance component score were not predictive of fall-related injury. Inability to perform the repeated chair stand task was associated with increased hazard of an injurious fall for 2 yrs among a cohort of older adult primary care patients.
Interconnected ponds operation for flood hazard distribution
NASA Astrophysics Data System (ADS)
Putra, S. S.; Ridwan, B. W.
2016-05-01
The climatic anomaly, which comes with extreme rainfall, will increase the flood hazard in an area within a short period of time. The river capacity for discharging the flood is not continuous along the river stretch and is sensitive to the flood peak. This paper presents alternatives for locating flood retention ponds that are physically feasible for reducing the flood peak. The flood ponds were designed based on flood curve number criteria (TR-55, USDA) with the aim of rapidly capturing the flood peak and gradually returning the flood to the river. As a case study, a conceptual design was developed for the hydrologic conditions of the upper Ciliwung river basin with several candidate flood pond locations. A fundamental tank model reproducing the operation of interconnected ponds was elaborated to achieve the design flood discharge flowing to the downstream area. The flood hazard distribution status, as the model performance criterion, was computed for the Ciliwung river reach at the Manggarai sluice gate. The predicted hazard reduction under operation of the interconnected retention areas was benchmarked against the normal flow condition.
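The TR-55 curve number criterion referenced above rests on the standard SCS-CN runoff equation. The sketch below shows that equation in SI units; the curve numbers and rainfall depth are placeholders, not the Ciliwung design values.

```python
# Minimal SCS curve-number (TR-55) runoff sketch in SI units: potential maximum
# retention S from the curve number CN, then direct runoff Q from rainfall P.
def scs_runoff_mm(p_mm, cn, ia_ratio=0.2):
    """Direct runoff depth (mm) for rainfall p_mm and curve number cn."""
    s = 25400.0 / cn - 254.0        # potential maximum retention (mm)
    ia = ia_ratio * s               # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

for cn in (60, 75, 90):
    print(f"CN={cn}: runoff from a 120 mm storm = {scs_runoff_mm(120.0, cn):.1f} mm")
```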
Trimming the UCERF2 hazard logic tree
Porter, Keith A.; Field, Edward H.; Milner, Kevin
2012-01-01
The Uniform California Earthquake Rupture Forecast 2 (UCERF2) is a fully time‐dependent earthquake rupture forecast developed with sponsorship of the California Earthquake Authority (Working Group on California Earthquake Probabilities [WGCEP], 2007; Field et al., 2009). UCERF2 contains 480 logic‐tree branches reflecting choices among nine modeling uncertainties in the earthquake rate model shown in Figure 1. For seismic hazard analysis, it is also necessary to choose a ground‐motion‐prediction equation (GMPE) and set its parameters. Choosing among four next‐generation attenuation (NGA) relationships results in a total of 1920 hazard calculations per site. The present work is motivated by a desire to reduce the computational effort involved in a hazard analysis without understating uncertainty. We set out to assess which branching points of the UCERF2 logic tree contribute most to overall uncertainty, and which might be safely ignored (set to only one branch) without significantly biasing results or affecting some useful measure of uncertainty. The trimmed logic tree will have all of the original choices from the branching points that contribute significantly to uncertainty, but only one arbitrarily selected choice from the branching points that do not.
A microRNA-based prediction model for lymph node metastasis in hepatocellular carcinoma.
Zhang, Li; Xiang, Zuo-Lin; Zeng, Zhao-Chong; Fan, Jia; Tang, Zhao-You; Zhao, Xiao-Mei
2016-01-19
We developed an efficient microRNA (miRNA) model that could predict the risk of lymph node metastasis (LNM) in hepatocellular carcinoma (HCC). We first evaluated a training cohort of 192 HCC patients after hepatectomy and found five LNM-associated predictive factors: vascular invasion, Barcelona Clinic Liver Cancer stage, miR-145, miR-31, and miR-92a. The five statistically independent factors were used to develop a predictive model. The predictive value of the miRNA-based model was confirmed in a validation cohort of 209 consecutive HCC patients. The prediction model was scored for LNM risk from 0 to 8. The cutoff value of 4 was used to distinguish high-risk and low-risk groups. The model sensitivity and specificity were 69.6% and 80.2%, respectively, over 5 years in the validation cohort, and the area under the curve (AUC) for the miRNA-based prognostic model was 0.860. The 5-year positive and negative predictive values of the model in the validation cohort were 30.3% and 95.5%, respectively. Cox regression analysis revealed that the LNM hazard ratio of the high-risk versus low-risk groups was 11.751 (95% CI, 5.110-27.021; P < 0.001) in the validation cohort. In conclusion, the miRNA-based model is reliable and accurate for the early prediction of LNM in patients with HCC.
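To make the validation arithmetic concrete, the sketch below dichotomizes a 0-8 risk score at a cut-off of 4 (as in the abstract) and computes sensitivity, specificity and AUC. The scores and outcomes are synthetic placeholders, not the study's data.

```python
# Sketch: dichotomizing a 0-8 risk score at a cut-off of 4 and computing
# sensitivity, specificity and AUC on synthetic data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 209
lnm = rng.binomial(1, 0.2, n)                        # hypothetical LNM outcome
score = np.clip(rng.poisson(2 + 3 * lnm), 0, 8)      # hypothetical 0-8 risk score

high_risk = score >= 4
sens = np.mean(high_risk[lnm == 1])                  # true-positive rate
spec = np.mean(~high_risk[lnm == 0])                 # true-negative rate
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, AUC={roc_auc_score(lnm, score):.2f}")
```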
Karpušenkaitė, Aistė; Ruzgas, Tomas; Denafas, Gintaras
2018-05-01
The aim of the study was to create a hybrid forecasting method that could produce higher-accuracy forecasts than previously used 'pure' time series methods. These methods had already been tested on total automotive waste, hazardous automotive waste, and total medical waste generation, but demonstrated at least a 6% error rate in different cases, and efforts were made to decrease it even further. The newly developed hybrid models used a random start generation method to combine the advantages of different time series methods, which helped to increase forecast accuracy by 3%-4% in the hazardous automotive waste and total medical waste generation cases; the new model did not increase the accuracy of total automotive waste generation forecasts. The developed models' abilities to produce short- and mid-term forecasts were tested using different prediction horizons.
Tales from the Paleoclimate Underground: Lessons Learned from Reconstructing Extreme Events
NASA Astrophysics Data System (ADS)
Frappier, A. E.
2017-12-01
Tracing patterns of paleoclimate extremes over the past two millennia is becoming ever more important in the effort to understand and predict costly weather hazards and their varied societal impacts. I present three paleoclimate vignettes from the past ten years of different paleotempestology projects I have worked on closely, illustrating our collective challenges and productive pathways in reconstructing rainfall extremes: temporal, spatial, and combining information from disparate proxies. Finally, I aim to share new results from modeling multiple extremes and hazards in Yucatan, a climate change hotspot.
Staley, Dennis M.; Gartner, Joseph E.; Smoczyk, Greg M.; Reeves, Ryan R.
2013-01-01
Wildfire dramatically alters the hydrologic response of a watershed such that even modest rainstorms can produce dangerous flash floods and debris flows. We use empirical models to predict the probability and magnitude of debris flow occurrence in response to a 10-year rainstorm for the 2013 Mountain fire near Palm Springs, California. Overall, the models predict a relatively high probability (60–100 percent) of debris flow for six of the drainage basins in the burn area in response to a 10-year recurrence interval design storm. Volumetric predictions suggest that debris flows that occur may entrain a significant volume of material, with 8 of the 14 basins identified as having potential debris-flow volumes greater than 100,000 cubic meters. These results suggest there is a high likelihood of significant debris-flow hazard within and downstream of the burn area for nearby populations, infrastructure, and wildlife and water resources. Given these findings, we recommend that residents, emergency managers, and public works departments pay close attention to weather forecasts and National Weather Service–issued Debris Flow and Flash Flood Outlooks, Watches and Warnings and that residents adhere to any evacuation orders.
NASA Astrophysics Data System (ADS)
Contreras Vargas, M. T.; Escauriaza, C. R.; Westerink, J. J.
2017-12-01
In recent years, the occurrence of flash floods and landslides produced by hydrometeorological events in Andean watersheds has had devastating consequences in urban and rural areas near the mountains. Two factors have hindered hazard forecasting in the region: 1) the spatial and temporal variability of climate conditions, which reduces the lead time over which storm features can be predicted; and 2) the complexity of the basin morphology that characterizes the Andean region, which increases the velocity and the sediment transport capacity of flows that reach urbanized areas. Hydrodynamic models have become key tools to assess potential flood risks. Two-dimensional (2D) models based on the shallow-water equations are widely used to determine, with high accuracy and resolution, the evolution of flow depths and velocities during floods. However, high computational requirements and long computation times have encouraged research into more efficient methodologies for predicting flood propagation in real time. Our objective is to develop new surrogate models (i.e. metamodeling) to quasi-instantaneously evaluate flood propagation in the Andes foothills. By means of a small set of parameters, we define storms for a wide range of meteorological conditions. Using a 2D hydrodynamic model coupled in mass and momentum with the sediment concentration, we compute high-fidelity simulations of the propagation of a set of floods. The results are used as a database for interpolation/regression that efficiently approximates the flow depths and velocities at critical points during real storms. This is the first application of surrogate models to evaluate flood propagation in the Andes foothills, improving the efficiency of flood hazard prediction. The model also opens new opportunities to improve early warning systems, helping decision makers to inform citizens and enhancing the resilience of cities near mountain regions. This work has been supported by CONICYT/FONDAP grant 15110017, and by the Vice Chancellor of Research of the Pontificia Universidad Catolica de Chile, through the Research Internationalization Grant, PUC1566, funded by MINEDUC.
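A generic surrogate of this kind can be sketched as an interpolator fitted to a small database of high-fidelity runs indexed by storm parameters and then queried quasi-instantaneously. The snippet below uses scipy's RBFInterpolator as one possible choice; the parameters, values and units are placeholders, not the study's model.

```python
# Generic surrogate-model sketch: interpolate peak flow depth at a critical point
# as a function of two storm parameters, from a small database of high-fidelity runs.
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical training database: (peak rainfall intensity [mm/h], storm duration [h])
params = np.array([[20, 2], [20, 6], [60, 2], [60, 6], [100, 2], [100, 6]], float)
peak_depth = np.array([0.1, 0.3, 0.6, 1.1, 1.2, 2.0])   # simulated peak depths (m)

surrogate = RBFInterpolator(params, peak_depth)

# Quasi-instantaneous query for a new storm:
print(surrogate(np.array([[75.0, 4.0]])))
```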
Yeh, Hsin-Chih; Jan, Hau-Chern; Wu, Wen-Jeng; Li, Ching-Chia; Li, Wei-Ming; Ke, Hung-Lung; Huang, Shu-Pin; Liu, Chia-Chu; Lee, Yung-Chin; Yang, Sheau-Fang; Liang, Peir-In; Huang, Chun-Nung
2015-01-01
To investigate the impact of preoperative hydronephrosis and flank pain on the prognosis of patients with upper tract urothelial carcinoma. In total, 472 patients with upper tract urothelial carcinoma managed by radical nephroureterectomy were included from the Kaohsiung Medical University Hospital Healthcare System. Clinicopathological data were collected retrospectively for analysis. The significance of hydronephrosis, especially when combined with flank pain, and of other relevant factors for overall and cancer-specific survival was evaluated. Of the 472 patients, 292 (62%) had preoperative hydronephrosis and 121 (26%) presented with flank pain. Preoperative hydronephrosis was significantly associated with age, hematuria, flank pain, tumor location, and pathological tumor stage. Concurrent presence of hydronephrosis and flank pain was a significant predictor of non-organ-confined disease (multivariate-adjusted hazard ratio = 2.10, P = 0.025). Kaplan-Meier analysis showed significantly poorer overall and cancer-specific survival in patients with preoperative hydronephrosis (P = 0.005 and P = 0.026, respectively) and in patients with flank pain (P < 0.001 and P = 0.001, respectively) than in those without. However, only simultaneous hydronephrosis and flank pain independently predicted adverse outcome (hazard ratio = 1.98, P = 0.016 for overall survival and hazard ratio = 1.87, P = 0.036 for cancer-specific survival) in multivariate Cox proportional hazards models. In addition, concurrent presence of hydronephrosis and flank pain was also significantly predictive of worse survival in patients with high-grade or muscle-invasive disease. Notably, there was no difference in survival between patients with hydronephrosis but without flank pain and those without hydronephrosis. Concurrent preoperative presence of hydronephrosis and flank pain predicted non-organ-confined status of upper tract urothelial carcinoma. When accompanied by flank pain, hydronephrosis represented an independent predictor of worse outcome in patients with upper tract urothelial carcinoma.
Voss, Jesse S; Iqbal, Seher; Jenkins, Sarah M; Henry, Michael R; Clayton, Amy C; Jett, James R; Kipp, Benjamin R; Halling, Kevin C; Maldonado, Fabien
2014-01-01
Studies have shown that fluorescence in situ hybridization (FISH) testing increases lung cancer detection on cytology specimens in peripheral nodules. The goal of this study was to determine whether a predictive model using clinical features and routine cytology with FISH results could predict lung malignancy after a nondiagnostic bronchoscopic evaluation. Patients with an indeterminate peripheral lung nodule that had a nondiagnostic bronchoscopic evaluation were included in this study (N = 220). FISH was performed on residual bronchial brushing cytology specimens diagnosed as negative (n = 195), atypical (n = 16), or suspicious (n = 9). FISH results included hypertetrasomy (n = 30) and negative (n = 190). Primary study end points included lung cancer status along with time to diagnosis of lung cancer or date of last clinical follow-up. Hazard ratios (HRs) were calculated using Cox proportional hazards regression model analyses, and P values < .05 were considered statistically significant. The mean age of the 220 patients was 66.7 years (range, 35-91), and most (58%) were men. Most patients (79%) were current or former smokers with a mean pack year history of 43.2 years (median, 40; range, 1-200). After multivariate analysis, hypertetrasomy FISH (HR = 2.96, P < .001), pack years (HR = 1.03 per pack year up to 50, P = .001), age (HR = 1.04 per year, P = .02), atypical or suspicious cytology (HR = 2.02, P = .04), and nodule spiculation (HR = 2.36, P = .003) were independent predictors of malignancy over time and were used to create a prediction model (C-statistic = 0.78). These results suggest that this multivariate model including test results and clinical features may be useful following a nondiagnostic bronchoscopic examination. © 2013.
Probabilistic Seismic Hazard Assessment of the Chiapas State (SE Mexico)
NASA Astrophysics Data System (ADS)
Rodríguez-Lomelí, Anabel Georgina; García-Mayordomo, Julián
2015-04-01
The Chiapas State, in southeastern Mexico, is a very seismically active region due to the interaction of three tectonic plates: North America, Cocos and Caribbean. We present a probabilistic seismic hazard assessment (PSHA) specifically performed to evaluate seismic hazard in the Chiapas state. The PSHA was based on a composite seismic catalogue homogenized to Mw and used a logic tree procedure for the consideration of different seismogenic source models and ground motion prediction equations (GMPEs). The results were obtained in terms of peak ground acceleration as well as spectral accelerations. The earthquake catalogue was compiled from the International Seismological Centre and the Servicio Sismológico Nacional de México sources. Two different seismogenic source zone (SSZ) models were devised based on a revision of the tectonics of the region and the available geomorphological and geological maps. The SSZ were finally defined by the analysis of geophysical data, resulting in two main SSZ models. The Gutenberg-Richter parameters for each SSZ were calculated from the declustered and homogenized catalogue, while the maximum expected earthquake was assessed from both the catalogue and geological criteria. Several worldwide and regional GMPEs for subduction and crustal zones were reviewed, and for each SSZ model we considered four possible combinations of GMPEs. Finally, hazard was calculated in terms of PGA and SA for 500-, 1000-, and 2500-year return periods for each branch of the logic tree using the CRISIS2007 software. The final hazard maps represent the mean values obtained from the two seismogenic and four attenuation models considered in the logic tree. For the three return periods analyzed, the maps locate the most hazardous areas in the Chiapas Central Pacific Zone, the Pacific Coastal Plain and the Motagua and Polochic Fault Zone; intermediate hazard values occur in the Chiapas Batholith Zone and in the Strike-Slip Faults Province. The hazard decreases towards the northeast across the Reverse Faults Province and up to the Yucatan Platform, where the lowest values are reached. We also produced uniform hazard spectra (UHS) for the three main cities of Chiapas. Tapachula city presents the highest spectral accelerations, while Tuxtla Gutierrez and San Cristobal de las Casas show similar values. We conclude that seismic hazard in Chiapas is chiefly controlled by the subduction of the Cocos plate beneath the North America and Caribbean plates, which makes the coastal areas the most hazardous. Additionally, the Motagua and Polochic Fault Zones are also important, increasing the hazard particularly in southeastern Chiapas.
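The final step of combining logic-tree branches into mean hazard is a simple weighted average of branch exceedance rates. The sketch below illustrates that combination for 2 source models x 4 GMPEs = 8 branches; all rates, weights and PGA levels are placeholders, not the study's results.

```python
# Minimal logic-tree combination sketch: the mean hazard curve is the weighted
# average annual exceedance rate over all branches. Values are placeholders.
import numpy as np

pga = np.array([0.05, 0.1, 0.2, 0.4, 0.8])               # PGA levels (g)
rng = np.random.default_rng(5)
# Hypothetical annual exceedance rates for 8 branches at each PGA level:
branch_rates = rng.uniform(1e-4, 1e-2, size=(8, pga.size)) * (1.0 / pga)
weights = np.full(8, 1 / 8)                               # equal branch weights

mean_rate = weights @ branch_rates
prob_50yr = 1 - np.exp(-mean_rate * 50)                   # Poissonian exceedance in 50 yr
for a, p in zip(pga, prob_50yr):
    print(f"PGA {a:.2f} g: {p:.3f} probability of exceedance in 50 yr")
```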
Recent Achievements of the Collaboratory for the Study of Earthquake Predictability
NASA Astrophysics Data System (ADS)
Jordan, T. H.; Liukis, M.; Werner, M. J.; Schorlemmer, D.; Yu, J.; Maechling, P. J.; Jackson, D. D.; Rhoades, D. A.; Zechar, J. D.; Marzocchi, W.
2016-12-01
The Collaboratory for the Study of Earthquake Predictability (CSEP) supports a global program to conduct prospective earthquake forecasting experiments. CSEP testing centers are now operational in California, New Zealand, Japan, China, and Europe with 442 models under evaluation. The California testing center, started by SCEC, Sept 1, 2007, currently hosts 30-minute, 1-day, 3-month, 1-year and 5-year forecasts, both alarm-based and probabilistic, for California, the Western Pacific, and worldwide. Our tests are now based on the hypocentral locations and magnitudes of cataloged earthquakes, but we plan to test focal mechanisms, seismic hazard models, ground motion forecasts, and finite rupture forecasts as well. We have increased computational efficiency for high-resolution global experiments, such as the evaluation of the Global Earthquake Activity Rate (GEAR) model, introduced Bayesian ensemble models, and implemented support for non-Poissonian simulation-based forecasts models. We are currently developing formats and procedures to evaluate externally hosted forecasts and predictions. CSEP supports the USGS program in operational earthquake forecasting and a DHS project to register and test external forecast procedures from experts outside seismology. We found that earthquakes as small as magnitude 2.5 provide important information on subsequent earthquakes larger than magnitude 5. A retrospective experiment for the 2010-2012 Canterbury earthquake sequence showed that some physics-based and hybrid models outperform catalog-based (e.g., ETAS) models. This experiment also demonstrates the ability of the CSEP infrastructure to support retrospective forecast testing. Current CSEP development activities include adoption of the Comprehensive Earthquake Catalog (ComCat) as an authorized data source, retrospective testing of simulation-based forecasts, and support for additive ensemble methods. We describe the open-source CSEP software that is available to researchers as they develop their forecast models. We also discuss how CSEP procedures are being adapted to intensity and ground motion prediction experiments as well as hazard model testing.
In order to predict the margin between the dose needed for adverse chemical effects and actual human exposure rates, data on hazard, exposure, and toxicokinetics are needed. In vitro methods, biomonitoring, and mathematical modeling have provided initial estimates for many extant...
Interspecies Correlation Estimation (ICE) models predict supplemental toxicity data for SSDs
Species sensitivity distributions (SSD) require a large number of toxicity values for a diversity of taxa to define a hazard level protective of multiple species. For most chemicals, measured toxicity data are limited to a few standard test species that are unlikely to adequately...
Dispersion model studies for Space Shuttle environmental effects activities
NASA Technical Reports Server (NTRS)
1981-01-01
The NASA/MSFC REED computer code was developed for predicting concentrations, dosage, and deposition downwind from rocket vehicle launches. The calculation procedures and results of nine studies using the code are presented. Topics include plume expansion, hydrazine concentrations, and hazard calculations for postulated fuel spills.
Pappenberger, F; Jendritzky, G; Staiger, H; Dutra, E; Di Giuseppe, F; Richardson, D S; Cloke, H L
2015-03-01
Although over a hundred thermal indices can be used for assessing thermal health hazards, many ignore the human heat budget, physiology and clothing. The Universal Thermal Climate Index (UTCI) addresses these shortcomings by using an advanced thermo-physiological model. This paper assesses the potential of using the UTCI for forecasting thermal health hazards. Traditionally, such hazard forecasting has had two further limitations: it has been narrowly focused on a particular region or nation and has relied on the use of single 'deterministic' forecasts. Here, the UTCI is computed on a global scale, which is essential for international health-hazard warnings and disaster preparedness, and it is provided as a probabilistic forecast. It is shown that probabilistic UTCI forecasts are superior in skill to deterministic forecasts and that despite global variations, the UTCI forecast is skilful for lead times up to 10 days. The paper also demonstrates the utility of probabilistic UTCI forecasts on the example of the 2010 heat wave in Russia.
NASA Astrophysics Data System (ADS)
Rosdi, M. A. H. M.; Othman, A. N.; Zubir, M. A. M.; Latif, Z. A.; Yusoff, Z. M.
2017-10-01
Sinkholes are not a new phenomenon in this country, especially around the Klang Valley. Since 1968, an increasing number of sinkhole incidents has been reported in Kuala Lumpur and the vicinity. As a result, they pose a serious threat to human lives, assets and structures, especially in the capital city of Malaysia. Therefore, a Sinkhole Hazard Model (SHM) was generated within a GIS framework by applying the Analytical Hierarchy Process (AHP) technique in order to produce a sinkhole susceptibility hazard map for the particular area. Five main criteria, each categorized into five subclasses, were selected for this research: Lithology (LT), Groundwater Level Decline (WLD), Soil Type (ST), Land Use (LU) and Proximity to Groundwater Wells (PG). A set of relative weights was assigned to each inducing factor and computed through a pairwise comparison matrix derived from expert judgment. Lithology and Groundwater Level Decline were identified as having the highest impact on sinkhole development. Sinkhole susceptibility hazard zones were classified into five classes: very low, low, moderate, high and very high hazard. The results obtained were validated with thirty-three (33) previous sinkhole inventory records. This evaluation shows that 64% and 21% of the sinkhole events fall within the high and very high hazard zones, respectively. Based on this outcome, the AHP approach is clearly useful for predicting natural hazards such as sinkholes.
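The AHP weighting step described above derives criterion weights from the principal eigenvector of the pairwise comparison matrix and checks consistency. The sketch below illustrates that computation; the comparison matrix is invented for illustration and does not reproduce the study's expert judgments.

```python
# AHP sketch: derive criterion weights from a pairwise comparison matrix via the
# principal eigenvector and check the consistency ratio (CR < 0.1 is acceptable).
import numpy as np

criteria = ["LT", "WLD", "ST", "LU", "PG"]
A = np.array([
    [1,   2,   3,   4,   5],
    [1/2, 1,   2,   3,   4],
    [1/3, 1/2, 1,   2,   3],
    [1/4, 1/3, 1/2, 1,   2],
    [1/5, 1/4, 1/3, 1/2, 1],
], float)                                    # illustrative pairwise judgments

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

lambda_max = eigvals[k].real
n = A.shape[0]
ci = (lambda_max - n) / (n - 1)              # consistency index
cr = ci / 1.12                               # random index RI = 1.12 for n = 5
print(dict(zip(criteria, weights.round(3))), f"CR = {cr:.3f}")
```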
The prediction of the flash point for binary aqueous-organic solutions.
Liaw, Horng-Jang; Chiu, Yi-Yu
2003-07-18
A mathematical model, which may be used for predicting the flash point of aqueous-organic solutions, has been proposed and subsequently verified by experimentally-derived data. The results reveal that this model is able to precisely predict the flash point over the entire composition range of binary aqueous-organic solutions by way of utilizing the flash point data pertaining to the flammable component. The derivative of flash point with respect to composition (solution composition effect upon flash point) can be applied to process safety design/operation in order to identify as to whether the dilution of a flammable liquid solution with water is effective in reducing the fire and explosion hazard of the solution at a specified composition. Such a derivative equation was thus derived based upon the flash point prediction model referred to above and then verified by the application of experimentally-derived data.
Rodríguez-Cano, Rubén; López-Durán, Ana; Martínez-Vispo, Carmela; Martínez, Úrsula; Fernández Del Río, Elena; Becoña, Elisardo
2016-12-01
Diverse studies have found a relation between alcohol consumption and smoking relapse. Few studies have analyzed the relation of smoking relapse with pretreatment alcohol consumption and gender differences. The main purpose of this study is to analyze the influence of alcohol consumption in smoking relapse over 12 months (3-, 6-, and 12-months follow-up) and to determine possible gender differences. The sample included 374 smokers who quit smoking by participating in a psychological smoking cessation treatment. We assessed hazardous pretreatment alcohol drinking (AUDIT), cigarette consumption (FTND; number of cigarettes) and sociodemographic variables. Higher scores on hazardous pretreatment alcohol drinking predict smoking relapse at 3-, 6-, and 12-months after smoking cessation. In males, higher scores on hazardous pretreatment alcohol drinking predict relapse at 6 and at 12 months. In females, higher scores on hazardous pretreatment alcohol drinking predict tobacco relapse at 3 months. Hazardous pretreatment alcohol drinking predicts relapse at all intervals after smoking cessation (3-, 6-, and 12-months follow-up). However, the influence of hazardous pretreatment alcohol drinking on smoking relapse differs as a function of gender, as it is a short-term predictor in women (3 months) and a long-term predictor in men (6 and 12 months). Copyright © 2016 Elsevier Inc. All rights reserved.
Nishioka, Shinta; Okamoto, Takatsugu; Takayama, Masako; Urushihara, Maki; Watanabe, Misuzu; Kiriya, Yumiko; Shintani, Keiko; Nakagomi, Hiromi; Kageyama, Noriko
2017-08-01
Whether malnutrition risk correlates with recovery of swallowing function of convalescent stroke patients is unknown. This study was conducted to clarify whether malnutrition risks predict achievement of full oral intake in convalescent stroke patients undergoing enteral nutrition. We conducted a secondary analysis of 466 convalescent stroke patients, aged 65 years or over, who were undergoing enteral nutrition. Patients were extracted from the "Algorithm for Post-stroke Patients to improve oral intake Level; APPLE" study database compiled at the Kaifukuki (convalescent) rehabilitation wards. Malnutrition risk was determined by the Geriatric Nutritional Risk Index as follows: severe (<82), moderate (82 to <92), mild (92 to <98), and no malnutrition risks (≥98). Swallowing function was assessed by Fujishima's swallowing grade (FSG) on admission and discharge. The primary outcome was achievement of full oral intake, indicated by FSG ≥ 7. Binary logistic regression analysis was performed to identify predictive factors, including malnutrition risk, for achieving full oral intake. Estimated hazard risk was computed by Cox's hazard model. Of the 466 individuals, 264 were ultimately included in this study. Participants with severe malnutrition risk showed a significantly lower proportion of achievement of full oral intake than lower severity groups (P = 0.001). After adjusting for potential confounders, binary logistic regression analysis showed that patients with severe malnutrition risk were less likely to achieve full oral intake (adjusted odds ratio: 0.232, 95% confidence interval [95% CI]: 0.047-1.141). Cox's proportional hazard model revealed that severe malnutrition risk was an independent predictor of full oral intake (adjusted hazard ratio: 0.374, 95% CI: 0.166-0.842). Compared to patients who did not achieve full oral intake, patients who achieved full oral intake had significantly higher energy intake, but there was no difference in protein intake and weight change. Severe malnutrition risk independently predicts the achievement of full oral intake in convalescent stroke patients undergoing enteral nutrition. Copyright © 2016 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
Rainfall and Extratropical Transition of Tropical Cyclones: Simulation, Prediction, and Projection
NASA Astrophysics Data System (ADS)
Liu, Maofeng
Rainfall and associated flood hazards are among the major threats that tropical cyclones (TCs) pose to coastal and inland regions. The interaction of TCs with extratropical systems can lead to enhanced precipitation over enlarged areas through extratropical transition (ET). To achieve a comprehensive understanding of rainfall and ET associated with TCs, this thesis conducts weather-scale analyses focusing on individual storms and climate-scale analyses focusing on seasonal predictability and changing properties of climatology under global warming. The temporal and spatial rainfall evolution of individual storms, including Hurricane Irene (2011), Hurricane Hanna (2008), and Hurricane Sandy (2012), is explored using the Weather Research and Forecasting (WRF) model and a variety of hydrometeorological datasets. ET and orographic forcing are the two key mechanisms shaping the rainfall distribution of Irene over the regions that experienced the most severe flooding. The change of TC rainfall under global warming is explored with the Forecast-oriented Low Ocean Resolution (FLOR) climate model under the representative concentration pathway (RCP) 4.5 scenario. Despite decreased TC frequency, FLOR projects increased landfalling TC rainfall over most regions of the eastern United States, highlighting the risk of increased flood hazards. Increased storm rain rates are an important contributor to the increase in landfalling TC rainfall. A higher-atmospheric-resolution version of FLOR (HiFLOR) projects increased TC rainfall at global scales. The increase of TC intensity and environmental water vapor content scaled by the Clausius-Clapeyron relation are two key factors that explain the projected increase of TC rainfall. Analyses of the simulation, prediction, and projection of ET activity with FLOR are conducted in the North Atlantic. The FLOR model exhibits good skill in simulating many aspects of present-day ET climatology. The 21st-century projection under the RCP4.5 scenario demonstrates the dominant role of ET events in the projected increase of TC frequency in the eastern North Atlantic, highlighting increased exposure of the northeastern United States and Western Europe to storm hazards. Retrospective seasonal forecast experiments demonstrate the skill of HiFLOR in predicting basinwide and regional ET frequency. This skill, however, is not seen in the seasonal prediction of ET rate. More work on the signal-to-noise properties of ET rate is needed.
Hazard assessment through hybrid in vitro / in silico approach: The case of zearalenone.
Ehrlich, Veronika A; Dellafiora, Luca; Mollergues, Julie; Dall'Asta, Chiara; Serrant, Patrick; Marin-Kuan, Maricel; Lo Piparo, Elena; Schilter, Benoit; Cozzini, Pietro
2015-01-01
Within the framework of reduction, refinement and replacement of animal experiments, new approaches for identification and characterization of chemical hazards have been developed. Grouping and read across has been promoted as a most promising alternative approach. It uses existing toxicological information on a group of chemicals to make predictions on the toxicity of uncharacterized ones. In the present work, the feasibility of applying in vitro and in silico techniques to group chemicals for read across was studied using the food mycotoxin zearalenone (ZEN) and metabolites as a case study. ZEN and its reduced metabolites are known to act through activation of the estrogen receptor α (ERα). The ranking of their estrogenic potencies appeared highly conserved across test systems including binding, in vitro and in vivo assays. This data suggests that activation of ERα may play a role in the molecular initiating event (MIE) and be predictive of adverse effects and provides the rationale to model receptor-binding for hazard identification. The investigation of receptor-ligand interactions through docking simulation proved to accurately rank estrogenic potencies of ZEN and reduced metabolites, showing the suitability of the model to address estrogenic potency for this group of compounds. Therefore, the model was further applied to biologically uncharacterized, commercially unavailable, oxidized ZEN metabolites (6α-, 6β-, 8α-, 8β-, 13- and 15-OH-ZEN). Except for 15-OH-ZEN, the data indicate that in general, the oxidized metabolites would be considered a lower estrogenic concern than ZEN and reduced metabolites.
NASA Astrophysics Data System (ADS)
Garcia-Fernandez, Mariano; Assatourians, Karen; Jimenez, Maria-Jose
2018-01-01
Extreme natural hazard events have the potential to cause significant disruption to critical infrastructure (CI) networks. Among them, earthquakes represent a major threat as sudden-onset events with limited, if any, capability of forecast, and high damage potential. In recent years, the increased exposure of interdependent systems has heightened concern, motivating the need for a framework for the management of these increased hazards. The seismic performance level and resilience of existing non-nuclear CIs can be analyzed by identifying the ground motion input values leading to failure of selected key elements. Main interest focuses on the ground motions exceeding the original design values, which should correspond to low probability occurrence. A seismic hazard methodology has been specifically developed to consider low-probability ground motions affecting elongated CI networks. The approach is based on Monte Carlo simulation, which allows for building long-duration synthetic earthquake catalogs to derive low-probability amplitudes. This approach does not affect the mean hazard values and allows obtaining a representation of maximum amplitudes that follow a general extreme-value distribution. This facilitates the analysis of the occurrence of extremes, i.e., very low probability of exceedance from unlikely combinations, for the development of, e.g., stress tests, among other applications. Following this methodology, extreme ground-motion scenarios have been developed for selected combinations of modeling inputs including seismic activity models (source model and magnitude-recurrence relationship), ground motion prediction equations (GMPE), hazard levels, and fractiles of extreme ground motion. The different results provide an overview of the effects of different hazard modeling inputs on the generated extreme motion hazard scenarios. This approach to seismic hazard is at the core of the risk analysis procedure developed and applied to European CI transport networks within the framework of the European-funded INFRARISK project. Such an operational seismic hazard framework can be used to provide insight in a timely manner to make informed risk management or regulating further decisions on the required level of detail or on the adoption of measures, the cost of which can be balanced against the benefits of the measures in question.
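A toy version of the Monte Carlo approach described above is sketched below: a long synthetic catalog is drawn from a Poisson occurrence model with a truncated Gutenberg-Richter magnitude distribution, a simple illustrative GMPE with lognormal scatter converts each event to a site PGA, and annual maxima are then examined for their extreme-value behaviour. The recurrence and ground-motion parameters are invented placeholders, not the project's models.

```python
import numpy as np

rng = np.random.default_rng(42)

YEARS = 100_000          # length of the synthetic catalog
RATE = 0.5               # assumed annual rate of M >= 5 events affecting the site
MMIN, MMAX, BVAL = 5.0, 7.5, 1.0
DIST_KM = 30.0           # assumed fixed source-to-site distance
SIGMA_LN = 0.6           # assumed aleatory variability (natural log units)

def sample_magnitudes(n):
    """Truncated Gutenberg-Richter sampling by inverse transform."""
    beta = BVAL * np.log(10.0)
    u = rng.random(n)
    c = 1.0 - np.exp(-beta * (MMAX - MMIN))
    return MMIN - np.log(1.0 - u * c) / beta

def gmpe_pga(mag, dist_km):
    """Very simple illustrative GMPE: ln PGA(g) = a + b*M - c*ln(R + 10)."""
    ln_med = -4.0 + 1.0 * mag - 1.3 * np.log(dist_km + 10.0)
    return np.exp(ln_med + SIGMA_LN * rng.standard_normal(mag.shape))

# Build the catalog year by year and keep the annual maximum PGA.
n_events = rng.poisson(RATE, YEARS)
annual_max = np.zeros(YEARS)
for year, n in enumerate(n_events):
    if n:
        annual_max[year] = gmpe_pga(sample_magnitudes(n), DIST_KM).max()

# Low-probability amplitudes, e.g. the PGA exceeded in 1% and 0.1% of years.
for p in (0.01, 0.001):
    print(f"PGA exceeded with annual probability {p}: "
          f"{np.quantile(annual_max, 1.0 - p):.3f} g")
```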
Integrative Chemical-Biological Read-Across Approach for Chemical Hazard Classification
Low, Yen; Sedykh, Alexander; Fourches, Denis; Golbraikh, Alexander; Whelan, Maurice; Rusyn, Ivan; Tropsha, Alexander
2013-01-01
Traditional read-across approaches typically rely on the chemical similarity principle to predict chemical toxicity; however, the accuracy of such predictions is often inadequate due to the underlying complex mechanisms of toxicity. Here we report on the development of a hazard classification and visualization method that draws upon both chemical structural similarity and comparisons of biological responses to chemicals measured in multiple short-term assays (”biological” similarity). The Chemical-Biological Read-Across (CBRA) approach infers each compound's toxicity from those of both chemical and biological analogs whose similarities are determined by the Tanimoto coefficient. Classification accuracy of CBRA was compared to that of classical RA and other methods using chemical descriptors alone, or in combination with biological data. Different types of adverse effects (hepatotoxicity, hepatocarcinogenicity, mutagenicity, and acute lethality) were classified using several biological data types (gene expression profiling and cytotoxicity screening). CBRA-based hazard classification exhibited consistently high external classification accuracy and applicability to diverse chemicals. Transparency of the CBRA approach is aided by the use of radial plots that show the relative contribution of analogous chemical and biological neighbors. Identification of both chemical and biological features that give rise to the high accuracy of CBRA-based toxicity prediction facilitates mechanistic interpretation of the models. PMID:23848138
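The nearest-neighbour logic behind a read-across of this kind can be sketched as follows: Tanimoto similarities are computed separately for chemical fingerprints and for binarized biological response profiles, and a query compound's hazard is predicted from its most similar neighbours in both spaces. The tiny bit-vector data set below is invented for illustration and is unrelated to the datasets used in the paper.

```python
import numpy as np

def tanimoto(a, b):
    """Tanimoto coefficient between two binary vectors."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    union = np.sum(a | b)
    return np.sum(a & b) / union if union else 0.0

# Toy training data: chemical fingerprints, biological response profiles, labels.
chem = np.array([[1,1,0,1,0,0], [1,0,0,1,0,1], [0,0,1,0,1,1], [0,1,1,0,1,0]])
bio  = np.array([[1,0,1,0],     [1,1,1,0],     [0,0,0,1],     [0,1,0,1]])
toxic = np.array([1, 1, 0, 0])

def cbra_predict(q_chem, q_bio, k=3, w_chem=0.5):
    """Weighted k-nearest-neighbour read-across over chemical + biological similarity."""
    sims = np.array([w_chem * tanimoto(q_chem, c) + (1 - w_chem) * tanimoto(q_bio, b)
                     for c, b in zip(chem, bio)])
    nn = np.argsort(sims)[::-1][:k]
    weights = sims[nn]
    if weights.sum() == 0:
        return float(toxic.mean())        # no similar neighbours: fall back to base rate
    return float(np.dot(weights, toxic[nn]) / weights.sum())

# Query compound (hypothetical): probability-like read-across score in [0, 1].
print(cbra_predict([1, 1, 0, 1, 0, 1], [1, 0, 1, 0]))
```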
Li, Jiejie; Wang, Yilong; Lin, Jinxi; Wang, David; Wang, Anxin; Zhao, Xingquan; Liu, Liping; Wang, Chunxue; Wang, Yongjun
2015-07-01
Elevated soluble CD40 ligand (sCD40L) was shown to be related to cardiovascular events, but the role of sCD40L in predicting recurrent stroke remains unclear. Baseline sCD40L levels were measured in 3044 consecutive patients with acute minor stroke and transient ischemic attack, who had previously been enrolled in the Clopidogrel in High-Risk Patients With Acute Nondisabling Cerebrovascular Events (CHANCE) trial. Cox proportional-hazards model was used to assess the association of sCD40L with recurrent stroke. Patients in the top tertile of sCD40L levels had increased risk of recurrent stroke comparing with those in the bottom tertile, after adjusted for conventional confounding factors (hazard ratio, 1.49; 95% confidence interval, 1.11-2.00; P=0.008). The patients with elevated levels of both sCD40L and high-sensitive C-reactive protein also had increased risk of recurrent stroke (hazard ratio, 1.81; 95% confidence interval, 1.23-2.68; P=0.003). Elevated sCD40L levels independently predict recurrent stroke in patients with minor stroke and transient ischemic attack. URL: http://www.clinicaltrials.gov. Unique identifier: NCT00979589. © 2015 American Heart Association, Inc.
NEL, ANDRE; XIA, TIAN; MENG, HUAN; WANG, XIANG; LIN, SIJIE; JI, ZHAOXIA; ZHANG, HAIYUAN
2014-01-01
Conspectus The production of engineered nanomaterials (ENMs) is a scientific breakthrough in material design and the development of new consumer products. While the successful implementation of nanotechnology is important for the growth of the global economy, we also need to consider the possible environmental health and safety (EHS) impact as a result of the novel physicochemical properties that could generate hazardous biological outcomes. In order to assess ENM hazard, reliable and reproducible screening approaches are needed to test the basic materials as well as nano-enabled products. A platform is required to investigate the potentially endless number of bio-physicochemical interactions at the nano/bio interface, in response to which we have developed a predictive toxicological approach. We define a predictive toxicological approach as the use of mechanisms-based high throughput screening in vitro to make predictions about the physicochemical properties of ENMs that may lead to the generation of pathology or disease outcomes in vivo. The in vivo results are used to validate and improve the in vitro high throughput screening (HTS) and to establish structure-activity relationships (SARs) that allow hazard ranking and modeling by an appropriate combination of in vitro and in vivo testing. This notion is in agreement with the landmark 2007 report from the US National Academy of Sciences, “Toxicity Testing in the 21st Century: A Vision and a Strategy” (http://www.nap.edu/catalog.php?record_id=11970), which advocates increased efficiency of toxicity testing by transitioning from qualitative, descriptive animal testing to quantitative, mechanistic and pathway-based toxicity testing in human cells or cell lines using high throughput approaches. Accordingly, we have implemented HTS approaches to screen compositional and combinatorial ENM libraries to develop hazard ranking and structure-activity relationships that can be used for predicting in vivo injury outcomes. This predictive approach allows the bulk of the screening analysis and high volume data generation to be carried out in vitro, following which limited, but critical, validation studies are carried out in animals or whole organisms. Risk reduction in the exposed human or environmental populations can then focus on limiting or avoiding exposures that trigger these toxicological responses as well as implementing safer design of potentially hazardous ENMs. In this communication, we review the tools required for establishing predictive toxicology paradigms to assess inhalation and environmental toxicological scenarios through the use of compositional and combinatorial ENM libraries, mechanism-based HTS assays, hazard ranking and development of nano-SARs. We will discuss the major injury paradigms that have emerged based on specific ENM properties, as well as describing the safer design of ZnO nanoparticles based on characterization of dissolution chemistry as a major predictor of toxicity. PMID:22676423
Ramirez, Jason J; Olin, Cecilia C; Lindgren, Kristen P
2017-09-01
Two variations of the Implicit Association Test (IAT), the Drinking Identity IAT and the Alcohol Identity IAT, assess implicit associations held in memory between one's identity and alcohol-related constructs. Both have been shown to predict numerous drinking outcomes, but these IATs have never been directly compared to one another. The purpose of this study was to compare these IATs and evaluate their incremental predictive validity. US undergraduate students (N=64, 50% female, mean age=21.98years) completed the Drinking Identity IAT, the Alcohol Identity IAT, an explicit measure of drinking identity, as well as measures of typical alcohol consumption and hazardous drinking. When evaluated in separate regression models that controlled for explicit drinking identity, results indicated that the Drinking Identity IAT and the Alcohol Identity IAT were significant, positive predictors of typical alcohol consumption, and that the Drinking Identity IAT, but not the Alcohol Identity IAT, was a significant predictor of hazardous drinking. When evaluated in the same regression models, the Drinking Identity IAT, but not the Alcohol Identity IAT, was significantly associated with typical and hazardous drinking. These results suggest that the Drinking Identity IAT and Alcohol Identity IAT are related but not redundant. Moreover, given that the Drinking Identity IAT, but not the Alcohol Identity IAT, incrementally predicted variance in drinking outcomes, identification with drinking behavior and social groups, as opposed to identification with alcohol itself, may be an especially strong predictor of drinking outcomes. Copyright © 2017 Elsevier Ltd. All rights reserved.
Flooding Fragility Experiments and Prediction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Curtis L.; Tahhan, Antonio; Muchmore, Cody
2016-09-01
This report describes the work that has been performed on flooding fragility, both the experimental tests being carried out and the probabilistic fragility predictive models being produced in order to make use of the test results. Flooding experiments involving full-scale doors have commenced in the Portal Evaluation Tank. The goal of these experiments is to develop a full-scale component flooding experiment protocol and to acquire data that can be used to create Bayesian regression models representing the fragility of these components. This work is in support of the Risk-Informed Safety Margin Characterization (RISMC) Pathway external hazards evaluation research and development.
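As a simple stand-in for the Bayesian regression models mentioned above, the sketch below fits a lognormal fragility curve (failure probability versus flood depth) to binary pass/fail test data by maximum likelihood. The synthetic test data and parameter names are assumptions for illustration, not results from the Portal Evaluation Tank experiments.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# Synthetic component tests: water depth (m) and whether the door failed.
depth = rng.uniform(0.2, 2.0, 60)
true_theta, true_beta = 1.0, 0.4
failed = rng.random(60) < norm.cdf(np.log(depth / true_theta) / true_beta)

def neg_log_lik(params):
    """Negative log-likelihood of a lognormal fragility P(fail|d) = Phi(ln(d/theta)/beta)."""
    ln_theta, ln_beta = params
    p = norm.cdf((np.log(depth) - ln_theta) / np.exp(ln_beta))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(failed * np.log(p) + (~failed) * np.log(1 - p))

res = minimize(neg_log_lik, x0=[0.0, np.log(0.5)], method="Nelder-Mead")
theta_hat, beta_hat = np.exp(res.x)
print(f"median capacity ~ {theta_hat:.2f} m, lognormal std ~ {beta_hat:.2f}")
print("P(fail | 1.5 m) ~", round(norm.cdf(np.log(1.5 / theta_hat) / beta_hat), 3))
```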
Study of indoor radon distribution using measurements and CFD modeling.
Chauhan, Neetika; Chauhan, R P; Joshi, M; Agarwal, T K; Aggarwal, Praveen; Sahoo, B K
2014-10-01
Measurement and/or prediction of indoor radon ((222)Rn) concentration are important due to the impact of radon on indoor air quality and consequent inhalation hazard. In recent times, computational fluid dynamics (CFD) based modeling has become the cost effective replacement of experimental methods for the prediction and visualization of indoor pollutant distribution. The aim of this study is to implement CFD based modeling for studying indoor radon gas distribution. This study focuses on comparison of experimentally measured and CFD modeling predicted spatial distribution of radon concentration for a model test room. The key inputs for simulation viz. radon exhalation rate and ventilation rate were measured as a part of this study. Validation experiments were performed by measuring radon concentration at different locations of test room using active (continuous radon monitor) and passive (pin-hole dosimeters) techniques. Modeling predictions have been found to be reasonably matching with the measurement results. The validated model can be used to understand and study factors affecting indoor radon distribution for more realistic indoor environment. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Quintana, Rolando
2003-01-01
The goal of this research was to integrate a previously validated and reliable safety model, called the Continuous Hazard Tracking and Failure Prediction Methodology (CHTFPM), into a software application. This led to the development of a predictive safety management information system (PSMIS). This means that the theory or principles of the CHTFPM were incorporated into a software package; hence, the PSMIS is referred to as the CHTFPM management information system (CHTFPM MIS). The purpose of the PSMIS is to reduce the time and manpower required to perform predictive studies as well as to facilitate the handling of the enormous quantities of information involved in such studies. The CHTFPM theory encompasses the philosophy of looking at safety engineering from a new perspective: a proactive, rather than a reactive, viewpoint. That is, corrective measures are taken before a problem occurs rather than after it has happened. The CHTFPM is therefore a predictive safety methodology because it foresees or anticipates accidents, system failures and unacceptable risks, so corrective action can be taken to prevent these unwanted outcomes. Consequently, the safety and reliability of systems or processes can be further improved by taking proactive and timely corrective actions.
Comprehensive Computational Pathological Image Analysis Predicts Lung Cancer Prognosis.
Luo, Xin; Zang, Xiao; Yang, Lin; Huang, Junzhou; Liang, Faming; Rodriguez-Canales, Jaime; Wistuba, Ignacio I; Gazdar, Adi; Xie, Yang; Xiao, Guanghua
2017-03-01
Pathological examination of histopathological slides is a routine clinical procedure for lung cancer diagnosis and prognosis. Although the classification of lung cancer has been updated to become more specific, only a small subset of the total morphological features are taken into consideration. The vast majority of the detailed morphological features of tumor tissues, particularly tumor cells' surrounding microenvironment, are not fully analyzed. The heterogeneity of tumor cells and close interactions between tumor cells and their microenvironments are closely related to tumor development and progression. The goal of this study is to develop morphological feature-based prediction models for the prognosis of patients with lung cancer. We developed objective and quantitative computational approaches to analyze the morphological features of pathological images for patients with NSCLC. Tissue pathological images were analyzed for 523 patients with adenocarcinoma (ADC) and 511 patients with squamous cell carcinoma (SCC) from The Cancer Genome Atlas lung cancer cohorts. The features extracted from the pathological images were used to develop statistical models that predict patients' survival outcomes in ADC and SCC, respectively. We extracted 943 morphological features from pathological images of hematoxylin and eosin-stained tissue and identified morphological features that are significantly associated with prognosis in ADC and SCC, respectively. Statistical models based on these extracted features stratified NSCLC patients into high-risk and low-risk groups. The models were developed from training sets and validated in independent testing sets: a predicted high-risk group versus a predicted low-risk group (for patients with ADC: hazard ratio = 2.34, 95% confidence interval: 1.12-4.91, p = 0.024; for patients with SCC: hazard ratio = 2.22, 95% confidence interval: 1.15-4.27, p = 0.017) after adjustment for age, sex, smoking status, and pathologic tumor stage. The results suggest that the quantitative morphological features of tumor pathological images predict prognosis in patients with lung cancer. Copyright © 2016 International Association for the Study of Lung Cancer. Published by Elsevier Inc. All rights reserved.
Preclinical Alzheimer's disease and longitudinal driving decline.
Roe, Catherine M; Babulal, Ganesh M; Head, Denise M; Stout, Sarah H; Vernon, Elizabeth K; Ghoshal, Nupur; Garland, Brad; Barco, Peggy P; Williams, Monique M; Johnson, Ann; Fierberg, Rebecca; Fague, M Scot; Xiong, Chengjie; Mormino, Elizabeth; Grant, Elizabeth A; Holtzman, David M; Benzinger, Tammie L S; Fagan, Anne M; Ott, Brian R; Carr, David B; Morris, John C
2017-01-01
Links between preclinical AD and driving difficulty onset would support the use of driving performance as an outcome in primary and secondary prevention trials among older adults (OAs). We examined whether AD biomarkers predicted the onset of driving difficulties among OAs. 104 OAs (65+ years) with normal cognition took part in biomarker measurements, a road test, and clinical and psychometric batteries, and self-reported their driving habits. Higher values of CSF tau/Aβ42 and ptau181/Aβ42 ratios, but not uptake on PIB amyloid imaging (p=.12), predicted time to a rating of Marginal or Fail on the driving test using Cox proportional hazards models. Hazard ratios (95% confidence interval) were 5.75 (1.70-19.53), p=.005 for CSF tau/Aβ42, and 6.19 (1.75-21.88), p=.005 for CSF ptau181/Aβ42. Preclinical AD predicted time to receiving a Marginal or Fail rating on an on-road driving test. Driving performance shows promise as a functional outcome in AD prevention trials.
NASA Astrophysics Data System (ADS)
Bora, S. S.; Scherbaum, F.; Kuehn, N. M.; Stafford, P.; Edwards, B.
2014-12-01
In a probabilistic seismic hazard assessment (PSHA) framework, it still remains a challenge to adjust ground motion prediction equations (GMPEs) for application in different seismological environments. In this context, this study presents a complete framework for the development of a response spectral GMPE that is easily adjustable to different seismological conditions and that does not suffer from the technical problems associated with adjustment in the response spectral domain. Essentially, the approach consists of an empirical FAS (Fourier Amplitude Spectrum) model and a duration model for ground motion, which are combined within the random vibration theory (RVT) framework to obtain the full response spectral ordinates. Additionally, the FAS corresponding to individual acceleration records are extrapolated beyond the frequency range defined by the data using the stochastic FAS model obtained by inversion as described in Edwards & Faeh (2013). To that end, an empirical duration model, tuned to optimize the fit between RVT-based and observed response spectral ordinates at each oscillator frequency, is derived. Although the main motivation of the presented approach was to address the adjustability issues of response spectral GMPEs, comparison of median predicted response spectra with other regional models indicates that the presented approach can also be used as a stand-alone model. Moreover, a significantly lower aleatory variability (σ < 0.5 in log units) at shorter periods, in comparison to other regional models, makes it a potentially viable alternative to classical regression-based GMPEs (on response spectral ordinates) for seismic hazard studies in the near future. The dataset used for the presented analysis is a subset of the recently compiled database RESORCE-2012, covering Europe, the Middle East and the Mediterranean region.
Donham, K J; Reynolds, S J; Whitten, P; Merchant, J A; Burmeister, L; Popendorf, W J
1995-03-01
Human respiratory health hazards for people working in livestock confinement buildings have been recognized since 1974. However, before comprehensive control programs can be implemented, more knowledge is needed of specific hazardous substances present in the air of these buildings, and at what concentrations they are harmful. Therefore, a medical epidemiological and exposure-response study was conducted on 207 swine producers using intensive housing systems (108 farms). Dose-response relationships between pulmonary function and exposures are reported here. Positive correlations were seen between change in pulmonary function over a work period and exposure to total dust, respirable dust, ammonia, respirable endotoxin, and the interactions of age-of-producer and dust exposure and years-of-working-in-the-facility and dust exposure. Relationships between baseline pulmonary function and exposures were not strong and therefore not pursued in this study. The correlations between exposure and response were stronger after 6 years of exposure. Multiple regression models were used to identify total dust and ammonia as the two primary environmental predictors of pulmonary function decrements over a work period. The regression models were then used to determine exposure concentrations related to pulmonary function decrements suggestive of a health hazard. Total dust concentrations ≥2.8 mg/m3 were predictive of a work period decrement of ≥10% in FEV1. Ammonia concentrations of ≥7.5 ppm were predictive of a ≥3% work period decrement in FEV1. These predictive concentrations were similar to a previous dose-response study, which suggested 2.5 mg/m3 of total dust and 7 ppm of NH3 were associated with significant work period decrements. Therefore, dust ≥2.8 mg/m3 and ammonia ≥7.5 ppm should be considered reasonable evidence for guidelines regarding hazardous exposure concentrations in this work environment.
Hazard Assessment in a Big Data World
NASA Astrophysics Data System (ADS)
Kossobokov, Vladimir; Nekrasova, Anastasia
2017-04-01
Open data in a Big Data World provides unprecedented opportunities for enhancing scientific studies and better understanding of the Earth System. At the same time, it opens wide avenues for deceptive associations in inter- and transdisciplinary data that mislead to erroneous predictions, which are unacceptable for implementation. Even the advanced tools of data analysis may lead to wrong assessments when inappropriately used to describe the phenomenon under consideration. A (self-) deceptive conclusion can be avoided only by verification of candidate models in experiments on empirical data, and in no other way. Seismology is not an exception. Moreover, the seismic evidence accumulated to date demonstrates clearly that most of the empirical relations commonly accepted in the early history of instrumental seismology can be proved erroneous when subjected to objective hypothesis testing. In many cases of seismic hazard assessment (SHA), either probabilistic or deterministic, term-less or short-term, the claims of a high potential of model forecasts are based on a flawed application of statistics and, therefore, are hardly suitable for communication to decision makers; this situation creates numerous deception points and resulting controversies. So far, most, if not all, of the standard probabilistic methods to assess seismic hazard and associated risks are based on subjective, commonly unrealistic, and even erroneous assumptions about seismic recurrence, and none of the proposed short-term precursory signals has shown sufficient evidence to be used as a reliable precursor of catastrophic earthquakes. Accurate testing against real observations must be done in advance of claiming seismically hazardous areas and/or times. The set of errors of the first and second kind in such a comparison permits evaluating the effectiveness of the SHA method and determining the optimal choice of parameters in regard to a user-defined cost-benefit function. The information obtained in testing experiments may supply us with realistic estimates of the confidence and accuracy of SHA predictions. If proved reliable, though not necessarily perfect, forecast/prediction-related recommendations on the level of risks in regard to engineering design, insurance, and emergency management can be used for efficient decision making.
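The kind of testing against real observations advocated here can be illustrated with a minimal error-diagram-style calculation: given a binary alarm series and the observed target events, one counts the rate of failures to predict and the fraction of space-time occupied by alarms, from which a simple skill measure follows. The toy data below are synthetic, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy experiment: for each time-space bin, was an alarm declared and did a
# target event actually occur? Values are synthetic placeholders.
alarm = rng.random(1000) < 0.2
event = rng.random(1000) < 0.05

hits   = np.sum(alarm & event)
misses = np.sum(~alarm & event)          # failures to predict

miss_rate      = misses / max(hits + misses, 1)   # error of the first kind in this context
alarm_fraction = alarm.mean()                     # cost: share of space-time under alarm

# Molchan-style summary: a skill-free (random) strategy satisfies
# miss_rate + alarm_fraction ~ 1, so sums well below 1 indicate skill.
print(f"miss rate = {miss_rate:.2f}, alarm fraction = {alarm_fraction:.2f}, "
      f"sum = {miss_rate + alarm_fraction:.2f}")
```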
TEM PSHA2015 Reliability Assessment
NASA Astrophysics Data System (ADS)
Lee, Y.; Wang, Y. J.; Chan, C. H.; Ma, K. F.
2016-12-01
The Taiwan Earthquake Model (TEM) developed a new probabilistic seismic hazard analysis (PSHA) for determining the probability of exceedance (PoE) of ground motion over a specified period in Taiwan. To investigate the adequacy of the seismic source parameters adopted in the 2015 PSHA of the TEM (TEM PSHA2015), we conducted several tests of the seismic source models. The observed maximal peak ground acceleration (PGA) values of the ML > 4.0 mainshocks in the 23-year data period of 1993-2015 were used to test the PGA predicted by the PSHA from the areal and subduction zone sources with the time-independent Poisson assumption. This comparison excluded the observations from the 1999 Chi-Chi earthquake, as this was the only earthquake associated with an identified active fault in the past 23 years. We used tornado diagrams to analyze the sensitivities of these source parameters to the ground motion values of the PSHA. This study showed that the predicted PGA for a 63% PoE in the 23-year period corresponded to the empirical PGA, and that the predicted numbers of PGA exceedances of a threshold value of 0.1 g were close to the observed numbers, confirming the applicability of the parameters for the areal and subduction zone sources. We adopted the disaggregation analysis from a hazard map to determine the contribution of the individual seismic sources to the hazard for six metropolitan cities in Taiwan. The sensitivity tests of the seismogenic structure parameters indicated that the slip rate and maximum magnitude are the dominant factors for the TEM PSHA2015. For the densely distributed faults in SW Taiwan, the maximum magnitude is more sensitive than the slip rate, raising concern about possible multi-segment ruptures with larger magnitudes in this area, which were not yet considered in TEM PSHA2015. The source category disaggregation also suggested that special attention to subduction zone earthquakes is necessary for long-period shaking hazards in Northern Taiwan.
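Under the time-independent Poisson assumption used in the test above, the relation between an annual exceedance rate, an exposure window and a probability of exceedance is straightforward; the sketch below shows the conversion in both directions (for instance, a roughly 63% PoE over a period corresponds to a return period equal to that period). The numbers are generic illustrations, not TEM results.

```python
import math

def poe(annual_rate, years):
    """Poisson probability of at least one exceedance in 'years'."""
    return 1.0 - math.exp(-annual_rate * years)

def rate_from_poe(p, years):
    """Annual exceedance rate implied by a PoE 'p' over 'years'."""
    return -math.log(1.0 - p) / years

# A 63.2% PoE in 23 years corresponds to a ~23-year return period:
rate = rate_from_poe(0.632, 23.0)
print(f"rate = {rate:.4f}/yr, return period = {1.0 / rate:.1f} yr")

# Expected number of exceedances of a PGA threshold in 23 years,
# given a hypothetical annual exceedance rate at a site:
print("PoE over 23 yr at rate 0.02/yr:", round(poe(0.02, 23.0), 3))
print("expected exceedances:", 0.02 * 23)
```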
Plenary: Progress in Regional Landslide Hazard Assessment—Examples from the USA
Baum, Rex L.; Schulz, William; Brien, Dianne L.; Burns, William J.; Reid, Mark E.; Godt, Jonathan W.
2014-01-01
Landslide hazard assessment at local and regional scales contributes to mitigation of landslides in developing and densely populated areas by providing information for (1) land development and redevelopment plans and regulations, (2) emergency preparedness plans, and (3) economic analysis to (a) set priorities for engineered mitigation projects and (b) define areas of similar levels of hazard for insurance purposes. US Geological Survey (USGS) research on landslide hazard assessment has explored a range of methods that can be used to estimate temporal and spatial landslide potential and probability for various scales and purposes. Cases taken primarily from our work in the U.S. Pacific Northwest illustrate and compare a sampling of methods, approaches, and progress. For example, landform mapping using high-resolution topographic data resulted in identification of about four times more landslides in Seattle, Washington, than previous efforts using aerial photography. Susceptibility classes based on the landforms captured 93 % of all historical landslides (all types) throughout the city. A deterministic model for rainfall infiltration and shallow landslide initiation, TRIGRS, was able to identify locations of 92 % of historical shallow landslides in southwest Seattle. The potentially unstable areas identified by TRIGRS occupied only 26 % of the slope areas steeper than 20°. Addition of an unsaturated infiltration model to TRIGRS expands the applicability of the model to areas of highly permeable soils. Replacement of the single cell, 1D factor of safety with a simple 3D method of columns improves accuracy of factor of safety predictions for both saturated and unsaturated infiltration models. A 3D deterministic model for large, deep landslides, SCOOPS, combined with a three-dimensional model for groundwater flow, successfully predicted instability in steep areas of permeable outwash sand and topographic reentrants. These locations are consistent with locations of large, deep, historically active landslides. For an area in Seattle, a composite of the three maps illustrates how maps produced by different approaches might be combined to assess overall landslide potential. Examples from Oregon, USA, illustrate how landform mapping and deterministic analysis for shallow landslide potential have been adapted into standardized methods for efficiently producing detailed landslide inventory and shallow landslide susceptibility maps that have consistent content and format statewide.
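The single-cell, 1D factor of safety mentioned above is typically an infinite-slope expression of the kind used in TRIGRS-type models; a minimal version, with the pressure head supplied directly and all soil parameters chosen purely for illustration, is sketched below.

```python
import math

def infinite_slope_fs(slope_deg, depth_m, psi_m,
                      cohesion_kpa=4.0, phi_deg=32.0, gamma_soil=19.0, gamma_w=9.81):
    """Infinite-slope factor of safety with pressure head psi (m) at depth Z.

    FS = tan(phi)/tan(slope) + (c - psi*gamma_w*tan(phi)) / (gamma_soil*Z*sin*cos)
    Parameter defaults are illustrative, not calibrated values.
    """
    d = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    driving = gamma_soil * depth_m * math.sin(d) * math.cos(d)
    return math.tan(phi) / math.tan(d) + \
           (cohesion_kpa - psi_m * gamma_w * math.tan(phi)) / driving

# Dry versus nearly saturated conditions on a 35-degree slope, 1.2 m deep:
print("dry:", round(infinite_slope_fs(35.0, 1.2, psi_m=0.0), 2))
print("wet:", round(infinite_slope_fs(35.0, 1.2, psi_m=1.0), 2))
```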
W. E. Dietrich; J. McKean; D. Bellugi; T. Perron
2007-01-01
Shallow landslides on steep slopes often mobilize as debris flows. The size of the landslide controls the initial size of the debris flows, defines the sediment discharge to the channel network, affects rates and scales of landform development, and influences the relative hazard potential. Currently the common practice in digital terrain-based models is to set the...
Li, Huixia; Luo, Miyang; Zheng, Jianfei; Luo, Jiayou; Zeng, Rong; Feng, Na; Du, Qiyun; Fang, Junqun
2017-02-01
An artificial neural network (ANN) model was developed to predict the risks of congenital heart disease (CHD) in pregnant women. This hospital-based case-control study involved 119 CHD cases and 239 controls, all recruited from birth defect surveillance hospitals in Hunan Province between July 2013 and June 2014. All subjects were interviewed face-to-face to fill in a questionnaire that covered 36 CHD-related variables. The 358 subjects were randomly divided into a training set and a testing set at the ratio of 85:15. The training set was used to identify the significant predictors of CHD by univariate logistic regression analyses and develop a standard feed-forward back-propagation neural network (BPNN) model for the prediction of CHD. The testing set was used to test and evaluate the performance of the ANN model. Univariate logistic regression analyses were performed on SPSS 18.0. The ANN models were developed on Matlab 7.1. The univariate logistic regression identified 15 predictors that were significantly associated with CHD, including education level (odds ratio = 0.55), gravidity (1.95), parity (2.01), history of abnormal reproduction (2.49), family history of CHD (5.23), maternal chronic disease (4.19), maternal upper respiratory tract infection (2.08), environmental pollution around maternal dwelling place (3.63), maternal exposure to occupational hazards (3.53), maternal mental stress (2.48), paternal chronic disease (4.87), paternal exposure to occupational hazards (2.51), intake of vegetable/fruit (0.45), intake of fish/shrimp/meat/egg (0.59), and intake of milk/soymilk (0.55). After many trials, we selected a 3-layer BPNN model with 15, 12, and 1 neuron in the input, hidden, and output layers, respectively, as the best prediction model. The prediction model has accuracies of 0.91 and 0.86 on the training and testing sets, respectively. The sensitivity, specificity, and Youden index on the testing set (training set) are 0.78 (0.83), 0.90 (0.95), and 0.68 (0.78), respectively. The areas under the receiver operating characteristic curve on the testing and training sets are 0.87 and 0.97, respectively. This study suggests that the BPNN model could be used to predict the risk of CHD in individuals. This model should be further improved by large-sample-size research.
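A minimal sketch of a feed-forward network with the 15-12-1 topology described above is given below using scikit-learn; the synthetic data stand in for the questionnaire variables and the hyperparameters are illustrative, so this is not a reproduction of the study's Matlab BPNN.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in for 358 subjects x 15 risk-factor predictors.
X = rng.random((358, 15))
score = X @ rng.normal(size=15) + 0.3 * rng.standard_normal(358)
y = (score > np.median(score)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.15, random_state=1)

# One hidden layer with 12 neurons -> a 15-12-1 feed-forward network.
clf = MLPClassifier(hidden_layer_sizes=(12,), activation="logistic",
                    max_iter=2000, random_state=1)
clf.fit(X_tr, y_tr)

auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"accuracy = {clf.score(X_te, y_te):.2f}, AUC = {auc:.2f}")
```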
Reliable assessment of the hazards or risks arising from groundwater contamination and the design of effective means of rehabilitation of contaminated sites requires the capability to predict the movement and fate of dissolved solutes in groundwater. The modeling of metal transp...
Reliable assessment of the hazards or risks arising from groundwater contamination and the design of effective means of rehabilitation of contaminated sites requires the capability to predict the movement and fate of dissolved solutes in groundwater. The modeling of metal transpor...
DOT National Transportation Integrated Search
2015-04-01
Safety at intersections is of significant interest to transportation professionals due to the large number of : possible conflicts that occur at those locations. In particular, rural intersections have been recognized as : one of the most hazardous l...
“httk”: EPA’s Tool for High Throughput Toxicokinetics (CompTox CoP)
Thousands of chemicals have been profiled by high-throughput screening programs such as ToxCast and Tox21; these chemicals are tested in part because most of them have limited or no data on hazard, exposure, or toxicokinetics. Toxicokinetic models aid in predicting tissue concentr...
AN EMPIRICAL MODEL TO PREDICT STYRENE EMISSIONS FROM FIBER-REINFORCED PLASTICS FABRICATION PROCESSES
Styrene is a designated hazardous air pollutant, per the 1990 Clean Air Act Amendments. It is also a tropospheric ozone precursor. Fiber-reinforced plastics (FRP) fabrication is the primary source of anthropogenic styrene emissions in the United States. This paper describes an em...
Mixture toxicology in the 21st century: Pathway-based concepts and tools
The past decade has witnessed notable evolution of approaches focused on predicting chemical hazards and risks in the absence of empirical data from resource-intensive in vivo toxicity tests. In silico models, in vitro high-throughput toxicity assays, and short-term in vivo tests...
20180312 - Ensemble QSAR Modeling to Predict Multispecies Fish Toxicity Points of Departure (SOT)
Due to the large quantity of new chemicals being developed and potentially introduced into aquatic ecosystems, there is a need to prioritize chemicals with the greatest likelihood of ecological hazard for further research. To this end, a useful in silico estimation of ecotoxicity...
PREDICTING SOIL SORPTION COEFFICIENTS OF ORGANIC CHEMICALS USING A NEURAL NETWORK MODEL
The soil/sediment adsorption partition coefficient normalized to organic carbon (Koc) is extensively used to assess the fate of organic chemicals in hazardous waste sites. Several attempts have been made to estimate the value of Koc from chemical structure ...
Analysis of North Atlantic Tropical Cyclone Intensity Change Using Data Mining
ERIC Educational Resources Information Center
Tang, Jiang
2010-01-01
Tropical cyclones (TC), especially when their intensity reaches hurricane scale, can become a costly natural hazard. Accurate prediction of tropical cyclone intensity is very difficult because of inadequate observations on TC structures, poor understanding of physical processes, coarse model resolution and inaccurate initial conditions, etc. This…
Geophysical Hazards and Preventive Disaster Management of Extreme Natural Events
NASA Astrophysics Data System (ADS)
Ismail-Zadeh, A.; Takeuchi, K.
2007-12-01
A geophysical hazard is a potentially damaging natural event and/or phenomenon that may cause loss of life or injury, property damage, social and economic disruption, or environmental degradation. Extreme natural hazards are a key manifestation of the complex hierarchical nonlinear Earth system. Understanding, accurately modeling and forecasting extreme hazards are among the most important scientific challenges. Several recent extreme natural events (e.g., the 2004 Great Indian Ocean earthquake and tsunami and the violent 2005 Hurricane Katrina) demonstrated strong coupling between the solid Earth and the ocean, and between the ocean and the atmosphere. These events resulted in great humanitarian tragedies because of weak preventive disaster management. The less often natural events occur (and extreme events are rare by definition), the more readily disaster managers postpone preparedness for them. The tendency to reduce funding for preventive disaster management of natural catastrophes seldom follows the rules of responsible stewardship for future generations, whether in developing countries or in highly developed economies, where it must be considered next to malfeasance. Protecting human life and property against earthquake disasters requires an uninterrupted chain of tasks: from (i) understanding the physics of the events, analysis and monitoring, through (ii) interpretation, modeling, hazard assessment, and prediction, to (iii) public awareness, preparedness, and preventive disaster management.
A fast, calibrated model for pyroclastic density currents kinematics and hazard
NASA Astrophysics Data System (ADS)
Esposti Ongaro, Tomaso; Orsucci, Simone; Cornolti, Fulvio
2016-11-01
Multiphase flow models represent valuable tools for the study of the complex, non-equilibrium dynamics of pyroclastic density currents. Particle sedimentation, flow stratification and rheological changes, depending on the flow regime, interaction with topographic obstacles, turbulent air entrainment, buoyancy reversal, and other complex features of pyroclastic currents can be simulated in two and three dimensions, by exploiting efficient numerical solvers and the improved computational capability of modern supercomputers. However, numerical simulations of polydisperse gas-particle mixtures are quite computationally expensive, so that their use in hazard assessment studies (where there is the need of evaluating the probability of hazardous actions over hundreds of possible scenarios) is still challenging. To this aim, a simplified integral (box) model can be used, under the appropriate hypotheses, to describe the kinematics of pyroclastic density currents over a flat topography, their scaling properties and their depositional features. In this work, multiphase flow simulations are used to evaluate integral model approximations, to calibrate its free parameters and to assess the influence of the input data on the results. Two-dimensional numerical simulations describe the generation and decoupling of a dense, basal layer (formed by progressive particle sedimentation) from the dilute transport system. In the Boussinesq regime (i.e., for solid mass fractions below about 0.1), the current Froude number (i.e., the ratio between the current inertia and buoyancy) does not strongly depend on initial conditions and it is consistent to that measured in laboratory experiments (i.e., between 1.05 and 1.2). For higher density ratios (solid mass fraction in the range 0.1-0.9) but still in a relatively dilute regime (particle volume fraction lower than 0.01), numerical simulations demonstrate that the box model is still applicable, but the Froude number depends on the reduced gravity. When the box model is opportunely calibrated with the numerical simulation results, the prediction of the flow runout is fairly accurate and the model predicts a rapid, non-linear decay of the flow kinetic energy (or dynamic pressure) with the distance from the source. The capability of PDC to overcome topographic obstacles can thus be analysed in the framework of the energy-conoid approach, in which the predicted kinetic energy of the flow front is compared with the potential energy jump associated with the elevated topography to derive a condition for blocking. Model results show that, although preferable to the energy-cone, the energy-conoid approach still has some serious limitations, mostly associated with the behaviour of the flow head. Implications of these outcomes are discussed in the context of probabilistic hazard assessment studies, in which a calibrated box model can be used as a fast pyroclastic density current emulator for Monte Carlo simulations.
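The integral (box) model referred to above can be reduced, for a constant-volume current spreading over a flat surface, to a front condition u = Fr*sqrt(g'h) plus volume conservation and a sedimentation law that depletes the particle load; the short sketch below integrates this system to estimate a runout distance. The Froude number, settling velocity and initial conditions are illustrative values, not a calibration of the simulations discussed in the abstract.

```python
import math

# Illustrative parameters (per unit width); not calibrated to the simulations above.
FR   = 1.1        # front Froude number
Q    = 2.0e4      # current volume per unit width (m^2), so that h * x = Q
PHI0 = 0.01       # initial particle volume fraction
WS   = 0.5        # particle settling velocity (m/s)
G    = 9.81
RHO_P, RHO_A = 2500.0, 1.0   # particle and ambient gas densities (kg/m^3)

def box_model(x0=1000.0, dt=0.2, t_max=7200.0):
    """Advance the current front until the particle load is essentially exhausted."""
    x, phi, t = x0, PHI0, 0.0
    while t < t_max and phi > 1e-3 * PHI0:
        h = Q / x                                     # volume conservation
        g_prime = G * (RHO_P - RHO_A) / RHO_A * phi   # reduced gravity of the suspension
        u = FR * math.sqrt(g_prime * h)               # box-model front condition
        x += u * dt
        phi = max(phi - WS * phi / h * dt, 0.0)       # particle sedimentation
        t += dt
    return x, t

runout, t_stop = box_model()
print(f"estimated runout ~ {runout / 1000:.1f} km after ~ {t_stop / 60:.1f} min")
```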
NASA Astrophysics Data System (ADS)
Nesvold, Erika; Greenberg, Adam; Erasmus, Nicolas; Van Heerden, Elmarie; Galache, J. L.; Dahlstrom, Eric; Marchis, Franck
2018-01-01
Several technologies have been proposed for deflecting a hazardous Solar System object on a trajectory that would otherwise impact the Earth. The effectiveness of each technology depends on several characteristics of the given object, including its orbit and size. The distribution of these parameters in the likely population of Earth-impacting objects can thus determine which of the technologies are most likely to be useful in preventing a collision with the Earth. None of the proposed deflection technologies has been developed and fully tested in space. Developing every proposed technology is currently prohibitively expensive, so determining now which technologies are most likely to be effective would allow us to prioritize a subset of proposed deflection technologies for funding and development. We will present a new model, the Deflector Selector, that takes as its input the characteristics of a hazardous object or population of such objects and predicts which technology would be able to perform a successful deflection. The model consists of a machine-learning algorithm trained on data produced by N-body integrations simulating the deflections. We will describe the model and present the results of tests of the effectiveness of nuclear explosives, kinetic impactors, and gravity tractors on three simulated populations of hazardous objects.
NASA Astrophysics Data System (ADS)
Nesvold, E. R.; Greenberg, A.; Erasmus, N.; van Heerden, E.; Galache, J. L.; Dahlstrom, E.; Marchis, F.
2018-05-01
Several technologies have been proposed for deflecting a hazardous Solar System object on a trajectory that would otherwise impact the Earth. The effectiveness of each technology depends on several characteristics of the given object, including its orbit and size. The distribution of these parameters in the likely population of Earth-impacting objects can thus determine which of the technologies are most likely to be useful in preventing a collision with the Earth. None of the proposed deflection technologies has been developed and fully tested in space. Developing every proposed technology is currently prohibitively expensive, so determining now which technologies are most likely to be effective would allow us to prioritize a subset of proposed deflection technologies for funding and development. We present a new model, the Deflector Selector, that takes as its input the characteristics of a hazardous object or population of such objects and predicts which technology would be able to perform a successful deflection. The model consists of a machine-learning algorithm trained on data produced by N-body integrations simulating the deflections. We describe the model and present the results of tests of the effectiveness of nuclear explosives, kinetic impactors, and gravity tractors on three simulated populations of hazardous objects.
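A highly simplified sketch of the kind of classifier described (trained on simulated deflection outcomes, then queried for new objects) is given below; the features, labelling rule and data are synthetic placeholders rather than the authors' N-body simulation results.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
N = 5000

# Synthetic impactor population: diameter (m), warning time (yr), approach speed (km/s).
diameter = 10 ** rng.uniform(1, 3, N)
warning  = rng.uniform(0.5, 20.0, N)
speed    = rng.uniform(10.0, 30.0, N)
X = np.column_stack([diameter, warning, speed])

# Toy stand-in for the simulated outcome: a deflection "succeeds" when the
# required effort (larger objects, shorter warning) stays small enough.
required_effort = diameter ** 1.5 / (warning * 50.0)
y = (required_effort < speed).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("held-out accuracy:", round(clf.score(X_te, y_te), 3))
print("P(success) for a 150 m object, 5 yr warning, 20 km/s:",
      round(clf.predict_proba([[150.0, 5.0, 20.0]])[0, 1], 2))
```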
On the predictive information criteria for model determination in seismic hazard analysis
NASA Astrophysics Data System (ADS)
Varini, Elisa; Rotondi, Renata
2016-04-01
Many statistical tools have been developed for evaluating, understanding, and comparing models, from both frequentist and Bayesian perspectives. In particular, the problem of model selection can be addressed according to whether the primary goal is explanation or, alternatively, prediction. In the former case, the criteria for model selection are defined over the parameter space whose physical interpretation can be difficult; in the latter case, they are defined over the space of the observations, which has a more direct physical meaning. In the frequentist approaches, model selection is generally based on an asymptotic approximation which may be poor for small data sets (e.g. the F-test, the Kolmogorov-Smirnov test, etc.); moreover, these methods often apply under specific assumptions on models (e.g. models have to be nested in the likelihood ratio test). In the Bayesian context, among the criteria for explanation, the ratio of the observed marginal densities for two competing models, named Bayes Factor (BF), is commonly used for both model choice and model averaging (Kass and Raftery, J. Am. Stat. Ass., 1995). But BF does not apply to improper priors and, even when the prior is proper, it is not robust to the specification of the prior. These limitations can be extended to two famous penalized likelihood methods as the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC), since they are proved to be approximations of -2log BF . In the perspective that a model is as good as its predictions, the predictive information criteria aim at evaluating the predictive accuracy of Bayesian models or, in other words, at estimating expected out-of-sample prediction error using a bias-correction adjustment of within-sample error (Gelman et al., Stat. Comput., 2014). In particular, the Watanabe criterion is fully Bayesian because it averages the predictive distribution over the posterior distribution of parameters rather than conditioning on a point estimate, but it is hardly applicable to data which are not independent given parameters (Watanabe, J. Mach. Learn. Res., 2010). A solution is given by Ando and Tsay criterion where the joint density may be decomposed into the product of the conditional densities (Ando and Tsay, Int. J. Forecast., 2010). The above mentioned criteria are global summary measures of model performance, but more detailed analysis could be required to discover the reasons for poor global performance. In this latter case, a retrospective predictive analysis is performed on each individual observation. In this study we performed the Bayesian analysis of Italian data sets by four versions of a long-term hazard model known as the stress release model (Vere-Jones, J. Physics Earth, 1978; Bebbington and Harte, Geophys. J. Int., 2003; Varini and Rotondi, Environ. Ecol. Stat., 2015). Then we illustrate the results on their performance evaluated by Bayes Factor, predictive information criteria and retrospective predictive analysis.
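The Watanabe criterion mentioned above can be computed directly from a matrix of pointwise log-likelihoods evaluated over posterior draws; a minimal sketch with simulated draws is shown below (the data and model are placeholders, not the stress release models analysed in the study).

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Toy setup: data from N(0.3, 1), posterior draws for the mean of a normal model.
y = rng.normal(0.3, 1.0, size=50)
mu_draws = rng.normal(y.mean(), 1.0 / np.sqrt(len(y)), size=4000)

# Pointwise log-likelihood matrix: rows = posterior draws, columns = observations.
log_lik = norm.logpdf(y[None, :], loc=mu_draws[:, None], scale=1.0)

# WAIC = -2 * (lppd - p_waic), with p_waic the sum of pointwise posterior variances.
lppd = np.sum(np.log(np.mean(np.exp(log_lik), axis=0)))
p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))
waic = -2.0 * (lppd - p_waic)

print(f"lppd = {lppd:.1f}, p_waic = {p_waic:.2f}, WAIC = {waic:.1f}")
```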
NASA Astrophysics Data System (ADS)
Staley, Dennis; Negri, Jacquelyn; Kean, Jason
2016-04-01
Population expansion into fire-prone steeplands has resulted in an increase in post-fire debris-flow risk in the western United States. Logistic regression methods for determining debris-flow likelihood and the calculation of empirical rainfall intensity-duration thresholds for debris-flow initiation represent two common approaches for characterizing hazard and reducing risk. Logistic regression models are currently being used to rapidly assess debris-flow hazard in response to design storms of known intensities (e.g. a 10-year recurrence interval rainstorm). Empirical rainfall intensity-duration thresholds comprise a major component of the United States Geological Survey (USGS) and the National Weather Service (NWS) debris-flow early warning system at a regional scale in southern California. However, these two modeling approaches remain independent, with each approach having limitations that do not allow for synergistic local-scale (e.g. drainage-basin scale) characterization of debris-flow hazard during intense rainfall. The current logistic regression equations consider rainfall a unique independent variable, which prevents the direct calculation of the relation between rainfall intensity and debris-flow likelihood. Regional (e.g. mountain range or physiographic province scale) rainfall intensity-duration thresholds fail to provide insight into the basin-scale variability of post-fire debris-flow hazard and require an extensive database of historical debris-flow occurrence and rainfall characteristics. Here, we present a new approach that combines traditional logistic regression and intensity-duration threshold methodologies. This method allows for local characterization of both the likelihood that a debris-flow will occur at a given rainfall intensity, the direct calculation of the rainfall rates that will result in a given likelihood, and the ability to calculate spatially explicit rainfall intensity-duration thresholds for debris-flow generation in recently burned areas. Our approach synthesizes the two methods by incorporating measured rainfall intensity into each model variable (based on measures of topographic steepness, burn severity and surface properties) within the logistic regression equation. This approach provides a more realistic representation of the relation between rainfall intensity and debris-flow likelihood, as likelihood values asymptotically approach zero when rainfall intensity approaches 0 mm/h, and increase with more intense rainfall. Model performance was evaluated by comparing predictions to several existing regional thresholds. The model, based upon training data collected in southern California, USA, has proven to accurately predict rainfall intensity-duration thresholds for other areas in the western United States not included in the original training dataset. In addition, the improved logistic regression model shows promise for emergency planning purposes and real-time, site-specific early warning. With further validation, this model may permit the prediction of spatially-explicit intensity-duration thresholds for debris-flow generation in areas where empirically derived regional thresholds do not exist. This improvement would permit the expansion of the early-warning system into other regions susceptible to post-fire debris flow.
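The core of the combined method can be illustrated with a small numerical sketch: rainfall intensity multiplies each basin predictor inside the logistic link, so the predicted debris-flow likelihood rises from near zero at zero rainfall, and the intensity corresponding to any chosen likelihood (an effective threshold) can be solved for directly. The coefficients and basin attributes below are invented for illustration, not the published model.

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical fitted coefficients (intercept, steepness, burn severity, soil term).
BETA = np.array([-3.5, 0.06, 0.04, 0.03])

def likelihood(intensity_mm_h, basin):
    """Debris-flow likelihood with rainfall intensity embedded in each predictor."""
    x = np.concatenate(([1.0], np.asarray(basin) * intensity_mm_h))
    return 1.0 / (1.0 + np.exp(-BETA @ x))

def threshold_intensity(basin, target=0.5):
    """Rainfall intensity at which the predicted likelihood reaches 'target'."""
    f = lambda i: likelihood(i, basin) - target
    return brentq(f, 0.0, 500.0)

basin = [0.7, 0.5, 0.6]        # illustrative steepness / burn severity / soil metrics
for i in (0.0, 10.0, 30.0):
    print(f"I = {i:5.1f} mm/h -> likelihood = {likelihood(i, basin):.2f}")
print("intensity threshold (p = 0.5):", round(threshold_intensity(basin), 1), "mm/h")
```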
Fu, Xia; Liang, Xinling; Song, Li; Huang, Huigen; Wang, Jing; Chen, Yuanhan; Zhang, Li; Quan, Zilin; Shi, Wei
2014-04-01
To develop a predictive model for circuit clotting in patients with continuous renal replacement therapy (CRRT). A total of 425 cases were selected. 302 cases were used to develop a predictive model of extracorporeal circuit life span during CRRT without citrate anticoagulation in 24 h, and 123 cases were used to validate the model. The prediction formula was developed using multivariate Cox proportional-hazards regression analysis, from which a risk score was assigned. The mean survival time of the circuit was 15.0 ± 1.3 h, and the rate of circuit clotting was 66.6% during 24 h of CRRT. Five significant variables were assigned a predicting score according to the regression coefficient: insufficient blood flow, no anticoagulation, hematocrit ≥0.37, lactic acid of arterial blood gas analysis ≤3 mmol/L and APTT < 44.2 s. The Hosmer-Lemeshow test showed no significant difference between the predicted and actual circuit clotting (R² = 0.232; P = 0.301). A risk score that includes the five above-mentioned variables can be used to predict the likelihood of extracorporeal circuit clotting in patients undergoing CRRT.
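A minimal sketch of turning Cox regression output into a bedside-style risk score is shown below: each predictor's regression coefficient is scaled to integer points, summed per patient, and, together with an assumed baseline survival, converted into a probability of circuit clotting within 24 h. The coefficients, point values and baseline survival are illustrative assumptions, not the values estimated in the study.

```python
import math

# Hypothetical Cox coefficients (log hazard ratios) for the five predictors.
COEFS = {
    "insufficient_blood_flow": 0.9,
    "no_anticoagulation":      0.8,
    "hematocrit_ge_0.37":      0.5,
    "lactate_le_3":            0.4,
    "aptt_lt_44.2":            0.6,
}
S0_24H = 0.60   # assumed baseline 24-h circuit survival when no risk factor is present

def risk_points(coefs, unit=0.1):
    """Convert log hazard ratios to integer points (1 point per 'unit' of log HR)."""
    return {k: round(v / unit) for k, v in coefs.items()}

def clot_probability(patient, coefs=COEFS, s0=S0_24H):
    """P(clotting within 24 h) = 1 - S0(24 h) ** exp(linear predictor)."""
    lp = sum(v for k, v in coefs.items() if patient.get(k, False))
    return 1.0 - s0 ** math.exp(lp)

patient = {"insufficient_blood_flow": True, "no_anticoagulation": False,
           "hematocrit_ge_0.37": True, "lactate_le_3": True, "aptt_lt_44.2": False}

print("points:", risk_points(COEFS))
print("predicted 24-h clotting probability:", round(clot_probability(patient), 2))
```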
Assessment and prediction of debris-flow hazards
Wieczorek, Gerald F.; ,
1993-01-01
Study of debris-flow geomorphology and initiation mechanism has led to better understanding of debris-flow processes. This paper reviews how this understanding is used in current techniques for assessment and prediction of debris-flow hazards.
Seismic Hazard Analysis for Armenia and its Surrounding Areas
NASA Astrophysics Data System (ADS)
Klein, E.; Shen-Tu, B.; Mahdyiar, M.; Karakhanyan, A.; Pagani, M.; Weatherill, G.; Gee, R. C.
2017-12-01
The Republic of Armenia is located within the central part of a large, 800 km wide, intracontinental collision zone between the Arabian and Eurasian plates. Active deformation occurs along numerous structures in the form of faulting, folding, and volcanism distributed throughout the entire zone from the Bitlis-Zagros suture belt to the Greater Caucasus Mountains and between the relatively rigid Black Sea and Caspian Sea blocks, without any single structure that can be claimed as predominant. In recent years, significant work has been done on mapping active faults and on compiling and reviewing historic and paleoseismological studies in the region, especially in Armenia; these recent research contributions have greatly improved our understanding of the seismogenic sources and their characteristics. In this study we performed a seismic hazard analysis for Armenia and its surrounding areas using the latest detailed geological and paleoseismological information on active faults, strain rates estimated from kinematic modeling of GPS data, and all available historic earthquake data. The seismic source model uses a combination of characteristic earthquake and gridded seismicity models to take advantage of the detailed knowledge of the known faults while acknowledging the distributed deformation and regional tectonic environment of the collision zone. In addition, the fault model considers single and multi-segment fault rupture scenarios, with earthquakes that can rupture any part of a multiple-segment fault zone. The ground motion model uses a set of ground motion prediction equations (GMPEs) selected from a pool of GMPEs based on the assessment of each GMPE against the available strong motion data in the region. The hazard is computed in GEM's OpenQuake engine. We will present final hazard results and discuss the uncertainties associated with various input data and their impact on the hazard at various locations.
SCEC Earthquake System Science Using High Performance Computing
NASA Astrophysics Data System (ADS)
Maechling, P. J.; Jordan, T. H.; Archuleta, R.; Beroza, G.; Bielak, J.; Chen, P.; Cui, Y.; Day, S.; Deelman, E.; Graves, R. W.; Minster, J. B.; Olsen, K. B.
2008-12-01
The SCEC Community Modeling Environment (SCEC/CME) collaboration performs basic scientific research using high performance computing with the goal of developing a predictive understanding of earthquake processes and seismic hazards in California. SCEC/CME research areas include dynamic rupture modeling, wave propagation modeling, probabilistic seismic hazard analysis (PSHA), and full 3D tomography. SCEC/CME computational capabilities are organized around the development and application of robust, re-usable, well-validated simulation systems we call computational platforms. The SCEC earthquake system science research program includes a wide range of numerical modeling efforts, and we continue to extend our numerical modeling codes to include more realistic physics and to run at higher and higher resolution. During this year, the SCEC/USGS OpenSHA PSHA computational platform was used to calculate PSHA hazard curves and hazard maps using the new UCERF2.0 ERF and new 2008 attenuation relationships. Three SCEC/CME modeling groups ran 1Hz ShakeOut simulations using different codes and computer systems and carefully compared the results. The DynaShake Platform was used to calculate several dynamic rupture-based source descriptions equivalent in magnitude and final surface slip to the ShakeOut 1.2 kinematic source description. A SCEC/CME modeler produced 10Hz synthetic seismograms for the ShakeOut 1.2 scenario rupture by combining 1Hz deterministic simulation results with 10Hz stochastic seismograms. SCEC/CME modelers ran an ensemble of seven ShakeOut-D simulations to investigate the variability of ground motions produced by dynamic rupture-based source descriptions. The CyberShake Platform was used to calculate more than 15 new probabilistic seismic hazard analysis (PSHA) hazard curves using full 3D waveform modeling and the new UCERF2.0 ERF. The SCEC/CME group has also produced significant computer science results this year. Large-scale SCEC/CME high performance codes were run on NSF TeraGrid sites, including simulations that used the full PSC Big Ben supercomputer (4096 cores) and simulations that ran on more than 10K cores at TACC Ranger. The SCEC/CME group used scientific workflow tools and grid computing to run more than 1.5 million jobs at NCSA for the CyberShake project. Visualizations produced by a SCEC/CME researcher of the 10Hz ShakeOut 1.2 scenario simulation data were used by the USGS in ShakeOut publications and public outreach efforts. OpenSHA was ported onto an NSF supercomputer and was used to produce very high resolution PSHA hazard maps that contained more than 1.6 million hazard curves.
NASA Astrophysics Data System (ADS)
Berge-Thierry, C.; Hollender, F.; Guyonnet-Benaize, C.; Baumont, D.; Ameri, G.; Bollinger, L.
2017-09-01
Seismic analysis in the context of nuclear safety in France is currently guided by a purely deterministic approach based on Basic Safety Rule (Règle Fondamentale de Sûreté) RFS 2001-01 for seismic hazard assessment, and on the ASN/2/01 Guide that provides design rules for nuclear civil engineering structures. After the 2011 Tohoku earthquake, nuclear operators worldwide were asked to estimate the ability of their facilities to sustain extreme seismic loads. The French licensees then defined the 'hard core seismic levels', which are higher than those considered for design or re-assessment of the safety of a facility. These were initially established on a deterministic basis, and they have finally been justified through state-of-the-art probabilistic seismic hazard assessments. The appreciation and propagation of uncertainties when assessing seismic hazard in France have changed considerably over the past 15 years. This evolution provided the motivation for the present article, the objectives of which are threefold: (1) to provide a description of the current practices in France to assess seismic hazard in terms of nuclear safety; (2) to discuss and highlight the sources of uncertainties and their treatment; and (3) to use a specific case study to illustrate how extended source modeling can help to constrain the key assumptions or parameters that impact upon seismic hazard assessment. This article discusses in particular seismic source characterization, strong ground motion prediction, and maximum magnitude constraints, according to the practice of the French Atomic Energy Commission. Due to improvements in strong motion databases in terms of the number and quality of the records, their metadata, and the uncertainty characterization, several recently published empirical ground motion prediction models are eligible for seismic hazard assessment in France. We show that propagation of epistemic and aleatory uncertainties is feasible in a deterministic approach as well as in a probabilistic one. Assessment of seismic hazard in France in the framework of the safety of nuclear facilities should consider these recent advances. In this sense, the opening of discussions with all of the stakeholders in France to update the reference documents (i.e., RFS 2001-01; ASN/2/01 Guide) appears appropriate in the short term.
NASA Astrophysics Data System (ADS)
Greco, Roberto; Pagano, Luca
2017-12-01
To manage natural risks, an increasing effort is being put into the development of early warning systems (EWS), that is, approaches that face catastrophic phenomena by timely forecasting and by spreading alarms throughout the exposed population. Research efforts aimed at the development and implementation of effective EWS should especially concern the definition and calibration of the interpretative model. This paper analyses the main features characterizing predictive models working in EWS by discussing their aims and their features in terms of model accuracy, the evolutionary stage of the phenomenon at which the prediction is carried out, and model architecture. Original classification criteria based on these features are developed throughout the paper and shown in their practical implementation through examples of flow-like landslides and earth flows, both of which are characterized by rapid evolution and are quite representative of many applications of EWS.
Timmer, Margriet R.; Martinez, Pierre; Lau, Chiu T.; Westra, Wytske M.; Calpe, Silvia; Rygiel, Agnieszka M.; Rosmolen, Wilda D.; Meijer, Sybren L.; ten Kate, Fiebo J.W.; Dijkgraaf, Marcel G.W.; Mallant-Hent, Rosalie C.; Naber, Anton H.J.; van Oijen, Arnoud H.A.M.; Baak, Lubbertus C.; Scholten, Pieter; Böhmer, Clarisse J.M.; Fockens, Paul; Maley, Carlo C.; Graham, Trevor A.; Bergman, Jacques J.G.H.M.; Krishnadath, Kausilia K.
2016-01-01
Objective The risk of developing adenocarcinoma in non-dysplastic Barrett's oesophagus is low and difficult to predict. Accurate tools for risk stratification are needed to increase the efficiency of surveillance. We aimed to develop a prediction model for progression using clinical variables and genetic markers. Methods In a prospective cohort of patients with non-dysplastic Barrett's oesophagus, we evaluated six molecular markers: p16, p53, Her-2/neu, 20q, MYC, and aneusomy by DNA fluorescence in situ hybridisation on brush cytology specimens. Primary study outcomes were the development of high-grade dysplasia or oesophageal adenocarcinoma. The most predictive clinical variables and markers were determined using Cox proportional-hazards models, receiver-operating-characteristic curves and a leave-one-out analysis. Results A total of 428 patients participated (345 men; median age 60 years) with a cumulative follow-up of 2019 patient-years (median 45 months per patient). Of these patients, 22 progressed; nine developed high-grade dysplasia and 13 oesophageal adenocarcinoma. The clinical variables, age and circumferential Barrett's length, and the markers, p16 loss, MYC gain, and aneusomy, were significantly associated with progression on univariate analysis. We defined an ‘Abnormal Marker Count’ that counted abnormalities in p16, MYC and aneusomy, which significantly improved risk prediction beyond using just age and Barrett's length. In multivariate analysis, these three factors identified a high-risk group with an 8.7-fold (95% CI, 2.6 to 29.8) increased hazard ratio compared with the low-risk group, with an area under the curve of 0.76 (95% CI, 0.66 to 0.86). Conclusion A prediction model based on age, Barrett's length, and the markers p16, MYC, and aneusomy determines progression risk in non-dysplastic Barrett's oesophagus. PMID:26104750
NASA Astrophysics Data System (ADS)
Tao, J.; Barros, A. P.
2014-01-01
Debris flows associated with rainstorms are a frequent and devastating hazard in the Southern Appalachians in the United States. Whereas warm-season events are clearly associated with heavy rainfall intensity, the same cannot be said for the cold-season events. Instead, there is a relationship between large (cumulative) rainfall events independently of season, and thus hydrometeorological regime, and debris flows. This suggests that the dynamics of subsurface hydrologic processes play an important role as a trigger mechanism, specifically through soil moisture redistribution by interflow. We further hypothesize that the transient mass fluxes associated with the temporal-spatial dynamics of interflow govern the timing of shallow landslide initiation, and subsequent debris flow mobilization. The first objective of this study is to investigate this relationship. The second objective is to assess the physical basis for a regional coupled flood prediction and debris flow warning system. For this purpose, uncalibrated model simulations of well-documented debris flows in headwater catchments of the Southern Appalachians using a 3-D surface-groundwater hydrologic model coupled with slope stability models are examined in detail. Specifically, we focus on two vulnerable headwater catchments that experience frequent debris flows, the Big Creek and the Jonathan Creek in the Upper Pigeon River Basin, North Carolina, and three distinct weather systems: an extremely heavy summertime convective storm in 2011; a persistent winter storm lasting several days; and a severe winter storm in 2009. These events were selected due to the optimal availability of rainfall observations; availability of detailed field surveys of the landslides shortly after they occurred, which can be used to evaluate model predictions; and because they are representative of events that cause major economic losses in the region. The model results substantiate that interflow is a useful prognostic of conditions necessary for the initiation of slope instability, and should therefore be considered explicitly in landslide hazard assessments. Moreover, the relationships between slope stability and interflow are strongly modulated by the topography and catchment-specific geomorphologic features that determine subsurface flow convergence zones. The three case studies demonstrate the value of coupled prediction of flood response and debris flow initiation potential in the context of developing a regional hazard warning system.
Srinonprasert, V; Chalermsri, C; Aekplakorn, W
2018-05-04
Frailty is a clinical state of increased vulnerability resulting from aging-associated decline. We aimed to determine whether a Thai Frailty Index predicted all-cause mortality in community-dwelling older Thais when accounting for age, gender and socioeconomic status. Data from 8195 subjects aged 60 years and over from the Fourth Thai National Health Examination Survey were used to create the Thai Frailty Index by calculating the ratio of accumulated deficits, using a cut-off point of 0.25 to define frailty. The associations were explored using Cox proportional hazards models. The mean age of participants was 69.2 years (SD 6.8). The prevalence of frailty was 22.1%. The Thai Frailty Index significantly predicted mortality (hazard ratio = 2.34, 95% CI 2.10-2.61, p < 0.001). The association between frailty and mortality was stronger in males (hazard ratio = 2.71, 95% CI 2.33-3.16). Higher wealth status had a protective effect among non-frail older adults but not among frail ones. In community-dwelling older Thai adults, the Thai Frailty Index demonstrated a high prevalence of frailty and predicted mortality. Frail older Thai adults did not gain the mortality-reducing protective effect of higher socioeconomic status. Maintaining health rather than accumulating wealth may be better for a longer, healthier life for older people in middle-income countries. Copyright © 2018. Published by Elsevier B.V.
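A brief sketch of the deficit-accumulation construction and a Cox fit on simulated data (the deficit items, cohort and follow-up times are invented; only the deficit-ratio definition and the 0.25 cut-off follow the abstract), using the lifelines package:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)

# Hypothetical deficit matrix: 200 subjects x 40 binary deficit items
deficits = pd.DataFrame(rng.binomial(1, 0.2, size=(200, 40)))
frailty_index = deficits.sum(axis=1) / deficits.shape[1]   # ratio of accumulated deficits
frail = (frailty_index >= 0.25).astype(int)                # 0.25 cut-off from the abstract

df = pd.DataFrame({
    "frail": frail,
    "age": rng.integers(60, 90, 200),
    "time": rng.exponential(5.0, 200),                     # simulated follow-up, years
    "death": rng.binomial(1, 0.15 + 0.25 * frail),         # frail subjects die more often here
})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="death")
print(cph.hazard_ratios_)                                  # hazard ratio for 'frail', adjusted for age
```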
A robust method to forecast volcanic ash clouds
Denlinger, Roger P.; Pavolonis, Mike; Sieglaff, Justin
2012-01-01
Ash clouds emanating from volcanic eruption columns often form trails of ash extending thousands of kilometers through the Earth's atmosphere, disrupting air traffic and posing a significant hazard to air travel. To mitigate such hazards, the community charged with reducing flight risk must accurately assess risk of ash ingestion for any flight path and provide robust forecasts of volcanic ash dispersal. In response to this need, a number of different transport models have been developed for this purpose and applied to recent eruptions, providing a means to assess uncertainty in forecasts. Here we provide a framework for optimal forecasts and their uncertainties given any model and any observational data. This involves random sampling of the probability distributions of input (source) parameters to a transport model and iteratively running the model with different inputs, each time assessing the predictions that the model makes about ash dispersal by direct comparison with satellite data. The results of these comparisons are embodied in a likelihood function whose maximum corresponds to the minimum misfit between model output and observations. Bayes theorem is then used to determine a normalized posterior probability distribution and from that a forecast of future uncertainty in ash dispersal. The nature of ash clouds in heterogeneous wind fields creates a strong maximum likelihood estimate in which most of the probability is localized to narrow ranges of model source parameters. This property is used here to accelerate probability assessment, producing a method to rapidly generate a prediction of future ash concentrations and their distribution based upon assimilation of satellite data as well as model and data uncertainties. Applying this method to the recent eruption of Eyjafjallajökull in Iceland, we show that the 3 and 6 h forecasts of ash cloud location probability encompassed the location of observed satellite-determined ash cloud loads, providing an efficient means to assess all of the hazards associated with these ash clouds.
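The sampling-and-weighting step described above can be sketched as follows; the toy "transport model", prior ranges and synthetic satellite loads are placeholders, but the structure (sample the prior, score each sample by its misfit likelihood, normalise with Bayes' theorem, form a weighted forecast) mirrors the described workflow:

```python
import numpy as np

rng = np.random.default_rng(1)

def run_transport_model(source_params, n_cells=50):
    """Stand-in for an ash transport model: returns a column load per grid cell."""
    height, rate = source_params
    return rate * np.exp(-np.linspace(0, 5, n_cells) / height)

observed = run_transport_model((2.0, 8.0)) + rng.normal(0, 0.3, 50)  # synthetic satellite loads
sigma = 0.3                                                          # assumed observation error

# 1. Sample source parameters from the prior
samples = np.column_stack([rng.uniform(0.5, 5.0, 2000),    # plume-height scale
                           rng.uniform(1.0, 20.0, 2000)])  # emission rate
# 2. Likelihood of each sample from the model-data misfit
log_like = np.array([-0.5 * np.sum((run_transport_model(p) - observed) ** 2) / sigma**2
                     for p in samples])
# 3. Normalised posterior weights (Bayes' theorem with a flat prior)
w = np.exp(log_like - log_like.max())
w /= w.sum()
# 4. Posterior-weighted forecast of ash load and its uncertainty
forecasts = np.array([run_transport_model(p) for p in samples])
mean_forecast = w @ forecasts
print(mean_forecast[:5])
```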
NASA Technical Reports Server (NTRS)
Duffy, Stephen F.; Gyekenyesi, John P.
1989-01-01
Presently there are many opportunities for the application of ceramic materials at elevated temperatures. In the near future ceramic materials are expected to supplant high temperature metal alloys in a number of applications. It thus becomes essential to develop a capability to predict the time-dependent response of these materials. The creep rupture phenomenon is discussed, and a time-dependent reliability model is outlined that integrates continuum damage mechanics principles and Weibull analysis. Several features of the model are presented in a qualitative fashion, including predictions of both reliability and hazard rate. In addition, a comparison of the continuum and the microstructural kinetic equations highlights a strong resemblance in the two approaches.
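For the Weibull part of the analysis, the standard two-parameter reliability and hazard-rate expressions can be written as below; the shape and scale values are illustrative assumptions, not material properties from the report:

```python
import numpy as np

def weibull_reliability(t, beta, eta):
    """Probability that the component survives to time t: R(t) = exp(-(t/eta)**beta)."""
    return np.exp(-(t / eta) ** beta)

def weibull_hazard_rate(t, beta, eta):
    """Instantaneous failure rate: h(t) = (beta/eta) * (t/eta)**(beta - 1)."""
    return (beta / eta) * (t / eta) ** (beta - 1)

t = np.linspace(1.0, 1000.0, 5)          # hours at temperature (illustrative)
beta, eta = 1.8, 5000.0                  # assumed shape and scale parameters
print(weibull_reliability(t, beta, eta))
print(weibull_hazard_rate(t, beta, eta))
```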
A data base approach for prediction of deforestation-induced mass wasting events
NASA Technical Reports Server (NTRS)
Logan, T. L.
1981-01-01
A major topic of concern in timber management is determining the impact of clear-cutting on slope stability. Deforestation treatments on steep mountain slopes have often resulted in a high frequency of major mass wasting events. The Geographic Information System (GIS) is a potentially useful tool for predicting the location of mass wasting sites. With a raster-based GIS, digitally encoded maps of slide hazard parameters can be overlayed and modeled to produce new maps depicting high probability slide areas. The present investigation has the objective to examine the raster-based information system as a tool for predicting the location of the clear-cut mountain slopes which are most likely to experience shallow soil debris avalanches. A literature overview is conducted, taking into account vegetation, roads, precipitation, soil type, slope-angle and aspect, and models predicting mass soil movements. Attention is given to a data base approach and aspects of slide prediction.
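A minimal sketch of the raster overlay idea on a toy grid (layer ratings and weights are assumptions for illustration, not values from the study):

```python
import numpy as np

# Hypothetical 0-1 hazard ratings per raster layer, all on the same grid
slope    = np.array([[0.9, 0.4], [0.7, 0.2]])   # steepness rating
soil     = np.array([[0.6, 0.6], [0.3, 0.8]])   # soil-type susceptibility
clearcut = np.array([[1.0, 0.0], [1.0, 0.0]])   # recently clear-cut (1) or not (0)

# Weighted overlay; the weights are assumed, not taken from the paper
weights = {"slope": 0.5, "soil": 0.3, "clearcut": 0.2}
hazard = (weights["slope"] * slope
          + weights["soil"] * soil
          + weights["clearcut"] * clearcut)

high_risk = hazard > 0.6    # cells flagged as likely debris-avalanche sites
print(hazard)
print(high_risk)
```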
Landslide risk models for decision making.
Bonachea, Jaime; Remondo, Juan; de Terán, José Ramón Díaz; González-Díez, Alberto; Cendrero, Antonio
2009-11-01
This contribution presents a quantitative procedure for landslide risk analysis and zoning considering hazard, exposure (or value of elements at risk), and vulnerability. The method provides the means to obtain landslide risk models (expressing expected damage due to landslides on material elements and economic activities in monetary terms, according to different scenarios and periods) useful to identify areas where mitigation efforts will be most cost effective. It allows identifying priority areas for the implementation of actions to reduce vulnerability (elements) or hazard (processes). The procedure proposed can also be used as a preventive tool, through its application to strategic environmental impact analysis (SEIA) of land-use plans. The underlying hypothesis is that reliable predictions about hazard and risk can be made using models based on a detailed analysis of past landslide occurrences in connection with conditioning factors and data on past damage. The results show that the approach proposed and the hypothesis formulated are essentially correct, providing estimates of the order of magnitude of expected losses for a given time period. Uncertainties, strengths, and shortcomings of the procedure and results obtained are discussed and potential lines of research to improve the models are indicated. Finally, comments and suggestions are provided to generalize this type of analysis.
Sutradhar, Rinku; Atzema, Clare; Seow, Hsien; Earle, Craig; Porter, Joan; Barbera, Lisa
2014-12-01
Although prior studies show the importance of self-reported symptom scores as predictors of cancer survival, most are based on scores recorded at a single point in time. To show that information on repeated assessments of symptom severity improves predictions for risk of death and to use updated symptom information for determining whether worsening of symptom scores is associated with a higher hazard of death. This was a province-based longitudinal study of adult outpatients who had a cancer diagnosis and had assessments of symptom severity. We implemented a time-to-death Cox model with a time-varying covariate for each symptom to account for changing symptom scores over time. This model was compared with that using only a time-fixed (baseline) covariate for each symptom. The regression coefficients of each model were derived based on a randomly selected 60% of patients, and then, the predictive performance of each model was assessed via concordance probabilities when applied to the remaining 40% of patients. This study had 66,112 patients diagnosed with cancer and more than 310,000 assessments of symptoms. The use of repeated assessments of symptom scores improved predictions for risk of death compared with using only baseline symptom scores. Increased pain and fatigue and reduced appetite were the strongest predictors for death. If available, researchers should consider including changing information on symptom scores, as opposed to only baseline information on symptom scores, when examining hazard of death among patients with cancer. Worsening of pain, fatigue, and appetite may be a flag for impending death. Copyright © 2014 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.
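A compact sketch of a time-varying-covariate Cox fit on simulated long-format data (subject records split at each assessment; the data generator and column names are invented), using lifelines' CoxTimeVaryingFitter:

```python
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(2)
rows = []
for pid in range(200):
    t, score = 0, rng.integers(0, 4)
    while t < 180:
        stop = t + 30                                   # monthly symptom assessment
        score = min(10, max(0, score + rng.integers(-1, 2)))
        event = rng.random() < 0.02 * (1 + score)       # higher score -> higher death risk (simulated)
        rows.append((pid, t, stop, score, int(event)))
        if event:
            break
        t = stop

df = pd.DataFrame(rows, columns=["id", "start", "stop", "pain", "event"])
ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
print(ctv.summary[["coef", "exp(coef)"]])   # hazard ratio per unit increase in the updated score
```

The key difference from a baseline-only model is the long format: each row carries the most recent symptom score for that interval, so the hazard at any time uses updated rather than baseline information.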
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hwang, H.H.M.; Chen, C.H.S.
1990-04-16
An assessment of the seismic hazard that exists along the major crude oil pipeline running through the New Madrid seismic zone from southeastern Louisiana to Patoka, Illinois is examined in the report. An 1811-1812 type New Madrid earthquake with moment magnitude 8.2 is assumed to occur at three locations where large historical earthquakes have occurred. Six pipeline crossings of the major rivers in West Tennessee are chosen as the sites for hazard evaluation because of the liquefaction potential at these sites. A seismologically-based model is used to predict the bedrock accelerations. Uncertainties in three model parameters, i.e., stress parameter, cutoff frequency, and strong-motion duration, are included in the analysis. Each parameter is represented by three typical values. From the combination of these typical values, a total of 27 earthquake time histories can be generated for each selected site due to an 1811-1812 type New Madrid earthquake occurring at a postulated seismic source.
Eolian Modeling System: Predicting Windblown Dust Hazards in Battlefield Environments
2011-05-03
Related publications listed in the report include: Pelletier, J.D., H. Mitasova, R.S. Harmon, and M. Overton, The effects of interdune vegetation changes on eolian dune field evolution: a numerical-modeling case study at Jockey's Ridge; and Pelletier, J.D., Controls on the height and spacing of eolian ripples and transverse dunes: a numerical modeling investigation, Geomorphology, 105, 322-333, 2009.
NASA Astrophysics Data System (ADS)
Vilain, J.
Approaches to major hazard assessment and prediction are reviewed. The topics discussed are: the source term (phenomenology/modeling of release, influence on early stages of dispersion); dispersion (atmospheric advection, diffusion and deposition, with emphasis on dense/cold gases); combustion (flammable clouds and mists, covering flash fires, deflagration, and transition to detonation, in mostly unconfined/partly confined situations); blast formation, propagation, and interaction with structures; catastrophic fires (pool fires, torches and fireballs; highly reactive substances); runaway reactions; features of more general interest; toxic substances, excluding toxicology; and dust explosions (phenomenology and protective measures).
In silico prediction of drug-induced myelotoxicity by using Naïve Bayes method.
Zhang, Hui; Yu, Peng; Zhang, Teng-Guo; Kang, Yan-Li; Zhao, Xiao; Li, Yuan-Yuan; He, Jia-Hui; Zhang, Ji
2015-11-01
Drug-induced myelotoxicity usually leads to decreased production of platelets, red cells, and white cells. Thus, early identification and characterization of myelotoxicity hazard in drug development is essential. The purpose of this investigation was to develop a prediction model of drug-induced myelotoxicity by using a Naïve Bayes classifier. For comparison, other prediction models based on support vector machine and single-hidden-layer feed-forward neural network methods were also established. Among all the prediction models, the Naïve Bayes classification model showed the best prediction performance, which offered an average overall prediction accuracy of [Formula: see text] for the training set and [Formula: see text] for the external test set. The significant contribution of this study is that we developed the first Naïve Bayes classification model of drug-induced myelotoxicity using a larger-scale dataset, which could be employed for the prediction of this adverse effect. In addition, several important molecular descriptors and substructures of myelotoxic compounds have been identified, which should be taken into consideration in the design of new candidate compounds to produce safer and more effective drugs, ultimately reducing the attrition rate in later stages of drug development.
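A hedged sketch of the general technique, a Bernoulli Naïve Bayes classifier on binary structural fingerprints, using synthetic data rather than the study's compounds or descriptors:

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical stand-in for structural fingerprints: 500 compounds x 128 bits,
# labelled 1 (myelotoxic) or 0 (non-myelotoxic)
rng = np.random.default_rng(3)
X = rng.integers(0, 2, size=(500, 128))
y = (X[:, :5].sum(axis=1) > 2).astype(int)   # toxicity tied to a few "alert" bits (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = BernoulliNB().fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))

# Bits whose presence most raises the toxicity log-odds (cf. substructure alerts)
log_odds = clf.feature_log_prob_[1] - clf.feature_log_prob_[0]
print("top alert bits:", np.argsort(log_odds)[-5:])
```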
Model predictions of ocular injury from 1315-nm laser light
NASA Astrophysics Data System (ADS)
Polhamus, Garrett D.; Zuclich, Joseph A.; Cain, Clarence P.; Thomas, Robert J.; Foltz, Michael
2003-06-01
With the advent of future weapons systems that employ high energy lasers, the 1315 nm wavelength will present a new laser safety hazard to the armed forces. Experiments in non-human primates using this wavelength have demonstrated a range of ocular injuries, including corneal, lenticular and retinal lesions, as a function of pulse duration and spot size at the cornea. To improve our understanding of these phenomena, there is a need for a mathematical model that properly predicts these injuries and their dependence on appropriate exposure parameters. This paper describes the use of a finite difference model of laser thermal injury in the cornea and retina. The model was originally developed for use with shorter wavelength laser irradiation and, as such, requires estimation of several key parameters used in the computations. The predictions from the model are compared to the experimental data, and conclusions are drawn regarding the ability of the model to properly follow the published observations at this wavelength.
Partitioning of polar and non-polar neutral organic chemicals into human and cow milk.
Geisler, Anett; Endo, Satoshi; Goss, Kai-Uwe
2011-10-01
The aim of this work was to develop a predictive model for milk/water partition coefficients of neutral organic compounds. Batch experiments were performed for 119 diverse organic chemicals in human milk and raw and processed cow milk at 37°C. No differences (<0.3 log units) in the partition coefficients of these types of milk were observed. The polyparameter linear free energy relationship model fit the calibration data well (SD=0.22 log units). An experimental validation data set including hormones and hormone active compounds was predicted satisfactorily by the model. An alternative modelling approach based on log K(ow) revealed a poorer performance. The model presented here provides a significant improvement in predicting enrichment of potentially hazardous chemicals in milk. In combination with physiologically based pharmacokinetic modelling this improvement in the estimation of milk/water partitioning coefficients may allow a better risk assessment for a wide range of neutral organic chemicals. Copyright © 2011 Elsevier Ltd. All rights reserved.
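The polyparameter LFER referred to above takes the standard Abraham form log K = c + eE + sS + aA + bB + vV; a small sketch with placeholder system coefficients (not the fitted milk/water constants) is shown below:

```python
def pp_lfer_log_k(E, S, A, B, V, coeffs):
    """Polyparameter LFER: log K = c + e*E + s*S + a*A + b*B + v*V,
    where E, S, A, B, V are the Abraham solute descriptors and
    coeffs = (c, e, s, a, b, v) are the system (phase-pair) constants."""
    c, e, s, a, b, v = coeffs
    return c + e * E + s * S + a * A + b * B + v * V

# Placeholder system constants for illustration only, not the fitted milk/water values
milk_water_coeffs = (-0.3, 0.4, -0.8, -0.2, -3.5, 2.9)
# Illustrative descriptors for a moderately lipophilic neutral solute
print(pp_lfer_log_k(E=0.80, S=0.90, A=0.00, B=0.45, V=1.20, coeffs=milk_water_coeffs))
```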
Das, Rudra Narayan; Roy, Kunal
2014-06-01
Hazardous potential of ionic liquids is becoming an issue of high concern with increasing application of these compounds in various industrial processes. Predictive toxicological modeling on ionic liquids provides a rational assessment strategy and aids in developing suitable guidance for designing novel analogues. The present study attempts to explore the chemical features of ionic liquids responsible for their ecotoxicity towards the green algae Scenedesmus vacuolatus by developing mathematical models using extended topochemical atom (ETA) indices along with other categories of chemical descriptors. The entire study has been conducted with reference to the OECD guidelines for QSAR model development using predictive classification and regression modeling strategies. The best models from both the analyses showed that ecotoxicity of ionic liquids can be decreased by reducing chain length of cationic substituents and increasing hydrogen bond donor feature in cations, and replacing bulky unsaturated anions with simple saturated moiety having less lipophilic heteroatoms. Copyright © 2013 Elsevier Ltd. All rights reserved.
NBC Hazard Prediction Model Capability Analysis
1999-09-01
Of the tactical units surveyed, only the 82nd Airborne Division indicated any real experience with either model. The tactical units surveyed did use some form... Tracer Experiment (1987) and ETEX = European Tracer Experiment (1994). These data sets include Phase I Dugway data, the Prairie Grass data set...
Assessment of the Effects of Entrainment and Wind Shear on Nuclear Cloud Rise Modeling
NASA Astrophysics Data System (ADS)
Zalewski, Daniel; Jodoin, Vincent
2001-04-01
Accurate modeling of nuclear cloud rise is critical in hazard prediction following a nuclear detonation. This thesis recommends improvements to the model currently used by DOD. It considers a single-term versus a three-term entrainment equation, the value of the entrainment and eddy viscous drag parameters, as well as the effect of wind shear in the cloud rise following a nuclear detonation. It examines departures from the 1979 version of the Department of Defense Land Fallout Interpretive Code (DELFIC) with the current code used in the Hazard Prediction and Assessment Capability (HPAC) code version 3.2. The recommendation for a single-term entrainment equation, with constant value parameters, without wind shear corrections, and without cloud oscillations is based on both a statistical analysis using 67 U.S. nuclear atmospheric test shots and the physical representation of the modeling. The statistical analysis optimized the parameter values of interest for four cases: the three-term entrainment equation with wind shear and without wind shear as well as the single-term entrainment equation with and without wind shear. The thesis then examines the effect of cloud oscillations as a significant departure in the code. Modifications to user input atmospheric tables are identified as a potential problem in the calculation of stabilized cloud dimensions in HPAC.
Mathematical modeling of heavy metals contamination from MSW landfill site in Khon Kaen, Thailand.
Tantemsapya, N; Naksakul, Y; Wirojanagud, W
2011-01-01
Kham Bon landfill site is one of many municipal waste disposal sites in Thailand that are in an unsanitary condition. The site has been receiving municipal waste without separating hazardous waste since 1968. Heavy metals including Pb, Cr and Cd are found in soil and groundwater around the site, posing a health risk to people living nearby. In this research, contaminant transport modelling of Pb, Cr and Cd was carried out using MODFLOW for two periods: the present (2010) and a 20-year prediction (2030). Model results showed that heavy metals, especially Pb and Cr, migrated toward the north-eastern and south-eastern directions. The 20-year prediction showed that heavy metals tend to move from the top soil to the deeper aquifer. The migration would not exceed a 500 m radius from the landfill centre in the next 20 years, which is considered to be a slow process. Based on the simulation model, it is recommended that mitigation measures be undertaken to reduce the risk from landfill contamination. Hazardous waste should be separated for proper management. Groundwater contamination in the aquifer should be closely monitored. Consumption of groundwater within a 500 m radius must be avoided. In addition, rehabilitation of the landfill site should be undertaken to prevent further mobilization of pollutants.
Creating a Coastal National Elevation Database (CoNED) for science and conservation applications
Thatcher, Cindy A.; Brock, John C.; Danielson, Jeffrey J.; Poppenga, Sandra K.; Gesch, Dean B.; Palaseanu-Lovejoy, Monica; Barras, John; Evans, Gayla A.; Gibbs, Ann
2016-01-01
The U.S. Geological Survey is creating the Coastal National Elevation Database, an expanding set of topobathymetric elevation models that extend seamlessly across coastal regions of high societal or ecological significance in the United States that are undergoing rapid change or are threatened by inundation hazards. Topobathymetric elevation models are raster datasets useful for inundation prediction and other earth science applications, such as the development of sediment-transport and storm surge models. These topobathymetric elevation models are being constructed by the broad regional assimilation of numerous topographic and bathymetric datasets, and are intended to fulfill the pressing needs of decision makers establishing policies for hazard mitigation and emergency preparedness, coastal managers tasked with coastal planning compatible with predictions of inundation due to sea-level rise, and scientists investigating processes of coastal geomorphic change. A key priority of this coastal elevation mapping effort is to foster collaborative lidar acquisitions that meet the standards of the USGS National Geospatial Program's 3D Elevation Program, a nationwide initiative to systematically collect high-quality elevation data. The focus regions are located in highly dynamic environments, for example in areas subject to shoreline change, rapid wetland loss, hurricane impacts such as overwash and wave scouring, and/or human-induced changes to coastal topography.
A spatio-temporal model for probabilistic seismic hazard zonation of Tehran
NASA Astrophysics Data System (ADS)
Hashemi, Mahdi; Alesheikh, Ali Asghar; Zolfaghari, Mohammad Reza
2013-08-01
A precondition for all disaster management steps, building damage prediction, and construction code developments is a hazard assessment that shows the exceedance probabilities of different ground motion levels at a site, considering different near- and far-field earthquake sources. The seismic sources are usually categorized as time-independent area sources and time-dependent fault sources. While the former incorporates small and medium events, the latter takes into account only the large characteristic earthquakes. In this article, a probabilistic approach is proposed to aggregate the effects of time-dependent and time-independent sources on seismic hazard. The methodology is then applied to generate three probabilistic seismic hazard maps of Tehran for 10%, 5%, and 2% exceedance probabilities in 50 years. The results indicate an increase in peak ground acceleration (PGA) values toward the southeastern part of the study area, and the PGA variations are mostly controlled by the shear wave velocities across the city. In addition, the implementation of the methodology takes advantage of GIS capabilities, especially raster-based analyses and representations. During the estimation of the PGA exceedance rates, the emphasis has been placed on incorporating the effects of different attenuation relationships and seismic source models by using a logic tree.
NASA Astrophysics Data System (ADS)
Akinci, A.; Pace, B.
2017-12-01
In this study, we discuss the variability of seismic hazard in terms of peak ground acceleration (PGA) at the 475-year return period in the Southern Apennines of Italy. The uncertainty and parametric sensitivity are presented to quantify the impact of several fault parameters on ground motion predictions for 10% exceedance in 50-year hazard. A time-independent PSHA model is constructed based on the long-term recurrence behavior of seismogenic faults, adopting the characteristic earthquake model for those sources capable of rupturing the entire fault segment with a single maximum magnitude. The fault-based source model uses the dimensions and slip rates of mapped faults to develop magnitude-frequency estimates for characteristic earthquakes. Variability of each selected fault parameter is represented by a truncated normal distribution defined by a standard deviation about a mean value. A Monte Carlo approach, based on random balanced sampling of a logic tree, is used in order to capture the uncertainty in the seismic hazard calculations. For generating both uncertainty and sensitivity maps, we perform 200 simulations for each of the fault parameters. The results are synthesized both as frequency-magnitude distributions of the modeled faults and as different maps: the overall uncertainty maps provide a confidence interval for the PGA values, and the parameter uncertainty maps determine the sensitivity of the hazard assessment to the variability of every logic-tree branch. The logic-tree branches analyzed through the Monte Carlo approach are maximum magnitude, fault length, fault width, fault dip and slip rate. The overall variability of these parameters is determined by varying them simultaneously in the hazard calculations, while the sensitivity to each parameter is determined by varying that fault parameter while fixing the others. However, in this study we do not investigate the sensitivity of the mean hazard results to the consideration of different GMPEs. The distribution of possible seismic hazard results is illustrated by a 95% confidence factor map, which indicates the dispersion about the mean value, and a coefficient of variation map, which shows the percent variability. The results of our study clearly illustrate the influence of active fault parameters on probabilistic seismic hazard maps.
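A simplified sketch of the Monte Carlo treatment of fault-parameter uncertainty: truncated normal sampling of geometry and slip rate, a generic magnitude-area scaling relation, and a moment-balance recurrence rate. All numbers and the scaling coefficients are illustrative assumptions, not the study's inputs:

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(4)

def trunc_normal(mean, sd, lo, hi, n):
    """Truncated normal sampler used for each uncertain fault parameter."""
    a, b = (lo - mean) / sd, (hi - mean) / sd
    return truncnorm.rvs(a, b, loc=mean, scale=sd, size=n, random_state=rng)

n = 200                                              # simulations, as in the abstract
length   = trunc_normal(30.0, 5.0, 15.0, 45.0, n)    # km
width    = trunc_normal(12.0, 2.0,  8.0, 16.0, n)    # km
sliprate = trunc_normal(1.0, 0.3,  0.3,  2.0, n)     # mm/yr

# Characteristic magnitude from rupture area (generic magnitude-area scaling; the
# coefficients below are approximate and used only for illustration)
area = length * width                                # km^2
mw = 4.07 + 0.98 * np.log10(area)

# Mean recurrence from moment balance (rigidity 3e10 Pa), then an annual rate
m0 = 10 ** (1.5 * mw + 9.05)                         # seismic moment, N*m
moment_rate = 3e10 * (length * 1e3) * (width * 1e3) * (sliprate * 1e-3)
annual_rate = moment_rate / m0

print("Mw 16th/50th/84th percentiles:", np.percentile(mw, [16, 50, 84]))
print("rate 16th/50th/84th percentiles:", np.percentile(annual_rate, [16, 50, 84]))
```

Holding all but one parameter fixed and re-running the same loop gives the per-parameter sensitivity described in the abstract.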
NASA Astrophysics Data System (ADS)
Frederick, J. M.; Bull, D. L.; Jones, C.; Roberts, J.; Thomas, M. A.
2016-12-01
Arctic coastlines are receding at accelerated rates, putting existing and future activities in the developing coastal Arctic environment at extreme risk. For example, at Oliktok Long Range Radar Site, erosion that was not expected until 2040 was reached as of 2014 (Alaska Public Media). As the Arctic Ocean becomes increasingly ice-free, rates of coastal erosion will likely continue to increase as (a) increased ice-free waters generate larger waves, (b) sea levels rise, and (c) coastal permafrost soils warm and lose strength/cohesion. Due to the complex and rapidly varying nature of the Arctic region, little is known about the increasing waves, changing circulation, permafrost soil degradation, and the response of the coastline to changes in these combined conditions. However, as scientific focus has been shifting towards the polar regions, Arctic science is rapidly advancing, increasing our understanding of complex Arctic processes. Our present understanding allows us to begin to develop and evaluate the coupled models necessary for the prediction of coastal erosion in support of Arctic risk assessments. What are the best steps towards the development of a coupled model for Arctic coastal erosion? This work focuses on our current understanding of Arctic conditions and identifying the tools and methods required to develop an integrated framework capable of accurately predicting Arctic coastline erosion and assessing coastal risk and hazards. We will present a summary of the state-of-the-science, and identify existing tools and methods required to develop an integrated diagnostic and monitoring framework capable of accurately predicting and assessing Arctic coastline erosion, infrastructure risk, and coastal hazards. The summary will describe the key coastal processes to simulate, appropriate models to use, effective methods to couple existing models, and identify gaps in knowledge that require further attention to make progress in our understanding of Arctic coastal erosion. * Co-authors listed in alphabetical order. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Predictive validity of the AUDIT for hazardous alcohol consumption in recently released prisoners.
Thomas, Emma; Degenhardt, Louisa; Alati, Rosa; Kinner, Stuart
2014-01-01
This study aimed to assess the predictive validity of the Alcohol Use Disorders Identification Test (AUDIT) among adult prisoners with respect to hazardous drinking following release, and identify predictors of post-release hazardous drinking among prisoners screening positive for risk of alcohol-related harm on the AUDIT. Data came from a survey-based longitudinal study of 1325 sentenced adult prisoners in Queensland, Australia. Baseline interviews were conducted pre-release with follow-up at 3 and 6 months post-release. We calculated sensitivity, specificity and area under the receiver operating characteristic (AUROC) to quantify the predictive validity of the AUDIT administered at baseline with respect to post-release hazardous drinking. Other potential predictors of hazardous drinking were measured by self-report and their association with the outcome was examined using logistic regression. At a cut-point of 8 or above, sensitivity of the AUDIT with respect to hazardous drinking at 3-month follow-up was 81.0% (95%CI: 77.9-84.6%) and specificity was 65.6% (95%CI: 60.6-70.3%). The AUROC was 0.78 (95%CI: 0.75-0.81), indicating moderate accuracy. Among those scoring 8 or above, high expectations to drink post-release (AOR: 2.49; 95%CI: 1.57-3.94) and past amphetamine-type stimulant (ATS) use (AOR: 1.64; 95%CI: 1.06-2.56) were significantly associated with hazardous drinking at 3 months post-release. Results were similar at 6 months. Among adult prisoners in our sample, pre-release AUDIT scores predicted hazardous drinking six months after release with acceptable accuracy, sensitivity and specificity. Among prisoners screening positive on the AUDIT, expectations of post-release drinking and ATS use are potential targets for intervention to reduce future hazardous drinking. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
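The screening metrics reported above can be reproduced on any scored cohort as sketched below (synthetic AUDIT scores and outcomes, not the study data); the AUROC uses the full score, while sensitivity and specificity are computed at the cut-point of 8:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(5)
# Synthetic stand-ins: baseline AUDIT scores and post-release hazardous drinking
audit = rng.integers(0, 30, 1000)
hazardous = (rng.random(1000) < 1 / (1 + np.exp(-(audit - 10) / 4))).astype(int)

cutoff = 8
predicted = (audit >= cutoff).astype(int)
tn, fp, fn, tp = confusion_matrix(hazardous, predicted).ravel()
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("AUROC:", roc_auc_score(hazardous, audit))   # uses the full score, not just the cut-point
```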
The New Italian Seismic Hazard Model
NASA Astrophysics Data System (ADS)
Marzocchi, W.; Meletti, C.; Albarello, D.; D'Amico, V.; Luzi, L.; Martinelli, F.; Pace, B.; Pignone, M.; Rovida, A.; Visini, F.
2017-12-01
In 2015 the Seismic Hazard Center (Centro Pericolosità Sismica - CPS) of the National Institute of Geophysics and Volcanology was commissioned to coordinate the national scientific community with the aim of elaborating a new reference seismic hazard model, mainly finalized to the update of the seismic code. The CPS designed a roadmap for releasing within three years a significantly renewed PSHA model, with regard both to the updated input elements and to the strategies to be followed. The main requirements of the model were discussed in meetings with the experts in earthquake engineering who will then participate in the revision of the building code. The activities were organized in 6 tasks: program coordination, input data, seismicity models, ground motion predictive equations (GMPEs), computation and rendering, and testing. The input data task has been selecting the most updated information about seismicity (historical and instrumental), seismogenic faults, and deformation (both from seismicity and geodetic data). The seismicity models have been elaborated in terms of classic source areas, fault sources and gridded seismicity, based on different approaches. The GMPE task has selected the most recent models, accounting for their tectonic suitability and forecasting performance. The testing phase has been planned to design statistical procedures to test, with the available data, the whole seismic hazard model as well as single components such as the seismicity models and the GMPEs. In this talk we show some preliminary results, summarize the overall strategy for building the new Italian PSHA model, and discuss in detail important novelties that we put forward. Specifically, we adopt a new formal probabilistic framework to interpret the outcomes of the model and to test it meaningfully; this requires a proper definition and characterization of both aleatory variability and epistemic uncertainty, which we accomplish through an ensemble modeling strategy. We use a weighting scheme for the different components of the PSHA model that has been built through three independent steps: a formal experts' elicitation, the outcomes of the testing phase, and the correlation between the outcomes. Finally, we explore through different techniques the influence of the declustering procedure on seismic hazard.
Science should warn people of looming disaster
NASA Astrophysics Data System (ADS)
Kossobokov, Vladimir
2014-05-01
Contemporary Science is responsible for not coping with the challenging changes in Exposures and their Vulnerability inflicted by a growing population, its concentration, etc., which result in a steady increase of Losses from Natural Hazards. Scientists are in debt to Society for the lack of special knowledge, education, and communication. In fact, it appears that few seismic hazard assessment programs and/or methodologies were tested appropriately against real observations before being endorsed for the estimation of earthquake-related risks. The fatal evidence and aftermath of the past decades prove that many of the existing internationally accepted methodologies are grossly misleading and are evidently unacceptable for any kind of responsible risk evaluation and knowledgeable disaster prevention. In contrast, the confirmed reliability of pattern recognition aimed at earthquake-prone areas and times of increased probability, along with realistic earthquake scaling and scenario modeling, allows us to conclude that Contemporary Science can do a better job in disclosing Natural Hazards, assessing Risks, and delivering this state-of-the-art knowledge of looming disaster in advance of catastrophic events. In lieu of seismic observations long enough for a reliable probabilistic assessment, or a comprehensive physical theory of earthquake recurrence, pattern recognition applied to available geophysical and/or geological data sets remains a broad avenue to follow in seismic hazard forecast/prediction. Moreover, a better understanding of the seismic process in terms of the non-linear dynamics of a hierarchical system of blocks-and-faults and deterministic chaos has led to new approaches for assessing time-dependent seismic hazard, based on multiscale analysis of seismic activity and a reproducible intermediate-term earthquake prediction technique. The algorithms, which make use of the multidisciplinary data available and account for the fractal nature of earthquake distributions in space and time, have confirmed their reliability by durable statistical testing in on-going regular real-time application lasting more than 20 years. Geoscientists must initiate a shift in the minds of the community from pessimistic disbelief in forecast/prediction products to optimistic, challenging views on Hazard Predictability in space and time, so as not to repeat missed opportunities for disaster preparedness, as happened in advance of the 2009 M6.3 L'Aquila earthquake in Italy and the 2011 M9.0 mega-thrust off the Pacific coast of the Tōhoku region in Japan.
Early identification systems for emerging foodborne hazards.
Marvin, H J P; Kleter, G A; Prandini, A; Dekkers, S; Bolton, D J
2009-05-01
This paper provides a non-exhaustive overview of early warning systems for emerging foodborne hazards that are operating in various places in the world. Special attention is given to endpoint-focussed early warning systems (i.e. ECDC, ISIS and GPHIN) and hazard-focussed early warning systems (i.e. FVO, RASFF and OIE), and their merit in successfully identifying a food safety problem at an early stage is discussed. Besides these early warning systems, which are based on monitoring of either disease symptoms or hazards, early warning systems and/or activities that intend to predict the occurrence of a food safety hazard at the very beginning of its development, or before, are also described. Examples are trend analysis, horizon scanning, early warning systems for mycotoxins in maize and/or wheat, and information exchange networks (e.g. OIE and GIEWS). Furthermore, recent initiatives that aim to develop predictive early warning systems based on the holistic principle are discussed. The assumption of the researchers applying this principle is that developments outside the food production chain that are either directly or indirectly related to the development of a particular food safety hazard may also provide valuable information to predict the development of this hazard.
The need for sustained and integrated high-resolution mapping of dynamic coastal environments
Stockdon, Hilary F.; Lillycrop, Jeff W.; Howd, Peter A.; Wozencraft, Jennifer M.
2007-01-01
The United States' coastal zone is dynamic, evolving in response to both human activities and natural processes. Protecting coastal resources and populations requires a detailed understanding of the physical setting as well as of the processes driving change. Sustained coastal mapping allows change to be documented and baseline conditions to be established, and, in conjunction with physical process models, allows future behavior to be predicted. Recent advances in mapping technology, including hyperspectral imagers and airborne lidars, allow rapid collection of national-scale land use information and high-resolution elevation data. Evaluation of coastal hazard risk depends critically on these rich data sets. Coastal elevation data, for example, are a fundamental parameter in storm surge models that predict flooding locations, and land use maps are a foundation for identifying the most vulnerable populations and resources. A comprehensive national coastal mapping plan, designed to take advantage of recent progress in mapping technology and in data collection, management, and distribution, would provide a wealth of information for studying the physical processes of change, for managing and protecting coastal resources and communities, and for determining the hazard vulnerability of coastal areas.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bruce J. Mincher; Stephen P. Mezyk; William J. Cooper
2010-01-01
Halonitromethanes (HNMs) are byproducts formed through ozonation and chlorine/chloramine disinfection processes in drinking waters that contain dissolved organic matter and bromide ions. These species occur at low concentration, but have been determined to have high cytotoxicity and mutagenicity and therefore may represent a human health hazard. In this study, we have investigated the chemistry involved in the mineralization of HNMs to non-hazardous inorganic products through the application of advanced oxidation and reduction processes. We have combined measured absolute reaction rate constants for the reactions of chloronitromethane, bromonitromethane and dichloronitromethane with the hydroxyl radical and the hydrated electron with a kinetic computer model in an attempt to elucidate the reaction pathways of these HNMs. The results are compared to measurements of stable products resulting from steady-state 60Co γ-irradiations of the same compounds. The model predicted the decomposition of the parent compounds and the ingrowth of chloride and bromide ions with excellent accuracy, but the prediction of the total nitrate ion concentration was slightly in error, reflecting the complexity of nitrogen oxide species reactions in irradiated solution.
The issues of current rainfall estimation techniques in mountain natural multi-hazard investigation
NASA Astrophysics Data System (ADS)
Zhuo, Lu; Han, Dawei; Chen, Ningsheng; Wang, Tao
2017-04-01
Mountain hazards (e.g., landslides, debris flows, and floods) induced by rainfall are complex phenomena that require good knowledge of rainfall representation at different spatiotemporal scales. This study reveals that rainfall estimation from gauges is rather unrepresentative over large spatial areas in mountain regions. As a result, the conventional practice of adopting a triggering threshold for hazard early warning purposes is insufficient. The main reason is the huge orographic influence on rainfall distribution. Modern rainfall estimation methods, such as numerical weather prediction modelling and remote sensing utilising radar from space or on land, are able to provide spatially more representative rainfall information in mountain areas. But unlike rain gauges, they provide rainfall measurements only indirectly. Remote sensing suffers from many sources of errors, such as weather conditions, attenuation and sampling methods, while numerical weather prediction models suffer from spatiotemporal and amplitude errors depending on the model physics, dynamics, and model configuration. A case study based on Sichuan, China is used to illustrate the significant differences among the three aforementioned rainfall estimation methods. We argue that none of these methods can be relied on individually, and the challenge is how to make full use of the three methods conjunctively, because each of them provides only partial information. We propose that a data fusion approach should be adopted, based on the Bayesian inference method. However, such an approach requires uncertainty information from all of these estimation techniques, which still needs extensive research. We hope this study will raise awareness of this important issue and highlight the knowledge gap that should be filled so that such a challenging problem can be tackled collectively by the community.
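One simple form such a fusion could take, under the strong assumption of independent Gaussian errors with known variances, is precision weighting of the three estimates; the values below are purely illustrative:

```python
import numpy as np

def fuse_gaussian(estimates, variances):
    """Precision-weighted (Bayesian, Gaussian) fusion of independent estimates."""
    w = 1.0 / np.asarray(variances)
    mean = np.sum(w * np.asarray(estimates)) / np.sum(w)
    var = 1.0 / np.sum(w)
    return mean, var

# Hypothetical 1-hour rainfall estimates for one mountain catchment (mm)
gauge, radar, nwp = 12.0, 20.0, 28.0
# Assumed error variances reflecting poor gauge representativeness, radar
# attenuation/beam blockage, and NWP amplitude errors
mean, var = fuse_gaussian([gauge, radar, nwp], [36.0, 16.0, 64.0])
print(f"fused estimate: {mean:.1f} mm (sd {var**0.5:.1f} mm)")
```

In practice the error variances are exactly the hard-to-obtain uncertainty information the abstract identifies as the outstanding research gap.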
Depmann, Martine; Broer, Simone L; van der Schouw, Yvonne T; Tehrani, Fahimeh R; Eijkemans, Marinus J; Mol, Ben W; Broekmans, Frank J
2016-02-01
This review aimed to appraise data on the prediction of age at natural menopause (ANM) based on antimüllerian hormone (AMH), antral follicle count (AFC), and mother's ANM, to evaluate clinical usefulness and to identify directions for further research. We conducted three systematic reviews of the literature to identify studies of menopause prediction based on AMH, AFC, or mother's ANM, corrected for baseline age. The six studies selected in the search for AMH all consistently demonstrated AMH as being capable of predicting ANM (hazard ratio, 5.6-9.2). The sole study reporting on mother's ANM indicated that this variable was capable of predicting ANM (hazard ratio, 9.1-9.3). Two studies provided analyses of AFC and yielded conflicting results, making this marker less strong. AMH is currently the most promising marker for ANM prediction. The predictive capacity of mother's ANM demonstrated in a single study makes this marker a promising contributor to AMH for menopause prediction. Models, however, do not predict the extremes of menopause age very well and have wide prediction intervals. These markers clearly need improvement before they can be used for individual prediction of menopause in the clinical setting. Moreover, potential limitations for such use include variations in the AMH assays used and a lack of correction for factors or diseases affecting AMH levels or ANM. Future studies should include women of a broad age range (irrespective of cycle regularity) and should base predictions on repeated AMH measurements. Furthermore, currently unknown candidate predictors need to be identified.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wendelin, Timothy J; Ho, Clifford K.; Horstman, Luke
This paper presents a study of alternative heliostat standby aiming strategies and their impact on avian flux hazards and operational performance of a concentrating solar power plant. A mathematical model was developed that predicts the bird-feather temperature as a function of solar irradiance, thermal emittance, convection, and thermal properties of the feather. The irradiance distribution in the airspace above the Ivanpah Unit 2 heliostat field was simulated using a ray-trace model for two different times of the day, four days of the year, and nine different standby aiming strategies. The impact of the alternative aiming strategies on operational performance was assessed by comparing the heliostat slew times from standby position to the receiver for the different aiming strategies. Increased slew times increased a proxy start-up time that reduced the simulated annual energy production. Results showed that spreading the radial aim points around the receiver to a distance of ~150 m or greater reduced the hazardous exposure times that the feather temperature exceeded the hazard metric of 160 degrees C. The hazardous exposure times were reduced by ~23% and 90% at a radial spread of aim points extending to 150 m and 250 m, respectively, but the simulated annual energy production decreased as a result of increased slew times. Single point-focus aiming strategies were also evaluated, but these strategies increased the exposure hazard relative to other aiming strategies.
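As a rough illustration of the kind of feather-temperature model described above, the sketch below solves a steady-state energy balance in which absorbed solar flux equals radiative plus convective losses; all property values and the bracketed solution range are assumptions for illustration and are not taken from the cited study.

```python
from scipy.optimize import brentq

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def feather_temperature(irradiance, absorptance=0.7, emittance=0.9,
                        h_conv=20.0, t_ambient=300.0):
    """Steady-state feather temperature (K) from a simple energy balance:
    absorbed flux = radiative loss + convective loss. Illustrative parameters."""
    def balance(t):
        return (absorptance * irradiance
                - emittance * SIGMA * (t**4 - t_ambient**4)
                - h_conv * (t - t_ambient))
    return brentq(balance, t_ambient, t_ambient + 2000.0)

# e.g. 50 kW/m^2 of concentrated flux in the standby airspace
print(f"{feather_temperature(50_000.0) - 273.15:.0f} degC")
```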
NASA Astrophysics Data System (ADS)
Bezawada, Rajesh; Uijt de Haag, Maarten
2010-04-01
This paper discusses the results of an initial evaluation study of hazard and integrity monitor functions for use with integrated alerting and notification. The Hazard and Integrity Monitor (HIM) (i) allocates information sources within the Integrated Intelligent Flight Deck (IIFD) to required functionality (like conflict detection and avoidance) and determines the required performance of these information sources as part of that function; (ii) monitors or evaluates the required performance of the individual information sources and performs consistency checks among various information sources; (iii) integrates the information to establish tracks of potential hazards that can be used for conflict probes or conflict prediction for various time horizons, including the 10, 5, 3, and <3 minutes used in our scenario; (iv) detects and assesses the class of the hazard and provides possible resolutions. The HIM monitors the operation-dependent performance parameters related to the potential hazards in a manner similar to Required Navigation Performance (RNP). Various HIM concepts have been implemented and evaluated using a previously developed sensor simulator/synthesizer. Within the simulation framework, various inputs to the IIFD and its subsystems are simulated, synthesized from actual collected data, or played back from actual flight test sensor data. The framework and HIM functions are implemented in Simulink®, a modeling language developed by The MathWorks™. This modeling language allows for test and evaluation of various sensor and communication link configurations as well as the inclusion of feedback from the pilot on the performance of the aircraft.
NASA Astrophysics Data System (ADS)
Egozcue, J. J.; Pawlowsky-Glahn, V.; Ortego, M. I.
2005-03-01
Standard practice of wave-height hazard analysis often pays little attention to the uncertainty of assessed return periods and occurrence probabilities. This fact favors the opinion that, when large events happen, the hazard assessment should change accordingly. However, the uncertainty of the hazard estimates is normally able to hide the effect of those large events. This is illustrated using data from the Mediterranean coast of Spain, where recent years have been extremely disastrous. Thus, it is possible to compare the hazard assessment based on data previous to those years with the analysis including them. With our approach, no significant change is detected when the statistical uncertainty is taken into account. The hazard analysis is carried out with a standard model. Time-occurrence of events is assumed Poisson distributed. The wave-height of each event is modelled as a random variable whose upper tail follows a Generalized Pareto Distribution (GPD). Moreover, wave-heights are assumed independent from event to event and also independent of their occurrence in time. A threshold for excesses is assessed empirically. The other three parameters (Poisson rate, shape and scale parameters of the GPD) are jointly estimated using Bayes' theorem. The prior distribution accounts for physical features of ocean waves in the Mediterranean sea and experience with these phenomena. The posterior distribution of the parameters makes it possible to obtain posterior distributions of derived quantities such as occurrence probabilities and return periods. Predictive distributions are also available. Computations are carried out using the program BGPE v2.0.
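To make the Poisson-GPD construction above concrete, the sketch below turns a posterior point estimate of the three parameters into return periods for a few wave heights; the rate, threshold and GPD parameters are invented for illustration and are not the paper's estimates.

```python
from scipy.stats import genpareto

# Illustrative values only: annual Poisson rate of threshold exceedances and
# GPD shape/scale for the wave-height excesses above the threshold.
rate = 4.2          # events per year exceeding the threshold
threshold = 3.0     # m
shape, scale = 0.1, 0.8

def return_period(h):
    """Mean return period (years) of wave height h under the Poisson-GPD model."""
    p_exceed = genpareto.sf(h - threshold, c=shape, scale=scale)  # P(excess > h - u)
    annual_rate = rate * p_exceed
    return 1.0 / annual_rate

for h in (5.0, 7.0, 9.0):
    print(f"H = {h:.1f} m  ->  return period ~ {return_period(h):.0f} years")
```

In the Bayesian setting of the abstract one would evaluate this for each posterior draw of (rate, shape, scale) and report the resulting distribution of return periods rather than a single value.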
NASA Astrophysics Data System (ADS)
Tsereteli, N. S.; Akkar, S.; Askan, A.; Varazanashvili, O.; Adamia, S.; Chkhitunidze, M.
2012-12-01
The country of Georgia is located between Russia and Turkey. The main morphological units of Georgia are the mountain ranges of the Greater and Lesser Caucasus separated by the Black Sea-Rioni and Kura (Mtkvari)-South Caspian intermountain troughs. Recent geodynamics of Georgia and the adjacent territories of the Black Sea-Caspian Sea region as a whole are determined by its position between the still-converging Eurasian and Africa-Arabian plates. This has caused moderate seismicity in the region. However, the risk resulting from these earthquakes is considerably high, as events during the last two decades have shown. Seismic hazard and risk assessment is a major research topic in various recent international and national projects. Despite current efforts, regional seismic hazard assessment remains a major problem. Georgia is one of the partners of the ongoing regional project EMME (Earthquake Model for the Middle East region). The main objective of EMME is the uniform calculation of earthquake hazard to the highest standards. One approach used in the project is probabilistic seismic hazard assessment (PSHA). In this study, we present preliminary PSHA results for Georgia within this project, attempting to close gaps in steps such as the determination of seismic sources, the selection or derivation of ground motion prediction equations, and the estimation of the maximum magnitude Mmax. Seismic sources (SS) were obtained on the basis of structural geology, seismicity parameters and seismotectonics. New SS have been developed for Georgia and the adjacent region. Each zone was defined with the following parameters: magnitude-frequency parameters, maximum magnitude, and depth distribution, as well as modern dynamical characteristics widely used for complex processes. As the ground motion dataset is by itself insufficient to derive a ground motion prediction model for Georgia, two approaches were taken in defining ground motions. First, a modern procedure for selecting and ranking candidate ground-motion prediction equations (GMPEs) against the available ground motion dataset was applied (Scherbaum et al. 2004, 2009; Cotton et al. 2006; Kale and Akkar, 2012). Second, the hybrid-empirical method proposed by Campbell (2003) was used. In the host-to-target simulations, Turkey and Iran were used as the host regions and Georgia as the target region. GMPEs for the Racha and Javakhety regions in Georgia are derived by scaling the pre-determined GMPEs with the computed scaling coefficients. Finally, PSH maps were calculated for Georgia, showing peak ground acceleration and spectral accelerations at 0, 0.2, 1, 2 and 4 s.
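For readers unfamiliar with the PSHA machinery referred to above, the sketch below computes a toy hazard curve for a single source by combining a bounded Gutenberg-Richter recurrence model with a lognormal GMPE at a fixed distance; every coefficient is invented for illustration and none comes from the EMME project or the cited GMPEs.

```python
import numpy as np
from scipy.stats import norm

# Toy single-source PSHA: bounded Gutenberg-Richter recurrence + lognormal GMPE.
m_grid = np.arange(5.0, 7.55, 0.1)
b, m_min, m_max = 1.0, 5.0, 7.5
rate_mmin = 0.2                      # assumed annual rate of M >= m_min

# Incremental annual rates from the bounded G-R magnitude distribution.
beta = b * np.log(10.0)
cdf = (1 - np.exp(-beta * (m_grid - m_min))) / (1 - np.exp(-beta * (m_max - m_min)))
inc_rate = rate_mmin * np.diff(np.concatenate(([0.0], cdf)))

def gmpe_ln_pga(m, r_km=25.0):
    """Median ln(PGA in g) at a fixed distance; toy coefficients, not a real GMPE."""
    return -3.5 + 0.9 * m - 1.1 * np.log(r_km)

sigma_ln = 0.6
for a in (0.05, 0.1, 0.2, 0.4):
    p_exceed = norm.sf((np.log(a) - gmpe_ln_pga(m_grid)) / sigma_ln)
    annual_rate = np.sum(inc_rate * p_exceed)
    print(f"PGA > {a:.2f} g : annual exceedance rate = {annual_rate:.4f}")
```

A full PSHA additionally integrates over source-to-site distance, multiple sources, and the logic tree of GMPEs and recurrence parameters; the structure of the calculation is the same.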
Local soil effects on the Ground Motion Prediction model for the Racha region in Georgia
NASA Astrophysics Data System (ADS)
Jorjiashvili, N.; Shengelia, I.; Otinashvili, M.; Tvaliashvili, A.
2016-12-01
The Caucasus is a region of numerous natural hazards and ensuing disasters. Analysis of losses from past disasters indicates that the most catastrophic events in the region have historically been strong earthquakes. Estimation of expected ground motion is a fundamental component of earthquake hazard assessment. Peak ground acceleration was selected for the analysis because it is the parameter most commonly used in attenuation relations and provides useful information for seismic hazard assessment. Site ground conditions are one of the most important factors influencing earthquake records and are the main issue of this study, because the same earthquake recorded at the same distance may cause different damage depending on ground conditions. Earthquake records were selected for the Racha region in Georgia, which has the highest seismic activity in the region. New GMP models were then obtained from new digital data recorded in the same area. After removing the site effect, earthquake records corresponding to a rock site were obtained. Thus, two GMP models were derived: one for the ground surface and the other for the rock site. Finally, the two models were compared in order to analyse the influence of local soil conditions on the GMP model.
NASA Astrophysics Data System (ADS)
Cuttler, S. W.; Love, J. J.; Swidinsky, A.
2017-12-01
Geomagnetic field data obtained through the INTERMAGNET program are convolved with four validated EarthScope USArray impedances to estimate the geoelectric variations throughout the duration of a geomagnetic storm. A four-day geomagnetic storm beginning on June 22, 2016 was recorded at the Brandon (BRD), Manitoba, and Fredericksburg (FRD), Virginia, magnetic observatories. The two impedance tensors corresponding to each magnetic observatory produce extremely different responses, despite being in close geographical proximity. Estimated time series of the geoelectric field throughout the duration of the geomagnetic storm were calculated, providing an understanding of how the geoelectric field differs across small geographic distances within the same geomagnetic hazard zones derived from prior geomagnetic hazard assessments. We show that the geoelectric response of two sites within 200 km of one another can differ by up to two orders of magnitude (4245 mV/km at one location and 38 mV/km at another location 125 km away). In addition, we compare these results with estimations of the geoelectric field generated from synthetic 1-dimensional resistivity models commonly used to represent large geographic regions when assessing geomagnetically induced current (GIC) hazards. This comparison shows that estimations of the geoelectric field from these models differ greatly from estimations produced from EarthScope USArray sites (1205 mV/km in the 1D case and 4245 mV/km in the 3D case in one example). This study demonstrates that the application of uniform 1-dimensional resistivity models of the subsurface to wide geographic regions is insufficient to predict the geoelectric hazard at a given location. Furthermore, an evaluation of the 3-dimensional resistivity distribution at a given location is necessary to produce a reliable estimation of how the geoelectric field evolves over the course of a geomagnetic storm.
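A minimal sketch of the convolution step described above, assuming a single scalar surface impedance applied in the frequency domain (the full treatment uses the 3-D impedance tensor relating both horizontal components); the half-space resistivity, sampling interval and synthetic storm signal are all assumptions.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

def geoelectric_field(b_nt, dt, impedance):
    """Estimate one horizontal E component (mV/km) from the orthogonal B component.

    b_nt      : magnetic variation time series in nT (1-D array)
    dt        : sampling interval in seconds
    impedance : callable giving a scalar surface impedance Z(f) in ohms
    """
    b = np.asarray(b_nt, dtype=float) * 1e-9              # nT -> T
    B = np.fft.rfft(b)
    f = np.fft.rfftfreq(b.size, d=dt)
    Z = np.array([impedance(fi) if fi > 0 else 0.0 for fi in f])
    e = np.fft.irfft(Z * B / MU0, n=b.size)               # E = Z * H, H = B / mu0 (V/m)
    return e * 1e6                                         # V/m -> mV/km

# Half-space impedance Z = sqrt(i*2*pi*f*mu0*rho) for an assumed 100 ohm-m Earth.
halfspace = lambda f, rho=100.0: np.sqrt(1j * 2 * np.pi * f * MU0 * rho)

t = np.arange(0, 3600, 10.0)
storm_b = 200.0 * np.sin(2 * np.pi * t / 600.0)            # synthetic 10-min pulsation
print(f"{np.abs(geoelectric_field(storm_b, 10.0, halfspace)).max():.0f} mV/km")
```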
DOE Office of Scientific and Technical Information (OSTI.GOV)
Depeursinge, Adrien, E-mail: adrien.depeursinge@hevs.ch; Yanagawa, Masahiro; Leung, Ann N.
Purpose: To investigate the importance of presurgical computed tomography (CT) intensity and texture information from ground-glass opacities (GGO) and solid nodule components for the prediction of adenocarcinoma recurrence. Methods: For this study, 101 patients with surgically resected stage I adenocarcinoma were selected. During the follow-up period, 17 patients had disease recurrence with six associated cancer-related deaths. GGO and solid tumor components were delineated on presurgical CT scans by a radiologist. Computational texture models of GGO and solid regions were built using linear combinations of steerable Riesz wavelets learned with linear support vector machines (SVMs). Unlike other traditional texture attributes, the proposed texture models are designed to encode local image scales and directions that are specific to GGO and solid tissue. The responses of the locally steered models were used as texture attributes and compared to the responses of unaligned Riesz wavelets. The texture attributes were combined with CT intensities to predict tumor recurrence and patient hazard according to disease-free survival (DFS) time. Two families of predictive models were compared: LASSO and SVMs, and their survival counterparts: Cox-LASSO and survival SVMs. Results: The best-performing predictive model of patient hazard was associated with a concordance index (C-index) of 0.81 ± 0.02 and was based on the combination of the steered models and CT intensities with survival SVMs. The same feature group and the LASSO model yielded the highest area under the receiver operating characteristic curve (AUC) of 0.8 ± 0.01 for predicting tumor recurrence, although no statistically significant difference was found when compared to using intensity features solely. For all models, the performance was found to be significantly higher when image attributes were based on the solid components solely versus using the entire tumors (p < 3.08 × 10{sup −5}). Conclusions: This study constitutes a novel perspective on how to interpret imaging information from CT examinations by suggesting that most of the information related to adenocarcinoma aggressiveness is related to the intensity and morphological properties of solid components of the tumor. The prediction of adenocarcinoma relapse was found to have low specificity but very high sensitivity. Our results could be useful in clinical practice to identify patients for which no recurrence is expected with a very high confidence using a presurgical CT scan only. It also provided an accurate estimation of the risk of recurrence after a given duration t from surgical resection (i.e., C-index = 0.81 ± 0.02).
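Since the concordance index is the headline metric in this and several of the survival-modelling abstracts in this collection, here is a minimal O(n²) sketch of Harrell's C-index on hypothetical disease-free-survival data; the times, event flags and risk scores are invented for illustration.

```python
import numpy as np

def concordance_index(times, events, risk_scores):
    """Harrell's C-index: the fraction of usable pairs in which the subject with
    the higher predicted risk experiences the event earlier."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=bool)
    risk = np.asarray(risk_scores, dtype=float)
    concordant, usable = 0.0, 0
    for i in range(len(times)):
        if not events[i]:
            continue                       # pairs are anchored on an observed event
        for j in range(len(times)):
            if times[j] > times[i]:        # j outlived the event time of i
                usable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / usable

# Hypothetical DFS times (months), recurrence indicators and model risk scores.
print(concordance_index([12, 30, 45, 60], [1, 1, 0, 0], [0.9, 0.6, 0.4, 0.2]))
```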
EPA is developing methods for utilizing computational chemistry, high-throughput screening (HTS) and various toxicogenomic technologies to predict potential for toxicity and prioritize limited testing resources towards chemicals that likely represent the greatest hazard to human ...
Set-up and validation of a Delft-FEWS based coastal hazard forecasting system
NASA Astrophysics Data System (ADS)
Valchev, Nikolay; Eftimova, Petya; Andreeva, Nataliya
2017-04-01
European coasts are increasingly threatened by hazards related to low-probability, high-impact hydro-meteorological events. Uncertainties in hazard prediction and in the capacity to cope with their impact lie in both future storm patterns and increasing coastal development. Therefore, adaptation to future conditions requires a re-evaluation of coastal disaster risk reduction (DRR) strategies and the introduction of a more efficient mix of prevention, mitigation and preparedness measures. This presumes that the development of tools that can manage the complex process of merging data and models and generate products on the current and expected hydro- and morphodynamic states of the coasts, such as a forecasting system for flooding and erosion hazards at vulnerable coastal locations (hotspots), is of vital importance. The output of such a system can be of utmost value to coastal stakeholders and the entire coastal community. In response to these challenges, Delft-FEWS provides a state-of-the-art framework for the implementation of such a system, with vast capabilities to trigger the early warning process. In addition, this framework is highly customizable to the specific requirements of any individual coastal hotspot. Since its release, many Delft-FEWS based forecasting systems for inland flooding have been developed, but only a limited number of coastal applications have been implemented. In this paper, the set-up of a Delft-FEWS based forecasting system for Varna Bay (Bulgaria) and a coastal hotspot comprising a sandy beach and port infrastructure is presented. It is implemented in the frame of the RISC-KIT project (Resilience-Increasing Strategies for Coasts - toolKIT). The system output generated in hindcast mode is validated with available observations of surge levels, wave and morphodynamic parameters for a sequence of three short-duration and relatively weak storm events that occurred during February 4-12, 2015. Overall, the models' performance is considered very good, and the results obtained are quite promising for reliable prediction of both boundary conditions and coastal hazards, giving a good basis for estimating onshore impact.
Experimental demonstration of a two-phase population extinction hazard
Drake, John M.; Shapiro, Jeff; Griffen, Blaine D.
2011-01-01
Population extinction is a fundamental biological process with applications to ecology, epidemiology, immunology, conservation biology and genetics. Although a monotonic relationship between initial population size and mean extinction time is predicted by virtually all theoretical models, attempts at empirical demonstration have been equivocal. We suggest that this anomaly is best explained with reference to the transient properties of ensembles of populations. Specifically, we submit that under experimental conditions, many populations escape their initially vulnerable state to reach quasi-stationarity, where the effects of initial conditions are erased. Thus, populations initialized far from quasi-stationarity may be exposed to a two-phase extinction hazard. An empirical prediction of this theory is that a Cox proportional hazards regression model fit to the observed survival time distribution of a group of populations will violate the proportional hazards assumption early in the experiment, but not at later times. We report results of two experiments with the cladoceran zooplankton Daphnia magna designed to exhibit this phenomenon. In one experiment, habitat size was also varied. Statistical analysis showed that in one of these experiments a transition occurred such that very early in the experiment there existed a transient phase during which the extinction hazard was primarily owing to the initial population size, and that this was gradually replaced by a more stable quasi-stationary phase. In the second experiment, only habitat size unambiguously displayed an effect. Analysis of data pooled from both experiments suggests that the overall extinction time distribution in this system results from the mixture of extinctions during the initial rapid phase, during which the effects of initial population size can be considerable, and a longer quasi-stationary phase, during which only habitat size has an effect. These are the first results, to our knowledge, of a two-phase population extinction process. PMID:21429907
Bubble generation during transformer overload
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oommen, T.V.
1990-03-01
Bubble generation in transformers has been demonstrated under certain overload conditions. The release of large quantities of bubbles would pose a dielectric breakdown hazard. A bubble prediction model developed under EPRI Project 1289-4 attempts to predict the bubble evolution temperature under different overload conditions. This report details a verification study undertaken to confirm the validity of the above model using coil structures subjected to overload conditions. The test variables included moisture in paper insulation, gas content in oil, and the type of oil preservation system. Two aged coils were also tested. The results indicated that the observed bubble temperatures were close to the predicted temperatures for models with low initial gas content in the oil. The predicted temperatures were significantly lower than the observed temperatures for models with high gas content. Some explanations are provided for the anomalous behavior at high gas levels in oil. It is suggested that the dissolved gas content is not a significant factor in bubble evolution. The dominant factor in bubble evolution appears to be the water vapor pressure which must reach critical levels before bubbles can be released. Further study is needed to make a meaningful revision of the bubble prediction model. 8 refs., 13 figs., 11 tabs.
Modeling Cable and Guide Channel Interaction in a High-Strength Cable-Driven Continuum Manipulator
Moses, Matthew S.; Murphy, Ryan J.; Kutzer, Michael D. M.; Armand, Mehran
2016-01-01
This paper presents several mechanical models of a high-strength cable-driven dexterous manipulator designed for surgical procedures. A stiffness model is presented that distinguishes between contributions from the cables and the backbone. A physics-based model incorporating cable friction is developed and its predictions are compared with experimental data. The data show that under high tension and high curvature, the shape of the manipulator deviates significantly from a circular arc. However, simple parametric models can fit the shape with good accuracy. The motivating application for this study is to develop a model so that shape can be predicted using easily measured quantities such as tension, so that real-time navigation may be performed, especially in minimally-invasive surgical procedures, while reducing the need for hazardous imaging methods such as fluoroscopy. PMID:27818607
NASA Astrophysics Data System (ADS)
Iverson, R. M.
2015-12-01
Episodic landslides and debris flows play a key role in sculpting many steep landscapes, and they also pose significant natural hazards. Field evidence, laboratory experiments, and theoretical analyses show that variations in the quantity, speed, and distance of sediment transport by landslides and debris flows can depend strongly on nuanced differences in initial conditions. Moreover, initial conditions themselves can be strongly dependent on the geological legacy of prior events. The scope of these dependencies is revealed by the results of landslide dynamics experiments [Iverson et al., Science, 2000], debris-flow erosion experiments [Iverson et al., Nature Geosci., 2011], and numerical simulations of the highly destructive 2014 Oso, Washington, landslide [Iverson et al., Earth Planet. Sci. Let., 2015]. In each of these cases, feedbacks between basal sediment deformation and pore-pressure generation cause the speed and distance of sediment transport to be very sensitive to subtle differences in the ambient sediment porosity and water content. On the other hand, the onset of most landslides and debris flows depends largely on pore-water pressure distributions and only indirectly on sediment porosity and water content. Thus, even if perfect predictions of the locations and timing of landslides and debris flows were available, the dynamics of the events - and their consequent hazards and sediment transport - would be difficult to predict. This difficulty is a manifestation of the nonlinear physics involved, rather than of poor understanding of those physics. Consequently, physically based models for assessing the hazards and sediment transport due to landslides and debris flows must take into account both evolving nonlinear dynamics and inherent uncertainties about initial conditions. By contrast, landscape evolution models that use prescribed algebraic formulas to represent sediment transport by landslides and debris flows lack a sound physical basis.
NASA Astrophysics Data System (ADS)
Arnaud, G.; Krien, Y.; Zahibo, N.; Dudon, B.
2017-12-01
Coastal hazards are among the most worrying threats of our time. In a context of climate change coupled with a large population increase, tropical areas could be the most exposed zones of the globe. In such circumstances, understanding the underlying processes can help to better predict storm surges and the associated global risks. Here we present preliminary partial results from a multidisciplinary project, funded by the European Regional Development Fund, focused on the effects of climate change on the coastal threat in the French West Indies. The study aims to provide a coastal hazard assessment based on hurricane surge and tsunami modeling, including several aspects of climate change that can affect hazards, such as sea level rise, crustal subsidence/uplift, and coastline changes. Several tsunami scenarios, including tele-tsunamis, have been simulated to cover a large range of tsunami hazards. Hurricane surge levels have been calculated using a large number of synthetic hurricanes covering the current and forecasted climate over the tropical Atlantic Ocean. This hazard assessment will later be coupled with the stakes (elements at risk) assessed over the territory to provide risk maps.
Wu, Shanshan; Kong, Yuanyuan; Piao, Hongxin; Jiang, Wei; Xie, Wen; Chen, Yongpeng; Lu, Lungen; Ma, Anlin; Xie, Shibin; Ding, Huiguo; Shang, Jia; Zhang, Xuqing; Feng, Bo; Han, Tao; Xu, Xiaoyuan; Huo, Lijuan; Cheng, Jilin; Li, Hai; Wu, Xiaoning; Zhou, Jialing; Sun, Yameng; Ou, Xiaojuan; Zhang, Hui; You, Hong; Jia, Jidong
2018-06-01
It is unclear whether liver stiffness measurement (LSM) dynamic changes after anti-HBV treatment could predict the risk of liver-related events (LREs), particularly in patients with HBV-related compensated cirrhosis. Treatment-naïve patients with HBV-related compensated cirrhosis were enrolled. All patients were under entecavir-based antiviral therapy, and followed up every 26 weeks for 2 years. The association between LSM and LREs was analysed by Cox proportional hazard model and Harrell C-index analysis. A total of 438 patients were included in the study. At the follow-up of 104 weeks, LREs developed in 33/438 (7.8%) patients, including 16 episodes of decompensation, 18 HCC and 3 deaths. The median LSM remained high from 20.9, 18.6, 20.4 to 20.3 kPa at week 0, 26, 52 and 78 among patients with LREs, whereas the LSM decreased from 17.8, 12.3, 10.6 to 10.2 kPa in patients without LREs respectively. Percentage changes of LSM at 26 weeks from baseline were significantly associated with LREs (excluding 11 cases occurred within the first 26 weeks), with a crude hazard ratio of 2.94 (95% CI: 1.73-5.00) and an albumin-adjusted hazard ratio of 2.47 (95% CI: 1.49-4.11). The Harrell C-index of these 2 models for predicting 2-year LREs were 0.68 (95% CI: 0.56-0.80) and 0.75 (95% CI: 0.65-0.85) respectively. Nomograms were developed to identify individuals at high risk for point-of-care application. Dynamic changes of LSM alone, or combined with baseline albumin, could predict LREs in patients with HBV-related compensated cirrhosis during antiviral therapy. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
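A minimal sketch of the kind of Cox regression described above, relating the percentage LSM change and baseline albumin to liver-related events with the lifelines library; the eight-row data frame is entirely hypothetical and only illustrates the structure of the analysis, not the study's data.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical cohort table: follow-up time (weeks), LRE indicator,
# percentage LSM change at week 26 and baseline albumin (g/L).
df = pd.DataFrame({
    "weeks":      [104, 104, 78, 104, 52, 104, 104, 90],
    "lre":        [0,   0,   1,  0,   1,  0,   0,   1],
    "lsm_change": [-35, 15, 10, -20, 25, -30, -45, 5],
    "albumin":    [42,  36, 33, 40,  31, 43,  45,  38],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="weeks", event_col="lre")
cph.print_summary()                                  # hazard ratios per covariate
print("Harrell C-index:", cph.concordance_index_)
```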
NASA Astrophysics Data System (ADS)
Kallepalli, Akhil; Kakani, Nageswara Rao; James, David B.
2016-10-01
Coastal regions, especially river deltas, are highly resourceful and hence densely populated; however, these extremely low-lying lands are vulnerable to rising sea levels due to global warming, threatening life and property in these regions. Recent IPCC (2013) predictions of 26-82 cm of global sea level rise are now considered conservative, as subsequent investigations, such as one by the Met Office, UK, indicated a rise of about 190 cm, which would displace 10% of the world's population living within 10 m above sea level. Therefore, predictive models showing the hazard line are necessary for efficient coastal zone management. Remote sensing and GIS technologies form the mainstay of such predictive models of coastal retreat and inundation under future sea-level rise. This study is an attempt to estimate the varying trends along the Krishna-Godavari (K-G) delta region. Detailed maps showing various coastal landforms in the K-G delta region were prepared using IRS-P6 LISS 3 images. The rate of shoreline shift during a 31-year period along different sectors of the 330 km long K-G delta coast was estimated using Landsat-2 and IRS-P6 LISS 3 images between 1977 and 2008. With reference to a baseline selected along an inland position, the End Point Rate (EPR), Shoreline Change Envelope (SCE) and Net Shoreline Movement (NSM) were calculated using the GIS-based Digital Shoreline Analysis System (DSAS). The results showed that the shoreline migrated landward up to a maximum distance of 3.13 km, resulting in a net loss of about 42.10 km² during this 31-year period. Further, considering the nature of landforms and the EPR, the future hazard line is predicted for the area, which also indicates a net erosion of about 57.68 km² along the K-G delta coast by 2050 AD.
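The NSM and EPR statistics named above are simple transect-wise quantities; a minimal sketch of their calculation follows, with baseline-to-shoreline distances that are invented for illustration rather than taken from the study.

```python
import numpy as np

def shoreline_change(dist_1977_m, dist_2008_m, years=31.0):
    """Net Shoreline Movement (NSM) and End Point Rate (EPR) along transects.

    Distances are measured from a common inland baseline; positive NSM means
    seaward advance, negative means landward retreat.
    """
    nsm = np.asarray(dist_2008_m, dtype=float) - np.asarray(dist_1977_m, dtype=float)
    epr = nsm / years                                   # m per year
    return nsm, epr

nsm, epr = shoreline_change([1200.0, 950.0, 2100.0], [900.0, 1010.0, 1850.0])
for i, (n, e) in enumerate(zip(nsm, epr), start=1):
    print(f"transect {i}: NSM = {n:+.0f} m, EPR = {e:+.1f} m/yr")
```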
NASA Technical Reports Server (NTRS)
Schaffner, Philip R.; Daniels, Taumi S.; West, Leanne L.; Gimmestad, Gary G.; Lane, Sarah E.; Burdette, Edward M.; Smith, William L.; Kireev, Stanislav; Cornman, Larry; Sharman, Robert D.
2012-01-01
The Forward-Looking Interferometer (FLI) is an airborne sensor concept for detection and estimation of potential atmospheric hazards to aircraft. The FLI concept is based on high-resolution Infrared Fourier Transform Spectrometry technologies that have been developed for satellite remote sensing. The FLI is being evaluated for its potential to address multiple hazards, during all phases of flight, including clear air turbulence, volcanic ash, wake vortices, low slant range visibility, dry wind shear, and icing. In addition, the FLI is being evaluated for its potential to detect hazardous runway conditions during landing, such as wet or icy asphalt or concrete. The validation of model-based instrument and hazard simulation results is accomplished by comparing predicted performance against empirical data. Mountain lee wave data collected in the previous FLI project showed a damped, periodic mountain wave structure. The wave data itself will be of use in forecast and nowcast turbulence products such as the Graphical Turbulence Guidance and Graphical Turbulence Guidance Nowcast products. Determining how turbulence hazard estimates can be derived from FLI measurements will require further investigation.
NASA Astrophysics Data System (ADS)
Guarnaccia, Claudio; Quartieri, Joseph; Tepedino, Carmine
2017-06-01
The dangerous effect of noise on human health is well known. Both the auditory and non-auditory effects are widely documented in the literature and represent an important hazard in human activities. Particular care is devoted to road traffic noise, since it grows with the growth of residential, industrial and commercial areas. For these reasons, it is important to develop effective models able to predict the noise in a certain area. In this paper, a hybrid predictive model is presented. The model is based on the mixing of two different approaches: Time Series Analysis (TSA) and Artificial Neural Networks (ANN). The TSA model is based on the evaluation of trend and seasonality in the data, while the ANN model is based on the capacity of the network to "learn" the behavior of the data. The mixed approach consists of estimating noise levels by means of TSA and, once the differences (residuals) between the TSA estimates and the observed data have been calculated, training an ANN on the residuals. The hybrid model exhibits interesting features and results, with performance varying significantly with the number of steps ahead in the prediction. The best predictive results are achieved when predicting one step ahead. A 7-day prediction can nevertheless be performed with a slightly greater error, offering a larger prediction range than the single-day-ahead model.
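A minimal sketch of the hybrid TSA-plus-ANN idea described above, on synthetic hourly noise data: a trend-plus-seasonality component is fitted by least squares, a small neural network is trained on the residuals, and the one-step-ahead forecast is the sum of the two. The data, lag length and network size are all assumptions; the paper's own TSA and ANN specifications are not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic hourly noise levels (dBA) with a daily cycle plus random fluctuations.
hours = np.arange(24 * 60)
noise = 65 + 5 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1.5, hours.size)

# Step 1 (TSA part): linear trend plus daily seasonality fitted by least squares.
design = np.column_stack([hours, np.sin(2 * np.pi * hours / 24),
                          np.cos(2 * np.pi * hours / 24),
                          np.ones_like(hours, dtype=float)])
coef, *_ = np.linalg.lstsq(design, noise, rcond=None)
residuals = noise - design @ coef

# Step 2 (ANN part): a small network learns the next residual from the last 24 lags.
lags = 24
X = np.column_stack([residuals[i:i - lags] for i in range(lags)])
y = residuals[lags:]
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
ann.fit(X, y)

# Hybrid one-step-ahead forecast = TSA component at the next hour + ANN correction.
nxt = hours[-1] + 1
tsa_next = coef @ np.array([nxt, np.sin(2 * np.pi * nxt / 24),
                            np.cos(2 * np.pi * nxt / 24), 1.0])
ann_next = ann.predict(residuals[-lags:].reshape(1, -1))[0]
print(f"one-step-ahead forecast: {tsa_next + ann_next:.1f} dBA")
```

Multi-step forecasts (e.g. 7 days ahead) can be produced by feeding the predicted residuals back into the lag vector, at the cost of accumulating error, which matches the trade-off reported in the abstract.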
Lantz, Paula M.; Golberstein, Ezra; House, James S.; Morenoff, Jeffrey D.
2012-01-01
Many demographic, socioeconomic, and behavioral risk factors predict mortality in the United States. However, very few population-based longitudinal studies are able to investigate simultaneously the impact of a variety of social factors on mortality. We investigated the degree to which demographic characteristics, socioeconomic variables and major health risk factors were associated with mortality in a nationally representative sample of 3,617 U.S. adults from 1986 to 2005, using data from the 4 waves of the Americans' Changing Lives study. Cox proportional hazard models with time-varying covariates were employed to predict all-cause mortality verified through the National Death Index and death certificate review. The results revealed that low educational attainment was not associated with mortality when income and health risk behaviors were included in the model. The association of low income with mortality remained after controlling for major behavioral risks. Compared to those in the "normal" weight category, neither overweight nor obesity was significantly associated with the risk of mortality. Among adults age 55 and older at baseline, the risk of mortality was actually reduced for those who were overweight (hazard rate ratio=0.83, 95% C.I. = 0.71 – 0.98) and those who were obese (hazard rate ratio=0.68, 95% C.I. = 0.55 – 0.84), controlling for other health risk behaviors and health status. Having a low level of physical activity was a significant risk factor for mortality (hazard rate ratio=1.58, 95% C.I. = 1.20 – 2.07). The results from this national longitudinal study underscore the need for health policies and clinical interventions focusing on the social and behavioral determinants of health, with a particular focus on income security, smoking prevention/cessation, and physical activity. PMID:20226579
Skin autofluorescence and all-cause mortality in stage 3 CKD.
Fraser, Simon D S; Roderick, Paul J; McIntyre, Natasha J; Harris, Scott; McIntyre, Christopher W; Fluck, Richard J; Taal, Maarten W
2014-08-07
Novel markers may help to improve risk prediction in CKD. One potential candidate is tissue advanced glycation end product accumulation, a marker of cumulative metabolic stress, which can be assessed by a simple noninvasive measurement of skin autofluorescence. Skin autofluorescence correlates with higher risk of cardiovascular events and mortality in people with diabetes or people requiring RRT, but its role in earlier CKD has not been studied. A prospective cohort of 1741 people with CKD stage 3 was recruited from primary care between August 2008 and March 2010. Participants underwent medical history, clinical assessment, blood and urine sampling for biochemistry, and measurement of skin autofluorescence. Kaplan-Meier plots and multivariate Cox proportional hazards models were used to investigate associations between skin autofluorescence (categorical in quartiles) and all-cause mortality. In total, 1707 participants had skin autofluorescence measured; 170 (10%) participants died after a median of 3.6 years of follow-up. The most common cause of death was cardiovascular disease (41%). Higher skin autofluorescence was associated significantly with poorer survival (all-cause mortality, P<0.001) on Kaplan-Meier analysis. Univariate and age/sex-adjusted Cox proportional hazards models showed that the highest quartile of skin autofluorescence was associated with all-cause mortality (hazard ratio, 2.64; 95% confidence interval, 1.71 to 4.08; P<0.001 and hazard ratio, 1.84; 95% confidence interval, 1.18 to 2.86; P=0.003, respectively, compared with the lowest quartile). This association was not maintained after additional adjustment to include cardiovascular disease, diabetes, smoking, body mass index, eGFR, albuminuria, and hemoglobin. Skin autofluorescence was not independently associated with all-cause mortality in this study. Additional research is needed to clarify whether it has a role in risk prediction in CKD. Copyright © 2014 by the American Society of Nephrology.
NOAA-USGS Debris-Flow Warning System - Final Report
2005-01-01
Landslides and debris flows cause loss of life and millions of dollars in property damage annually in the United States (National Research Council, 2004). In an effort to reduce loss of life by debris flows, the National Oceanic and Atmospheric Administration's (NOAA) National Weather Service (NWS) and the U.S. Geological Survey (USGS) operated an experimental debris-flow prediction and warning system in the San Francisco Bay area from 1986 to 1995 that relied on forecasts and measurements of precipitation linked to empirical precipitation thresholds to predict the onset of rainfall-triggered debris flows. Since 1995, there have been substantial improvements in quantifying precipitation estimates and forecasts, development of better models for delineating landslide hazards, and advancements in geographic information technology that allow stronger spatial and temporal linkage between precipitation forecasts and hazard models. Unfortunately, there have also been several debris flows that have caused loss of life and property across the United States. Establishment of debris-flow warning systems in areas where linkages between rainfall amounts and debris-flow occurrence have been identified can help mitigate the hazards posed by these types of landslides. Development of a national warning system can help support the NOAA-USGS goal of issuing timely warnings of potential debris flows to the affected populace and civil authorities on a broader scale. This document presents the findings and recommendations of a joint NOAA-USGS Task Force that assessed the current state-of-the-art in precipitation forecasting and debris-flow hazard-assessment techniques. This report includes an assessment of the science and resources needed to establish a demonstration debris-flow warning project in recently burned areas of southern California and the necessary scientific advancements and resources associated with expanding such a warning system to unburned areas and, possibly, to a national scope.
Fish acute toxicity syndromes and their use in the QSAR approach to hazard assessment.
McKim, J M; Bradbury, S P; Niemi, G J
1987-01-01
Implementation of the Toxic Substances Control Act of 1977 creates the need to reliably establish testing priorities because laboratory resources are limited and the number of industrial chemicals requiring evaluation is overwhelming. The use of quantitative structure activity relationship (QSAR) models as rapid and predictive screening tools to select more potentially hazardous chemicals for in-depth laboratory evaluation has been proposed. Further implementation and refinement of quantitative structure-toxicity relationships in aquatic toxicology and hazard assessment requires the development of a "mode-of-action" database. With such a database, a qualitative structure-activity relationship can be formulated to assign the proper mode of action, and respective QSAR, to a given chemical structure. In this review, the development of fish acute toxicity syndromes (FATS), which are toxic-response sets based on various behavioral and physiological-biochemical measurements, and their projected use in the mode-of-action database are outlined. Using behavioral parameters monitored in the fathead minnow during acute toxicity testing, FATS associated with acetylcholinesterase (AChE) inhibitors and narcotics could be reliably predicted. However, compounds classified as oxidative phosphorylation uncouplers or stimulants could not be resolved. Refinement of this approach by using respiratory-cardiovascular responses in the rainbow trout, enabled FATS associated with AChE inhibitors, convulsants, narcotics, respiratory blockers, respiratory membrane irritants, and uncouplers to be correctly predicted. PMID:3297660
Integration of expert knowledge and uncertainty in natural risk assessment
NASA Astrophysics Data System (ADS)
Baruffini, Mirko; Jaboyedoff, Michel
2010-05-01
Natural hazards in alpine regions during recent decades, such as interruptions of the Swiss railway power supply and closures of the Gotthard highway, have increased awareness of infrastructure vulnerability in Switzerland and illustrate the potential impacts of failures on the performance of infrastructure systems. This calls for a high level of surveillance and protection along the transalpine lines. Traditional simulation models are only partially capable of predicting complex system behaviours, and the protection strategies designed and implemented on their basis cannot mitigate the full spectrum of risk consequences. They are costly, and maximal protection is most probably not economically feasible. In addition, quantitative risk assessment approaches such as fault tree analysis, event tree analysis and equivalent annual fatality analysis rely heavily on statistical information. Collecting sufficient data on which to base a statistical probability of risk is costly and, in many situations, such data do not exist; thus, expert knowledge, experience or engineering judgement can be exploited to estimate risk qualitatively. To overcome this lack of statistics, we used models based on expert knowledge to make qualitative predictions from linguistic appraisals, which are more expressive and natural in risk assessment. Fuzzy reasoning (FR) provides a mechanism of computing with words (Zadeh, 1965) for modelling qualitative human thought processes when analysing complex systems and decisions. Uncertainty in predicting the risk levels arises in such situations because no fully formalized knowledge is available. Another possibility is to use probabilities based on the triangular probability density function (T-PDF), which can follow the same flow chart as FR. We implemented the Swiss natural hazard recommendations with both FR and T-PDF-based probabilities in order to obtain hazard zoning and its uncertainties. We followed the same approach for each term of risk, i.e. hazard, vulnerability, elements at risk and exposure. This risk approach can be achieved through the comprehensive use of several artificial intelligence (AI) technologies, for example: (1) GIS techniques; (2) FR or T-PDF for qualitatively predicting risks; and (3) multi-criteria evaluation for analysing weak points. The main advantages of FR and T-PDF are the ability to express not-fully-formalized knowledge, easy knowledge representation and acquisition, and self-updatability. The results show that such an approach reveals quite wide zones of uncertainty. REFERENCES: Zadeh, L.A., 1965: Fuzzy Sets. Information and Control, 8:338-353.
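A minimal sketch of how a triangular membership (or T-PDF) shape can encode the linguistic appraisals mentioned above, with a simple fuzzy "AND" combining two risk terms; all support values and the combination rule are assumptions for illustration, not the Swiss recommendations themselves.

```python
import numpy as np

def triangular_membership(x, low, mode, high):
    """Triangular membership/density shape encoding an expert judgement such as
    'around 0.6, certainly between 0.4 and 0.9'."""
    x = np.asarray(x, dtype=float)
    left = (x - low) / (mode - low)
    right = (high - x) / (high - mode)
    return np.clip(np.minimum(left, right), 0.0, 1.0)

# Hypothetical expert appraisals of two risk terms on a 0-1 scale.
x = np.linspace(0, 1, 101)
hazard = triangular_membership(x, 0.3, 0.5, 0.8)
vulnerability = triangular_membership(x, 0.4, 0.6, 0.9)

# A simple fuzzy 'AND' (minimum) combines the terms; the width of the resulting
# support carries the uncertainty that a single crisp number would hide.
risk = np.minimum(hazard, vulnerability)
print("crisp risk estimate (centroid):", np.sum(x * risk) / np.sum(risk))
```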
Grewal, Jasmine; McKelvie, Robert S; Persson, Hans; Tait, Peter; Carlsson, Jonas; Swedberg, Karl; Ostergren, Jan; Lonn, Eva
2008-09-15
More than 40% of patients hospitalized with heart failure have preserved left ventricular ejection fraction (HF-PLVEF) and are at high risk for cardiovascular (CV) events. The purpose of this study was to determine the value of N-terminal pro-brain natriuretic peptide (NT-proBNP) and brain natriuretic peptide (BNP) in predicting CV outcomes in patients with HF-PLVEF. Participants with an ejection fraction >40% in the prospective CHARM Echocardiographic Substudy were included in this analysis. Plasma NT-proBNP levels were measured, and 2 cut-offs were selected prospectively at 300 pg/ml and 600 pg/ml. BNP cut-off was set at 100 pg/ml. Clinical characteristics were recorded, and systolic and diastolic function were evaluated by echocardiography. The primary substudy outcome was the composite of CV mortality, hospitalization for heart failure, and myocardial infarction or stroke. A total of 181 patients were included, and there were 17 primary CV events (9.4%) during a median follow-up time of 524 days. In a model including clinical characteristics, echocardiographic measures, and BNP or NT-proBNP, the composite CV event outcome was best predicted by NT-proBNP >300 pg/ml (hazard ratio 5.8, 95% confidence intervals [CI] 1.3 to 26.4, p = 0.02) and moderate or severe diastolic dysfunction on echocardiography. When NT-proBNP >600 pg/ml was used in the model, it was the sole independent predictor of primary CV events (hazard ratio 8.0, 95% CI 2.6 to 24.8, p = 0.0003) as was BNP >100 pg/ml (hazard ratio 3.1, 95% CI 1.2 to 8.2, p = 0.02) in the BNP model. In conclusion, both elevated NT-proBNP and BNP are strong independent predictors of clinical events in patients with HF-PLVEF.
Hsu, Chia-Yang; Liu, Po-Hong; Lee, Yun-Hsuan; Hsia, Cheng-Yuan; Huang, Yi-Hsiang; Lin, Han-Chieh; Chiou, Yi-You; Lee, Fa-Yauh; Huo, Teh-Ia
2015-01-01
Background and Aims The prognostic ability of α-fetoprotein (AFP) for patients with hepatocellular carcinoma (HCC) was examined by using different cutoff values. The optimal AFP cutoff level is still unclear. Methods A total of 2579 HCC patients were consecutively enrolled in Taiwan, where hepatitis B is the major etiology of chronic liver disease. Four frequently used AFP cutoff levels, 20, 200, 400, 1000 ng/mL, were investigated. One-to-one matched pairs between patients having AFP higher and lower than the cutoffs were selected by using the propensity model. The adjusted hazard ratios of survival difference were calculated with Cox proportional hazards model. Results Patients with a higher AFP level were associated with more severe cirrhosis, more frequent vascular invasion, higher tumor burden and poorer performance status (all p<0.0001). In the propensity model, 4 groups of paired patients were selected, and there was no difference found in the comparison of baseline characteristics (all p>0.05). Patients with AFP <20 ng/mL had significantly better long-term survival than patients with AFP ≧20 ng/mL (p<0.0001), and patients with AFP <400 ng/mL had significantly better overall outcome than patients with AFP ≧400 ng/mL (p = 0.0186). There was no difference of long-term survival between patients divided by AFP levels of 200 and 1000 ng/mL. The adjusted hazard ratios of AFP ≧20 ng/mL and AFP ≧400 ng/mL were 1.545 and 1.471 (95% confidence interval: 1.3–1.838 and 1.178–1.837), respectively. Conclusions This study shows the independently predictive ability of baseline serum AFP level in HCC patients. AFP levels of 20 and 400 ng/mL are considered feasible cutoffs to predict long-term outcome in unselected HCC patients. PMID:25738614
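A minimal sketch of the one-to-one propensity matching step used in the study above, assuming a logistic propensity model and greedy nearest-neighbour matching without replacement; the covariates, coefficients and sample are simulated for illustration and do not represent the Taiwanese cohort.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical baseline covariates (age, tumour size in cm, performance status)
# and a 'high AFP' indicator; the real study matched on many more characteristics.
n = 400
X = np.column_stack([rng.normal(60, 10, n),      # age
                     rng.normal(5, 2, n),        # tumour size
                     rng.integers(0, 3, n)])     # performance status
logit = 0.03 * (X[:, 0] - 60) + 0.2 * (X[:, 1] - 5)
high_afp = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Propensity of having a high AFP level given the baseline covariates.
ps = LogisticRegression(max_iter=1000).fit(X, high_afp).predict_proba(X)[:, 1]

# Greedy 1:1 nearest-neighbour matching on the propensity score, without replacement.
treated = np.where(high_afp == 1)[0]
controls = list(np.where(high_afp == 0)[0])
pairs = []
for t in treated:
    if not controls:
        break
    j = min(controls, key=lambda c: abs(ps[c] - ps[t]))
    pairs.append((t, j))
    controls.remove(j)

print(f"{len(pairs)} matched pairs, mean |propensity difference| = "
      f"{np.mean([abs(ps[a] - ps[b]) for a, b in pairs]):.4f}")
```

The survival comparison between the matched groups would then proceed with Kaplan-Meier curves and an adjusted Cox model, as in the abstract.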
Boonen, Annelies; Boone, Caroline; Albert, Adelin; Mielants, Herman
2018-05-01
The aim was to determine changes over time in work outcomes and investigate the predictive value of baseline personal and work-related factors on the evolution of work outcomes among employed patients with AS initiating etanercept. Employment status, absenteeism and presenteeism were assessed using the Work Productivity and Activity Impairment for AS questionnaire in a 24-month open-label, observational study (NCT01421303). The potential effect of baseline factors on work outcomes was analysed using predictive modelling (Cox regression and linear mixed models). After 24 months, 11/75 (14.7%) patients had permanently withdrawn from employment (seven because of AS). Absenteeism and presenteeism decreased significantly within 6 months of etanercept treatment and remained stable thereafter. Predictive modelling indicated that male sex (hazard ratio = 0.18; 95% CI: 0.04, 0.85), (log) number of working hours per week (hazard ratio = 0.13; 95% CI: 0.03, 0.51) and the possibility of developing skills (hazard ratio = 0.42; 95% CI: 0.19, 0.91) positively influenced time in employment. Over time, lower absenteeism was significantly associated with the quality of contact with colleagues [coefficient (s.e.): -0.35 (0.10)] and importance of the job for quality of life [-0.49 (0.17)], and higher absenteeism with current smoking [1.66 (0.44)] and change in job because of illness [1.51 (0.66)]. Over time, lower presenteeism was associated with male sex [-14.5 (2.64)], the possibility of postponing work [-6.60 (2.73)], quality of contact with colleagues [-2.04 (0.96)] and >50 workers in the company [-7.65 (2.76)], and higher presenteeism with manual profession [8.41 (2.72)]. Contextual factors influence work outcomes over time and should not be ignored when aiming to improve work outcomes in patients with AS. ClinicalTrials.gov, http://clinicaltrials.gov, NCT01421303.
Liu, Xiang; Peng, Yingwei; Tu, Dongsheng; Liang, Hua
2012-10-30
Survival data with a sizable cure fraction are commonly encountered in cancer research. The semiparametric proportional hazards cure model has been recently used to analyze such data. As seen in the analysis of data from a breast cancer study, a variable selection approach is needed to identify important factors in predicting the cure status and risk of breast cancer recurrence. However, no specific variable selection method for the cure model is available. In this paper, we present a variable selection approach with penalized likelihood for the cure model. The estimation can be implemented easily by combining the computational methods for penalized logistic regression and the penalized Cox proportional hazards models with the expectation-maximization algorithm. We illustrate the proposed approach on data from a breast cancer study. We conducted Monte Carlo simulations to evaluate the performance of the proposed method. We used and compared different penalty functions in the simulation studies. Copyright © 2012 John Wiley & Sons, Ltd.
Validation of attenuation models for ground motion applications in central and eastern North America
Pasyanos, Michael E.
2015-11-01
Recently developed attenuation models are incorporated into standard one-dimensional (1-D) ground motion prediction equations (GMPEs), effectively making them two-dimensional (2-D) and eliminating the need to create different GMPEs for an increasing number of sub-regions. The model is tested against a data set of over 10,000 recordings from 81 earthquakes in North America. The use of attenuation models in GMPEs improves our ability to fit observed ground motions and should be incorporated into future national hazard maps. The improvement is most significant at higher frequencies and longer distances which have a greater number of wave cycles. This has implications for the rare high-magnitude earthquakes, which produce potentially damaging ground motions over wide areas, and drive the seismic hazards. Furthermore, because the attenuation models can be created using weak ground motions, they could be developed for regions of low seismicity where empirical recordings of ground motions are uncommon and do not span the full range of magnitudes and distances.
Rios, Camilo; Motola-Kuba, Daniel; Matus-Santos, Juan; Villa, Antonio R; Moreno-Jimenez, Sergio
2016-01-01
Objective: A long-lasting concern has prevailed for the identification of predictive biomarkers for high-grade gliomas (HGGs) using MRI. However, a consensus of which imaging parameters assemble a significant survival model is still missing in the literature; we investigated the significant positive or negative contribution of several MR biomarkers in this tumour prognosis. Methods: A retrospective cohort of supratentorial HGGs [11 glioblastoma multiforme (GBM) and 17 anaplastic astrocytomas] included 28 patients (9 females and 19 males, respectively, with a mean age of 50.4 years, standard deviation: 16.28 years; range: 13–85 years). Oedema and viable tumour measurements were acquired using regions of interest in T1 weighted, T2 weighted, fluid-attenuated inversion recovery, apparent diffusion coefficient (ADC) and MR spectroscopy (MRS). We calculated Kaplan–Meier curves and obtained Cox's proportional hazards. Results: During the follow-up period (3–98 months), 17 deaths were recorded. The median survival time was 1.73 years (range, 0.287–8.947 years). Only 3 out of 20 covariates (choline-to-N-acetyl aspartate and lipids-lactate-to-creatine ratios and age) showed significance in explaining the variability in the survival hazards model; score test: χ2 (3) = 9.098, p = 0.028. Conclusion: MRS metabolites overcome volumetric parameters of peritumoral oedema and viable tumour, as well as tumour region ADC measurements. Specific MRS ratios (Cho/Naa, L-L/Cr) might be considered in a regular follow-up for these tumours. Advances in knowledge: Cho/Naa ratio is the strongest survival predictor with a log-hazard function of 2.672 in GBM. Low levels of lipids–lactate/Cr ratio represent up to a 41.6% reduction in the risk of death in GBM. PMID:27626830
NASA Astrophysics Data System (ADS)
Herrero, Andre; Spagnuolo, Elena; Akinci, Aybige; Pucci, Stefano
2016-04-01
In the present study we attempted to improve seismic hazard assessment by taking into account possible sources of epistemic uncertainty and the azimuthal variability of ground motions, which, at a particular site, is significantly influenced by the rupture mechanism and the rupture direction relative to the site. As a study area we selected the Marmara Region (Turkey), especially the city of Istanbul, which is characterized by one of the highest levels of seismic risk in Europe and the Mediterranean region. The seismic hazard in the city is mainly associated with two active fault segments located about 20-30 km south of Istanbul. We first proposed a methodology to incorporate new information, such as the nucleation point, into a probabilistic seismic hazard analysis (PSHA) framework. Secondly, we introduced information about those fault segments by focusing on the rupture characteristics that affect the azimuthal variation of the ground motion spatial distribution, i.e. the source directivity effect and its influence on the PSHA. An analytical model developed by Spudich and Chiou (2008) is used as a corrective factor that modifies the Next Generation Attenuation (NGA, Power et al. 2008) ground motion prediction equations (GMPEs) by introducing rupture-related parameters that are generally lumped together under the term directivity effect. We used the GMPEs derived by Abrahamson and Silva (2008) and Boore and Atkinson (2008); our results are given in terms of 10% probability of exceedance in 50 years (at several periods from 0.5 s to 10 s) on rock site conditions. The directivity correction contributes significantly to the percentage ratio between the seismic hazard computed with the directivity model and that from standard practice. In particular, we benefited from the dynamic simulations of a previous study (Aochi & Utrich, 2015), aimed at evaluating the seismic potential of the Marmara region, to derive a statistical distribution for the nucleation position. Our results suggest that accounting for rupture-related parameters in a PSHA using deterministic information from dynamic models is feasible and, in particular, that the use of a non-uniform statistical distribution for the nucleation position has serious consequences for the hazard assessment. Since the directivity effect is conditional on the nucleation position, the hazard map changes with the assumptions made. A worst-case scenario (both faults rupturing towards the city of Istanbul) predicts up to a 25% change relative to the standard formulation at 2 s, and the change increases at longer periods. This result differs substantially if a deterministically based nucleation position is assumed.
Rutter, Carolyn M; Knudsen, Amy B; Marsh, Tracey L; Doria-Rose, V Paul; Johnson, Eric; Pabiniak, Chester; Kuntz, Karen M; van Ballegooijen, Marjolein; Zauber, Ann G; Lansdorp-Vogelaar, Iris
2016-07-01
Microsimulation models synthesize evidence about disease processes and interventions, providing a method for predicting long-term benefits and harms of prevention, screening, and treatment strategies. Because models often require assumptions about unobservable processes, assessing a model's predictive accuracy is important. We validated 3 colorectal cancer (CRC) microsimulation models against outcomes from the United Kingdom Flexible Sigmoidoscopy Screening (UKFSS) Trial, a randomized controlled trial that examined the effectiveness of one-time flexible sigmoidoscopy screening to reduce CRC mortality. The models incorporate different assumptions about the time from adenoma initiation to development of preclinical and symptomatic CRC. Analyses compare model predictions to study estimates across a range of outcomes to provide insight into the accuracy of model assumptions. All 3 models accurately predicted the relative reduction in CRC mortality 10 years after screening (predicted hazard ratios, with 95% percentile intervals: 0.56 [0.44, 0.71], 0.63 [0.51, 0.75], 0.68 [0.53, 0.83]; estimated with 95% confidence interval: 0.56 [0.45, 0.69]). Two models with longer average preclinical duration accurately predicted the relative reduction in 10-year CRC incidence. Two models with longer mean sojourn time accurately predicted the number of screen-detected cancers. All 3 models predicted too many proximal adenomas among patients referred to colonoscopy. Model accuracy can only be established through external validation. Analyses such as these are therefore essential for any decision model. Results supported the assumptions that the average time from adenoma initiation to development of preclinical cancer is long (up to 25 years), and mean sojourn time is close to 4 years, suggesting the window for early detection and intervention by screening is relatively long. Variation in dwell time remains uncertain and could have important clinical and policy implications. © The Author(s) 2016.
Intelligent seismic risk mitigation system on structure building
NASA Astrophysics Data System (ADS)
Suryanita, R.; Maizir, H.; Yuniorto, E.; Jingga, H.
2018-01-01
Indonesia, located on the Pacific Ring of Fire, is one of the highest-risk seismic zones in the world. Strong ground motion can cause catastrophic collapse of buildings, leading to casualties and property damage. Therefore, it is imperative to properly design the structural response of buildings against seismic hazard. The seismic-resistant building design process requires structural analysis to be performed to obtain the necessary building responses. However, the structural analysis can be very difficult and time consuming. This study aims to predict the structural responses, including displacement, velocity, and acceleration, of a multi-storey building with a fixed floor plan using the Artificial Neural Network (ANN) method based on the 2010 Indonesian seismic hazard map. By varying the building height, soil condition, and seismic location across 47 cities in Indonesia, 6345 data sets were obtained and fed into the ANN model for the learning process. The trained ANN can predict the displacement, velocity, and acceleration responses with up to 96% prediction accuracy. The trained ANN architecture and weight factors were later used to build a simple tool in Visual Basic which provides the structural response prediction features described above.
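A compact way to see the ANN idea is a multi-output regressor mapping building and hazard descriptors to the three response quantities. The sketch below uses scikit-learn's MLPRegressor on synthetic data; the input features, target formulas and network size are assumptions made only to keep the example self-contained.

    # Sketch of an ANN predicting displacement, velocity and acceleration (synthetic data)
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    n = 2000
    X = np.column_stack([rng.uniform(10, 60, n),     # building height (m)
                         rng.integers(1, 4, n),      # soil class
                         rng.uniform(0.1, 0.9, n)])  # mapped spectral acceleration (g)
    y = np.column_stack([
        0.02 * X[:, 0] * X[:, 2],                    # displacement (toy formula)
        0.5 * X[:, 2] + 0.01 * X[:, 0],              # velocity
        9.81 * X[:, 2] * (1 + 0.1 * X[:, 1]),        # acceleration
    ]) + rng.normal(0, 0.05, (n, 3))

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0))
    model.fit(X_tr, y_tr)
    print("held-out R^2:", round(model.score(X_te, y_te), 3))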
Quantifying Treatment Benefit in Molecular Subgroups to Assess a Predictive Biomarker
Iasonos, Alexia; Chapman, Paul B.; Satagopan, Jaya M.
2016-01-01
There is an increased interest in finding predictive biomarkers that can guide treatment options for both mutation carriers and non-carriers. The statistical assessment of variation in treatment benefit (TB) according to biomarker carrier status plays an important role in evaluating predictive biomarkers. For time-to-event endpoints, the hazard ratio (HR) for the interaction between treatment and a biomarker from a proportional hazards regression model is commonly used as a measure of variation in treatment benefit. While this can be easily obtained using available statistical software packages, the interpretation of the HR is not straightforward. In this article, we propose different summary measures of variation in TB on the scale of survival probabilities for evaluating a predictive biomarker. The proposed summary measures can be easily interpreted as quantifying the difference in TB, in terms of relative risk or excess absolute risk due to treatment, in carriers versus non-carriers. We illustrate the use and interpretation of the proposed measures using data from completed clinical trials. We encourage clinical practitioners to interpret variation in TB in terms of measures based on survival probabilities, particularly in terms of excess absolute risk, as opposed to the HR. PMID:27141007
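The proposed summary measures are simple functions of survival probabilities, which makes them easy to compute once subgroup-specific survival estimates are in hand. The sketch below uses hypothetical survival probabilities at a fixed time point purely to illustrate the arithmetic of relative risk and excess absolute risk in carriers versus non-carriers; the numbers are not from any trial.

    # Treatment-benefit summaries from survival probabilities (hypothetical values)
    surv = {  # S(t) at a fixed time t, by biomarker status and treatment arm
        ("carrier", "treated"): 0.80, ("carrier", "control"): 0.60,
        ("noncarrier", "treated"): 0.75, ("noncarrier", "control"): 0.70,
    }

    def risk(group, arm):
        return 1.0 - surv[(group, arm)]   # event risk = 1 - survival probability

    for group in ("carrier", "noncarrier"):
        rr = risk(group, "treated") / risk(group, "control")      # relative risk
        arr = risk(group, "control") - risk(group, "treated")     # excess absolute benefit
        print(group, "relative risk:", round(rr, 2), "absolute benefit:", round(arr, 2))

    # Variation in treatment benefit between subgroups, on the absolute-risk scale
    diff = (risk("carrier", "control") - risk("carrier", "treated")) - \
           (risk("noncarrier", "control") - risk("noncarrier", "treated"))
    print("difference in absolute benefit (carriers vs non-carriers):", round(diff, 2))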
NASA Astrophysics Data System (ADS)
Yilmaz, Zeynep
Typically, the vertical component of the ground motion is not considered explicitly in the seismic design of bridges, but in some cases the vertical component can have a significant effect on the structural response. The key question of when the vertical component should be incorporated in design is addressed by a probabilistic seismic hazard assessment study incorporating probabilistic seismic demand models and ground motion models. Nonlinear simulation models with varying configurations of an existing bridge in California were considered in the analytical study. The simulation models were subjected to the set of selected ground motions in two stages: at first, only the horizontal components of the motion were applied; in the second stage the structures were subjected to both horizontal and vertical components applied simultaneously, and the ground motions that produced the largest adverse effects on the bridge system were identified. The moment demand at the mid-span and at the support of the longitudinal girder and the axial force demand in the column are found to be significantly affected by the vertical excitations. These response parameters can be modeled using simple ground motion parameters such as horizontal spectral acceleration and vertical spectral acceleration within a 5% to 30% error margin, depending on the type of parameter and the period of the structure. For a complete hazard assessment, both of these ground motion parameters explaining the structural behavior should also be modeled. For the horizontal spectral acceleration, the Abrahamson and Silva (2008) model was used from among the many available standard models. A new NGA vertical ground motion model consistent with the horizontal model was constructed. These models are combined in a vector probabilistic seismic hazard analysis. A series of hazard curves was developed and presented for different locations in the Bay Area under soil site conditions to provide a roadmap for the prediction of these features in future earthquakes. Findings from this study will contribute to the development of revised guidelines to address vertical ground motion effects, particularly in near-fault regions, in the seismic design of highway bridges.
NASA Astrophysics Data System (ADS)
Rai, Aakash C.; Lin, Chao-Hsin; Chen, Qingyan
2015-02-01
Ozone-terpene reactions are important sources of indoor ultrafine particles (UFPs), a potential health hazard for human beings. Humans themselves act as possible sites for ozone-initiated particle generation through reactions with squalene (a terpene) that is present in their skin, hair, and clothing. This investigation developed a numerical model to probe particle generation from ozone reactions with clothing worn by humans. The model was based on particle generation measured in an environmental chamber as well as physical formulations of particle nucleation, condensational growth, and deposition. In five out of the six test cases, the model was able to predict particle size distributions reasonably well. The failure in the remaining case demonstrated the fundamental limitations of nucleation models. The model that was developed was used to predict particle generation under various building and airliner cabin conditions. These predictions indicate that ozone reactions with human-worn clothing could be an important source of UFPs in densely occupied classrooms and airliner cabins. Those reactions could account for about 40% of the total UFPs measured on a Boeing 737-700 flight. The model predictions at this stage are indicative and should be improved further.
New Orleans After Hurricane Katrina: An Unnatural Disaster?
NASA Astrophysics Data System (ADS)
McNamara, D.; Werner, B.; Kelso, A.
2005-12-01
Motivated by destruction in New Orleans following hurricane Katrina, we use a numerical model to explore how natural processes, economic development, hazard mitigation measures and policy decisions intertwine to produce long periods of quiescence punctuated by disasters of increasing magnitude. Physical, economic and policy dynamics are modeled on a grid representing the subsiding Mississippi Delta region surrounding New Orleans. Water flow and resulting sediment erosion and deposition are simulated in response to prescribed river floods and storms. Economic development operates on a limited number of commodities and services such as agricultural products, oil and chemical industries and port services, with investment and employment responding to both local conditions and global constraints. Development permitting, artificial levee construction and pumping are implemented by policy agents who weigh predicted economic benefits (tax revenue), mitigation costs and potential hazards. Economic risk is reduced by a combination of private insurance, federal flood insurance and disaster relief. With this model, we simulate the initiation and growth of New Orleans coupled with an increasing level of protection from a series of flooding events. Hazard mitigation filters out small magnitude events, but terrain and hydrological modifications amplify the impact of large events. In our model, "natural disasters" are the inevitable outcome of the mismatch between policy based on short-time-scale economic calculations and stochastic forcing by infrequent, high-magnitude flooding events. A comparison of the hazard mitigation response to river- and hurricane-induced flooding will be discussed. Supported by NSF Geology and Paleontology and the Andrew W Mellon Foundation.
NASA Astrophysics Data System (ADS)
Macedonio, Giovanni; Costa, Antonio; Scollo, Simona; Neri, Augusto
2015-04-01
Uncertainty in tephra fallout hazard assessment may depend on the different meteorological datasets and eruptive source parameters used in the modelling. We present a statistical study to analyze this uncertainty in the case of a sub-Plinian eruption of Vesuvius of VEI = 4, with a column height of 18 km and a total erupted mass of 5 × 10¹¹ kg. The hazard assessment for tephra fallout is performed using the advection-diffusion model Hazmap. Firstly, we statistically analyze different meteorological datasets: i) from the daily atmospheric soundings of the stations located in Brindisi (Italy) between 1962 and 1976 and between 1996 and 2012, and in Pratica di Mare (Rome, Italy) between 1996 and 2012; ii) from numerical weather prediction models of the National Oceanic and Atmospheric Administration and of the European Centre for Medium-Range Weather Forecasts. Furthermore, we vary the total mass, the total grain-size distribution, the eruption column height, and the diffusion coefficient. Then, we quantify the impact that different datasets and model input parameters have on the probability maps. Results show that the parameter that most affects the tephra fallout probability maps, keeping the total mass constant, is the particle terminal settling velocity, which is a function of the total grain-size distribution, particle density and shape. In contrast, the hazard assessment depends only weakly on the use of different meteorological datasets, column height and diffusion coefficient.
Modeling survival: application of the Andersen-Gill model to Yellowstone grizzly bears
Johnson, Christopher J.; Boyce, Mark S.; Schwartz, Charles C.; Haroldson, Mark A.
2004-01-01
Wildlife ecologists often use the Kaplan-Meier procedure or Cox proportional hazards model to estimate survival rates, distributions, and magnitude of risk factors. The Andersen-Gill formulation (A-G) of the Cox proportional hazards model has seen limited application to mark-resight data but has a number of advantages, including the ability to accommodate left-censored data, time-varying covariates, multiple events, and discontinuous intervals of risks. We introduce the A-G model including structure of data, interpretation of results, and assessment of assumptions. We then apply the model to 22 years of radiotelemetry data for grizzly bears (Ursus arctos) of the Greater Yellowstone Grizzly Bear Recovery Zone in Montana, Idaho, and Wyoming, USA. We used Akaike's Information Criterion (AICc) and multi-model inference to assess a number of potentially useful predictive models relative to explanatory covariates for demography, human disturbance, and habitat. Using the most parsimonious models, we generated risk ratios, hypothetical survival curves, and a map of the spatial distribution of high-risk areas across the recovery zone. Our results were in agreement with past studies of mortality factors for Yellowstone grizzly bears. Holding other covariates constant, mortality was highest for bears that were subjected to repeated management actions and inhabited areas with high road densities outside Yellowstone National Park. Hazard models developed with covariates descriptive of foraging habitats were not the most parsimonious, but they suggested that high-elevation areas offered lower risks of mortality when compared to agricultural areas.
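The Andersen-Gill formulation uses a counting-process data layout (one row per animal per risk interval, with entry and exit times), which standard survival software fits directly. The sketch below is a hedged illustration using lifelines' CoxTimeVaryingFitter on a small synthetic data set; the covariate (road_density) and the risk model generating the data are assumptions for demonstration only, not the grizzly bear data described above.

    # Andersen-Gill-style fit on counting-process (start/stop) data with lifelines
    import numpy as np
    import pandas as pd
    from lifelines import CoxTimeVaryingFitter

    rng = np.random.default_rng(2)
    rows = []
    for bear in range(1, 41):                        # 40 synthetic bears
        t = 0
        for _ in range(rng.integers(1, 4)):          # 1-3 risk intervals per bear
            dt = int(rng.integers(60, 400))
            road = rng.uniform(0, 1)                 # time-varying covariate
            died = int(rng.random() < 0.05 + 0.15 * road)
            rows.append((bear, t, t + dt, road, died))
            t += dt
            if died:
                break
    df = pd.DataFrame(rows, columns=["bear_id", "start", "stop", "road_density", "died"])

    ctv = CoxTimeVaryingFitter()
    ctv.fit(df, id_col="bear_id", event_col="died", start_col="start", stop_col="stop")
    ctv.print_summary()  # exp(coef) gives risk ratios analogous to those reported above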
Bogani, Giorgio; Cromi, Antonella; Serati, Maurizio; Uccella, Stefano; Donato, Violante Di; Casarin, Jvan; Naro, Edoardo Di; Ghezzi, Fabio
2017-06-01
To identify factors predicting recurrence in vulvar cancer patients undergoing surgical treatment, we retrospectively evaluated data of consecutive patients with squamous cell vulvar cancer treated between January 1, 1990 and December 31, 2013. Basic descriptive statistics and multivariable analysis were used to build models predicting outcomes. Five-year disease-free survival (DFS) and overall survival (OS) were analyzed using the Cox model. The study included 101 patients affected by vulvar cancer: 64 (63%) stage I, 12 (12%) stage II, 20 (20%) stage III, and 5 (5%) stage IV. After a mean (SD) follow-up of 37.6 (22.1) months, 21 (21%) recurrences occurred. Local, regional, and distant failures were recorded in 14 (14%), 6 (6%), and 3 (3%) patients, respectively. Five-year DFS and OS were 77% and 82%, respectively. At multivariate analysis, only stromal invasion >2 mm (hazard ratio: 4.9; 95% confidence interval, 1.17-21.1; P=0.04) and extracapsular lymph node involvement (hazard ratio: 9.0; 95% confidence interval, 1.17-69.5; P=0.03) correlated with worse DFS, although no factor independently correlated with OS. Looking at factors influencing local and regional failure, we observed that stromal invasion >2 mm was the only factor predicting local recurrence, whereas lymph node extracapsular involvement predicted regional recurrence. Stromal invasion >2 mm and lymph node extracapsular spread are the most important factors predicting local and regional failure, respectively. Studies evaluating the effectiveness of adjuvant treatment in high-risk patients are warranted.
Nateghi, Roshanak; Guikema, Seth D; Quiring, Steven M
2011-12-01
This article compares statistical methods for modeling power outage durations during hurricanes and examines the predictive accuracy of these methods. Being able to make accurate predictions of power outage durations is valuable because the information can be used by utility companies to plan their restoration efforts more efficiently. This information can also help inform customers and public agencies of the expected outage times, enabling better collective response planning, and coordination of restoration efforts for other critical infrastructures that depend on electricity. In the long run, outage duration estimates for future storm scenarios may help utilities and public agencies better allocate risk management resources to balance the disruption from hurricanes with the cost of hardening power systems. We compare the out-of-sample predictive accuracy of five distinct statistical models for estimating power outage duration times caused by Hurricane Ivan in 2004. The methods compared include both regression models (accelerated failure time (AFT) and Cox proportional hazard models (Cox PH)) and data mining techniques (regression trees, Bayesian additive regression trees (BART), and multivariate additive regression splines). We then validate our models against two other hurricanes. Our results indicate that BART yields the best prediction accuracy and that it is possible to predict outage durations with reasonable accuracy. © 2011 Society for Risk Analysis.
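A minimal version of the out-of-sample comparison can be sketched by fitting two of the model classes on a training split and scoring them on a holdout set. In the sketch below, a Weibull accelerated failure time model (via lifelines) and gradient boosting (standing in for BART, which is not part of scikit-learn) are compared on synthetic outage durations; the predictors and the data-generating formula are assumptions for illustration, not the Hurricane Ivan data.

    # Holdout comparison of an AFT model and a tree-ensemble model on synthetic durations
    import numpy as np
    import pandas as pd
    from lifelines import WeibullAFTFitter
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(3)
    n = 1000
    wind = rng.uniform(20, 70, n)                  # peak gust (m/s), illustrative predictor
    trees = rng.uniform(0, 1, n)                   # tree-density index
    duration = rng.weibull(1.5, n) * (2 + 0.3 * wind + 10 * trees)   # outage hours
    df = pd.DataFrame({"wind": wind, "trees": trees, "duration": duration, "observed": 1})

    train, test = train_test_split(df, random_state=0)

    aft = WeibullAFTFitter().fit(train, duration_col="duration", event_col="observed")
    aft_pred = aft.predict_median(test)

    gbm = GradientBoostingRegressor(random_state=0).fit(train[["wind", "trees"]], train["duration"])
    gbm_pred = gbm.predict(test[["wind", "trees"]])

    print("AFT MAE:", round(mean_absolute_error(test["duration"], aft_pred), 1))
    print("GBM MAE:", round(mean_absolute_error(test["duration"], gbm_pred), 1))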
Snyder, James
2014-01-01
Objective Demonstrate multivariate multilevel survival analysis within a larger structural equation model. Test the 3 hypotheses that, when confronted by a negative parent, child rates of angry, sad/fearful, and positive emotion will increase, decrease, and stay the same, respectively, for antisocial compared with normal children, and that this same pattern will predict increases in future antisocial behavior. Methods Parent–child dyads were videotaped in the fall of kindergarten in the laboratory, and antisocial behavior ratings were obtained in the fall of kindergarten and third grade. Results Kindergarten antisocial behavior predicted lower rates of child sadness/fear and child positive emotion, but did not predict child anger, in response to parent negativity. Lower rates of child positive emotion and higher rates of child neutral affect in response to parent negativity predicted increases in third-grade antisocial behavior. Conclusions The model is a useful analytic tool for studying rates of social behavior. Lack of positive affect or excess neutral affect may be a new risk factor for child antisocial behavior. PMID:24133296
The SIST-M: Predictive validity of a brief structured Clinical Dementia Rating interview
Okereke, Olivia I.; Pantoja-Galicia, Norberto; Copeland, Maura; Hyman, Bradley T.; Wanggaard, Taylor; Albert, Marilyn S.; Betensky, Rebecca A.; Blacker, Deborah
2011-01-01
Background We previously established reliability and cross-sectional validity of the SIST-M (Structured Interview and Scoring Tool–Massachusetts Alzheimer's Disease Research Center), a shortened version of an instrument shown to predict progression to Alzheimer disease (AD), even among persons with very mild cognitive impairment (vMCI). Objective To test predictive validity of the SIST-M. Methods Participants were 342 community-dwelling, non-demented older adults in a longitudinal study. Baseline Clinical Dementia Rating (CDR) ratings were determined by either: 1) clinician interviews or 2) a previously developed computer algorithm based on 60 questions (of a possible 131) extracted from clinician interviews. We developed age+gender+education-adjusted Cox proportional hazards models using CDR-sum-of-boxes (CDR-SB) as the predictor, where CDR-SB was determined by either clinician interview or algorithm; models were run for the full sample (n=342) and among those jointly classified as vMCI using clinician- and algorithm-based CDR ratings (n=156). We directly compared predictive accuracy using time-dependent Receiver Operating Characteristic (ROC) curves. Results AD hazard ratios (HRs) were similar for clinician-based and algorithm-based CDR-SB: for a 1-point increment in CDR-SB, respective HRs (95% CI)=3.1 (2.5,3.9) and 2.8 (2.2,3.5); among those with vMCI, respective HRs (95% CI) were 2.2 (1.6,3.2) and 2.1 (1.5,3.0). Similarly high predictive accuracy was achieved: the concordance probability (weighted average of the area-under-the-ROC curves) over follow-up was 0.78 vs. 0.76 using clinician-based vs. algorithm-based CDR-SB. Conclusion CDR scores based on items from this shortened interview had high predictive ability for AD – comparable to that using a lengthy clinical interview. PMID:21986342
Petersen, M.D.; Dewey, J.; Hartzell, S.; Mueller, C.; Harmsen, S.; Frankel, A.D.; Rukstales, K.
2004-01-01
The ground motion hazard for Sumatra and the Malaysian peninsula is calculated in a probabilistic framework, using procedures developed for the US National Seismic Hazard Maps. We constructed regional earthquake source models and used standard published and modified attenuation equations to calculate peak ground acceleration at 2% and 10% probability of exceedance in 50 years for rock site conditions. We developed or modified earthquake catalogs and declustered these catalogs to include only independent earthquakes. The resulting catalogs were used to define four source zones that characterize earthquakes in four tectonic environments: subduction zone interface earthquakes, subduction zone deep intraslab earthquakes, strike-slip transform earthquakes, and intraplate earthquakes. The recurrence rates and sizes of historical earthquakes on known faults and across zones were also determined from this modified catalog. In addition to the source zones, our seismic source model considers two major faults that are known historically to generate large earthquakes: the Sumatran subduction zone and the Sumatran transform fault. Several published studies were used to describe earthquakes along these faults during historical and pre-historical time, as well as to identify segmentation models of faults. Peak horizontal ground accelerations were calculated using ground motion prediction relations that were developed from seismic data obtained from the crustal interplate environment, crustal intraplate environment, along the subduction zone interface, and from deep intraslab earthquakes. Most of these relations, however, have not been developed for large distances that are needed for calculating the hazard across the Malaysian peninsula, and none were developed for earthquake ground motions generated in an interplate tectonic environment that are propagated into an intraplate tectonic environment. For the interplate and intraplate crustal earthquakes, we have applied ground-motion prediction relations that are consistent with California (interplate) and India (intraplate) strong motion data that we collected for distances beyond 200 km. For the subduction zone equations, we recognized that the published relationships at large distances were not consistent with global earthquake data that we collected and modified the relations to be compatible with the global subduction zone ground motions. In this analysis, we have used alternative source and attenuation models and weighted them to account for our uncertainty in which model is most appropriate for Sumatra or for the Malaysian peninsula. The resulting peak horizontal ground accelerations for 2% probability of exceedance in 50 years range from over 100% g to about 10% g across Sumatra and generally less than 20% g across most of the Malaysian peninsula. The ground motions at 10% probability of exceedance in 50 years are typically about 60% of the ground motions derived for a hazard level at 2% probability of exceedance in 50 years. The largest contributors to hazard are from the Sumatran faults.
A Bayesian method to rank different model forecasts of the same volcanic ash cloud: Chapter 24
Denlinger, Roger P.; Webley, P.; Mastin, Larry G.; Schwaiger, Hans F.
2012-01-01
Volcanic eruptions often spew fine ash high into the atmosphere, where it is carried downwind, forming long ash clouds that disrupt air traffic and pose a hazard to air travel. To mitigate such hazards, the community studying ash hazards must assess the risk of ash ingestion for any flight path and provide robust and accurate forecasts of volcanic ash dispersal. We provide a quantitative and objective method to evaluate the efficacy of ash dispersal estimates from different models, using Bayes' theorem to assess the predictions that each model makes about ash dispersal. We incorporate model and measurement uncertainty and produce a posterior probability for model input parameters. The integral of the posterior over all possible combinations of model inputs determines the evidence for each model and is used to compare models. We compare two different types of transport models, an Eulerian model (Ash3d) and a Lagrangian model (PUFF), as applied to the 2010 eruptions of Eyjafjallajökull volcano in Iceland. The evidence for each model benefits from common physical characteristics of ash dispersal from an eruption column and provides a measure of how well each model forecasts cloud transport. Given the complexity of the wind fields, we find that the differences between these models depend upon the differences in the way the models disperse ash into the wind from the source plume. With continued observation, the accuracy of the estimates made by each model increases, increasing the efficacy of each model's ability to simulate ash dispersal.
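The model-ranking idea reduces to computing, for each model, the evidence: the likelihood of the observed ash data integrated over the prior distribution of the model inputs. The toy sketch below does this by brute-force grid integration for two one-parameter "models"; the observations, likelihood width and forward models are invented solely to show the mechanics of a Bayes-factor comparison, and do not represent Ash3d or PUFF.

    # Toy evidence calculation and Bayes factor for two competing forward models
    import numpy as np
    from scipy.stats import norm

    observed = np.array([1.2, 0.9, 1.4, 1.1])      # e.g. measured ash loads (arbitrary units)

    def evidence(forward, theta_grid, prior_pdf, sigma=0.3):
        """Integrate likelihood x prior over the input grid (rectangle rule)."""
        dtheta = theta_grid[1] - theta_grid[0]
        ev = 0.0
        for theta in theta_grid:
            like = np.prod(norm.pdf(observed, loc=forward(theta), scale=sigma))
            ev += like * prior_pdf(theta) * dtheta
        return ev

    theta = np.linspace(0.1, 3.0, 300)
    prior = lambda t: 1.0 / (3.0 - 0.1)            # uniform prior over the grid
    model_a = lambda t: t                          # model A predicts a uniform load t
    model_b = lambda t: 0.8 * t                    # model B is biased low

    ev_a, ev_b = evidence(model_a, theta, prior), evidence(model_b, theta, prior)
    print("Bayes factor (A vs B):", round(ev_a / ev_b, 2))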
Evaluation of the Emergency Response Dose Assessment System (ERDAS)
NASA Technical Reports Server (NTRS)
Evans, Randolph J.; Lambert, Winifred C.; Manobianco, John T.; Taylor, Gregory E.; Wheeler, Mark M.; Yersavich, Ann M.
1996-01-01
The Emergency Response Dose Assessment System (ERDAS) is a prototype software and hardware system configured to produce routine mesoscale meteorological forecasts and enhanced dispersion estimates on an operational basis for the Kennedy Space Center (KSC)/Cape Canaveral Air Station (CCAS) region. ERDAS provides emergency response guidance to operations at KSC/CCAS in the case of an accidental hazardous material release or an aborted vehicle launch. This report describes the evaluation of ERDAS including: evaluation of sea breeze predictions, comparison of launch plume location and concentration predictions, a case study of a toxic release, evaluation of model sensitivity to varying input parameters, evaluation of the user interface, assessment of ERDAS's operational capabilities, and a comparison of ERDAS models to the Ocean Breeze/Dry Gulch diffusion model.
Bokhorst, Stef; Pedersen, Stine Højlund; Brucker, Ludovic; Anisimov, Oleg; Bjerke, Jarle W; Brown, Ross D; Ehrich, Dorothee; Essery, Richard L H; Heilig, Achim; Ingvander, Susanne; Johansson, Cecilia; Johansson, Margareta; Jónsdóttir, Ingibjörg Svala; Inga, Niila; Luojus, Kari; Macelloni, Giovanni; Mariash, Heather; McLennan, Donald; Rosqvist, Gunhild Ninis; Sato, Atsushi; Savela, Hannele; Schneebeli, Martin; Sokolov, Aleksandr; Sokratov, Sergey A; Terzago, Silvia; Vikhamar-Schuler, Dagrun; Williamson, Scott; Qiu, Yubao; Callaghan, Terry V
2016-09-01
Snow is a critically important and rapidly changing feature of the Arctic. However, changes in snow-cover and snowpack conditions through time pose challenges for the measurement and prediction of snow. Plausible scenarios of how Arctic snow cover will respond to changing Arctic climate are important for impact assessments and adaptation strategies. Although much progress has been made in understanding and predicting snow-cover changes and their multiple consequences, many uncertainties remain. In this paper, we review advances in snow monitoring and modelling, and the impact of snow changes on ecosystems and society in Arctic regions. Interdisciplinary activities are required to resolve the current limitations on measuring and modelling snow characteristics through the cold season and at different spatial scales to assure human well-being and economic stability, and to improve the ability to predict, manage, and adapt to natural hazards in the Arctic region.
NASA Technical Reports Server (NTRS)
Bokhorst, Stef; Pedersen, Stine Hojlund; Brucker, Ludovic; Anisimov, Oleg; Bjerke, Jarle W.; Brown, Ross D.; Ehrich, Dorothee; Essery, Richard L. H.; Heilig, Achim; Ingvander, Susanne;
2016-01-01
Snow is a critically important and rapidly changing feature of the Arctic. However, changes in snow-cover and snowpack conditions through time pose challenges for the measurement and prediction of snow. Plausible scenarios of how Arctic snow cover will respond to changing Arctic climate are important for impact assessments and adaptation strategies. Although much progress has been made in understanding and predicting snow-cover changes and their multiple consequences, many uncertainties remain. In this paper, we review advances in snow monitoring and modelling, and the impact of snow changes on ecosystems and society in Arctic regions. Interdisciplinary activities are required to resolve the current limitations on measuring and modelling snow characteristics through the cold season and at different spatial scales to assure human well-being and economic stability, and to improve the ability to predict, manage, and adapt to natural hazards in the Arctic region.
Revised seismic hazard map for the Kyrgyz Republic
NASA Astrophysics Data System (ADS)
Fleming, Kevin; Ullah, Shahid; Parolai, Stefano; Walker, Richard; Pittore, Massimiliano; Free, Matthew; Fourniadis, Yannis; Villiani, Manuela; Sousa, Luis; Ormukov, Cholponbek; Moldobekov, Bolot; Takeuchi, Ko
2017-04-01
As part of a seismic risk study sponsored by the World Bank, a revised seismic hazard map for the Kyrgyz Republic has been produced using the OpenQuake-engine developed by the Global Earthquake Model Foundation (GEM). In this project, an earthquake catalogue spanning the period from 250 BCE to 2014 was compiled and processed through spatial and temporal declustering tools. The territory of the Kyrgyz Republic was divided into 31 area sources defined based on local seismicity, covering in total an area extending 200 km beyond the border. The results are presented in terms of Peak Ground Acceleration (PGA). In addition, macroseismic intensity estimates, making use of recent intensity prediction equations, were also provided, given that this measure is still widely used in Central Asia. In order to accommodate the associated epistemic uncertainty, three ground motion prediction equations were used in a logic tree structure. A set of representative earthquake scenarios was further identified based on historical data and the nature of the considered faults. The resulting hazard map, as expected, follows the country's seismicity, with the highest levels of hazard in the northeast, south and southwest of the country, and an elevated part around the centre. When considering PGA, the hazard is slightly greater for major urban centres than in previous works (e.g., Abdrakhmatov et al., 2003), although the macroseismic intensity estimates are lower than in previous studies, e.g., Ulomov (1999). For the scenario assessments, the examples that most affect the urban centres assessed are the Issyk Ata fault (in particular for Bishkek), the Chilik and Kemin faults (in particular Balykchy and Karakol), the Ferghana Valley fault system (in particular Osh, Jalal-Abad and Uzgen), the Oinik Djar fault (Naryn) and the central and western Talas-Fergana fault (Talas). Finally, while site effects (in particular, those dependent on the upper-most geological structure) have an obvious effect on the final hazard level, these are still not fully accounted for, even though a nation-wide first-order Vs30 model (i.e., from the USGS) is available. Abdrakhmatov, K., Havenith, H.-B., Delvaux, D., Jongsmans, D. and Trefois, P. (2003) Probabilistic PGA and Arias Intensity maps of Kyrgyzstan (Central Asia), Journal of Seismology, 7, 203-220. Ulomov, V.I., The GSHAP Region 7 working group (1999) Seismic hazard of Northern Eurasia, Annali di Geofisica, 42, 1012-1038.
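The logic-tree step described above amounts to a weighted combination of the hazard curves produced under each ground motion prediction equation. The sketch below shows that combination with made-up curves and branch weights; it is only a schematic of what the OpenQuake-engine does internally, not its API.

    # Weighted (logic-tree) combination of hazard curves from three GMPE branches
    import numpy as np

    pga = np.logspace(-2, 0, 30)                          # PGA levels (g)
    poe = {                                               # toy 50-year probabilities of exceedance
        "gmpe_1": np.exp(-8 * pga),
        "gmpe_2": np.exp(-10 * pga),
        "gmpe_3": np.exp(-6 * pga),
    }
    weights = {"gmpe_1": 0.4, "gmpe_2": 0.3, "gmpe_3": 0.3}   # logic-tree branch weights

    mean_curve = sum(weights[k] * poe[k] for k in poe)
    # PGA with 10% probability of exceedance in 50 years (interpolate the mean curve)
    pga_10pc = np.interp(0.10, mean_curve[::-1], pga[::-1])
    print("PGA at 10% in 50 years:", round(float(pga_10pc), 3), "g")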
A statistical approach to evaluate flood risk at the regional level: an application to Italy
NASA Astrophysics Data System (ADS)
Rossi, Mauro; Marchesini, Ivan; Salvati, Paola; Donnini, Marco; Guzzetti, Fausto; Sterlacchini, Simone; Zazzeri, Marco; Bonazzi, Alessandro; Carlesi, Andrea
2016-04-01
Floods are frequent and widespread in Italy, causing every year multiple fatalities and extensive damage to public and private structures. A pre-requisite for the development of mitigation schemes, including financial instruments such as insurance, is the ability to quantify their costs starting from the estimation of the underlying flood hazard. However, comprehensive and coherent information on flood-prone areas, and estimates of the frequency and intensity of flood events, are often not available at scales appropriate for risk pooling and diversification. In Italy, River Basin Hydrogeological Plans (PAI), prepared by basin administrations, are the basic descriptive, regulatory, technical and operational tools for environmental planning in flood-prone areas. Nevertheless, such plans do not cover the entire Italian territory, having significant gaps along the minor hydrographic network and in ungauged basins. Several process-based modelling approaches have been used by different basin administrations for flood hazard assessment, resulting in an inhomogeneous hazard zonation of the territory. As a result, flood hazard assessments and expected damage estimations across the different Italian basin administrations are not always coherent. To overcome these limitations, we propose a simplified multivariate statistical approach for regional flood hazard zonation coupled with a flood impact model. This modelling approach has been applied in different Italian basin administrations, allowing a preliminary but coherent and comparable estimation of the flood hazard and the associated impact. Model performance is evaluated by comparing the predicted flood-prone areas with the corresponding PAI zonation. The proposed approach will provide standardized information (following the EU Floods Directive specifications) on flood risk at a regional level, which can in turn be more readily applied to assess the economic impacts of floods. Furthermore, under the assumption of an appropriate statistical characterization of flood risk, the proposed procedure could be applied straightforwardly outside the national borders, particularly in areas with similar geo-environmental settings.
Developing confidence in adverse outcome pathway-based ...
An adverse outcome pathway (AOP) description linking inhibition of aromatase (cytochrome P450 [cyp] 19) to reproductive dysfunction was reviewed for scientific and technical quality and endorsed by the OECD. An intended application of the AOP framework is to support the use of mechanistic or pathway-based data to infer or predict chemical hazards and apical adverse outcomes. As part of this work, ToxCast high throughput screening data were used to identify a chemical's ability to inhibit aromatase activity in vitro. Twenty-four hour in vivo exposures, focused on effects on production and circulating concentrations of 17β-estradiol (E2), key events in the AOP, were conducted to verify in vivo activity. Based on these results, imazalil was selected as a case study chemical to test an AOP-based hazard prediction. A computational model of the fish hypothalamic-pituitary-gonadal-liver axis and a statistically based model of oocyte growth dynamics were used to predict impacts of different concentrations of imazalil on multiple key events along the AOP, assuming continuous exposure for 21 d. Results of the model simulations were used to select test concentrations and design a fathead minnow reproduction study in which fish were exposed to 20, 60, or 200 µg imazalil/L for durations of 2.5, 10, or 21 d. Within 60 h of exposure, female fathead minnows showed significant reductions in ex vivo production of E2, circulating E2 concentrations, and significant increases in
Developing confidence in adverse outcome pathway-based ...
An adverse outcome pathway (AOP) description linking inhibition of aromatase (cytochrome P450 [cyp] 19) to reproductive dysfunction was reviewed for scientific and technical quality and endorsed by the OECD (https://aopwiki.org/wiki/index.php/Aop:25). An intended application of the AOP framework is to support the use of mechanistic or pathway-based data to infer or predict chemical hazards and apical adverse outcomes. As part of this work, ToxCast high throughput screening data were used to identify a chemical's ability to inhibit aromatase activity in vitro. Twenty-four hour in vivo exposures, focused on effects on production and circulating concentrations of 17β-estradiol (E2), key events in the AOP, were conducted to verify in vivo activity. Based on these results, imazalil was selected as a case study chemical to test an AOP-based hazard prediction. A computational model of the fish hypothalamic-pituitary-gonadal-liver axis and a statistically based model of oocyte growth dynamics were used to predict impacts of different concentrations of imazalil on multiple key events along the AOP, assuming continuous exposure for 21 d. Results of the model simulations were used to select test concentrations and design a fathead minnow reproduction study in which fish were exposed to 20, 60, or 200 µg imazalil/L for durations of 2.5, 10, or 21 d. Within 60 h of exposure, female fathead minnows showed significant reductions in ex vivo production of E2, circulating E2 c
van der Fels-Klerx, H J; Booij, C J H
2010-06-01
This article provides an overview of available systems for management of Fusarium mycotoxins in the cereal grain supply chain, with an emphasis on the use of predictive mathematical modeling. From the state of the art, it proposes future developments in modeling and management and their challenges. Mycotoxin contamination in cereal grain-based feed and food products is currently managed and controlled by good agricultural practices, good manufacturing practices, hazard analysis and critical control points, product checks and, more recently, notification systems and predictive mathematical models. Most of the predictive models for Fusarium mycotoxins in cereal grains focus on deoxynivalenol in wheat and aim to help growers make decisions about the application of fungicides during cultivation. Future developments in managing Fusarium mycotoxins should include the linkage of predictive mathematical models with geographical information systems, resulting in region-specific predictions for mycotoxin occurrence. The envisioned geographically oriented decision support system may incorporate various underlying models for specific users' demands and regions, and various related databases to feed the particular models with (geographically oriented) input data. Depending on the user requirements, the system selects the best-fitting model and available input information. Future research areas include organizing data management in the cereal grain supply chain, developing predictive models for other stakeholders (taking into account the period up to harvest), for other Fusarium mycotoxins, and for other cereal grain types, and understanding the underlying effects of the regional component in the models.
Yeh, Hsin-Chih; Jan, Hau-Chern; Wu, Wen-Jeng; Li, Ching-Chia; Li, Wei-Ming; Ke, Hung-Lung; Huang, Shu-Pin; Liu, Chia-Chu; Lee, Yung-Chin; Yang, Sheau-Fang; Liang, Peir-In; Huang, Chun-Nung
2015-01-01
Objectives To investigate the impact of preoperative hydronephrosis and flank pain on the prognosis of patients with upper tract urothelial carcinoma. Methods In total, 472 patients with upper tract urothelial carcinoma managed by radical nephroureterectomy were included from the Kaohsiung Medical University Hospital Healthcare System. Clinicopathological data were collected retrospectively for analysis. The significance of hydronephrosis, especially when combined with flank pain, and other relevant factors for overall and cancer-specific survival was evaluated. Results Of the 472 patients, 292 (62%) had preoperative hydronephrosis and 121 (26%) presented with flank pain. Preoperative hydronephrosis was significantly associated with age, hematuria, flank pain, tumor location, and pathological tumor stage. Concurrent presence of hydronephrosis and flank pain was a significant predictor of non-organ-confined disease (multivariate-adjusted hazard ratio = 2.10, P = 0.025). Kaplan-Meier analysis showed significantly poorer overall and cancer-specific survival in patients with preoperative hydronephrosis (P = 0.005 and P = 0.026, respectively) and in patients with flank pain (P < 0.001 and P = 0.001, respectively) than in those without. However, only simultaneous hydronephrosis and flank pain independently predicted adverse outcome (hazard ratio = 1.98, P = 0.016 for overall survival and hazard ratio = 1.87, P = 0.036 for cancer-specific survival) in multivariate Cox proportional hazards models. In addition, concurrent presence of hydronephrosis and flank pain was also significantly predictive of worse survival in patients with high-grade or muscle-invasive disease. Notably, there was no difference in survival between patients with hydronephrosis but without flank pain and those without hydronephrosis. Conclusion Concurrent preoperative presence of hydronephrosis and flank pain predicted non-organ-confined status of upper tract urothelial carcinoma. When accompanied by flank pain, hydronephrosis represented an independent predictor of worse outcome in patients with upper tract urothelial carcinoma. PMID:26469704
Kollert, Florian; Tippelt, Andrea; Müller, Carolin; Jörres, Rudolf A; Porzelius, Christine; Pfeifer, Michael; Budweiser, Stephan
2013-07-01
In patients with COPD, chronic anemia is known to be an unfavorable prognostic factor. Whether the association between hemoglobin (Hb) levels and long-term survival is restricted to anemia or extends to higher Hb levels has not yet been systematically assessed. We determined Hb levels in 309 subjects with COPD and chronic respiratory failure prior to initiation of noninvasive ventilation, accounting for confounders that might affect Hb. Subjects were categorized as anemic (Hb < 12 g/dL in females, Hb < 13 g/dL in males), polycythemic (Hb ≥ 15 g/dL in females, Hb ≥ 17 g/dL in males), or normocythemic. In addition, percentiles of Hb values were analyzed with regard to mortality from any cause. Two hundred seven subjects (67.0%) showed normal Hb levels, 46 (14.9%) had anemia, and 56 (18.1%) had polycythemia. Polycythemic subjects showed a higher survival rate than anemic (P = .01) and normocythemic subjects (P = .043). In a univariate Cox hazards model, Hb was associated with long-term survival (hazard ratio 0.855; 95% CI 0.783-0.934, P < .001). The 58th percentile of Hb (14.3 g/dL in females, 15.1 g/dL in males) yielded the highest discriminative value for predicting survival (hazard ratio 0.463, 95% CI 0.324-0.660, P < .001). In the multivariate analysis this cutoff was an independent predictor of survival (hazard ratio 0.627, 95% CI 0.414-0.949, P = .03), in addition to age and body mass index. In subjects with COPD and chronic respiratory failure undergoing treatment with noninvasive ventilation and long-term oxygen therapy (LTOT), high Hb levels are associated with better long-term survival. The optimal cutoff level for prediction was above the established threshold defining anemia. Thus, predicting survival only on the basis of anemia does not fully utilize the prognostic potential of Hb values in COPD.
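The percentile-scanning step (finding the Hb cutoff with the highest discriminative value) can be illustrated with a simple search over candidate cutoffs, scoring each split. The sketch below uses a log-rank test from lifelines as the scoring rule on synthetic data; the data-generating assumptions and the choice of the log-rank statistic (rather than the Cox-based criterion in the paper) are simplifications for illustration.

    # Scan Hb percentiles for the cutoff that best separates survival (synthetic data)
    import numpy as np
    from lifelines.statistics import logrank_test

    rng = np.random.default_rng(4)
    n = 309
    hb = rng.normal(14, 2, n)
    time = rng.exponential(3 + 0.4 * np.clip(hb - 12, 0, None), n)   # higher Hb -> longer survival
    event = rng.integers(0, 2, n)                                    # 1 = death observed

    best = None
    for pct in range(30, 80, 2):
        cut = np.percentile(hb, pct)
        high = hb >= cut
        res = logrank_test(time[high], time[~high],
                           event_observed_A=event[high], event_observed_B=event[~high])
        if best is None or res.test_statistic > best[1]:
            best = (pct, res.test_statistic, cut)

    print("best percentile:", best[0], "cutoff Hb ~", round(best[2], 1), "g/dL")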
Brown, David J.; Orelien, Jean; Gordon, John D.; Chu, Andrew C.; Chu, Michael D.; Nakamura, Masafumi; Handa, Hiroshi; Kayama, Fujio; Denison, Michael S.; Clark, George C.
2010-01-01
Remediation of hazardous waste sites requires efficient and cost-effective methods to assess the extent of contamination by toxic substances including dioxin-like chemicals. Traditionally, dioxin-like contamination has been assessed by gas chromatography/high-resolution mass spectrometry (GC/MS) analysis for specific polychlorinated dibenzo-p-dioxins, dibenzofurans, and biphenyl congeners. Toxic equivalency factors for these congeners are then used to estimate the overall dioxin toxic equivalency (TEQ) of complex mixtures found in samples. The XDS-CALUX bioassay estimates contamination by dioxin-like chemicals in a sample extract by measuring expression of a sensitive reporter gene in genetically engineered cells. The output of the XDS-CALUX assay is a CALUX-TEQ value, calibrated based on TCDD standards. Soil samples taken from a variety of hazardous waste sites were measured using the XDS-CALUX bioassay and GC/MS. TEQ and CALUX-TEQ from these methods were compared, and a mathematical model was developed describing the relationship between these two data sets: log(TEQ) = 0.654 × log(CALUX-TEQ) + 0.058 × (log(CALUX-TEQ))². Applying this equation to these samples showed that predicted and GC/MS-measured TEQ values strongly correlate (R² = 0.876) and that TEQ values predicted from CALUX-TEQ were on average nearly identical to the GC/MS-TEQ. The ability of XDS-CALUX bioassay data to predict GC/MS-derived TEQ data should make this procedure useful in risk assessment and management decisions. PMID:17626436
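The calibration above is a quadratic fit in log-log space, which is straightforward to reproduce. The sketch below generates synthetic paired values, fits the quadratic with numpy, and reports R²; the simulated data and noise level are assumptions made only to demonstrate the fitting step, not the study's measurements.

    # Quadratic log-log calibration between CALUX-TEQ and GC/MS TEQ (synthetic data)
    import numpy as np

    rng = np.random.default_rng(5)
    calux_teq = 10 ** rng.uniform(0, 4, 40)                  # bioassay values
    x = np.log10(calux_teq)
    log_teq = 0.654 * x + 0.058 * x**2 + rng.normal(0, 0.15, x.size)   # "measured" GC/MS TEQ

    coeffs = np.polyfit(x, log_teq, deg=2)     # [quadratic, linear, intercept]
    pred = np.polyval(coeffs, x)
    ss_res = np.sum((log_teq - pred) ** 2)
    ss_tot = np.sum((log_teq - log_teq.mean()) ** 2)
    print("coefficients:", np.round(coeffs, 3), "R^2:", round(1 - ss_res / ss_tot, 3))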
Redman, A D; Butler, J D; Letinski, D J; Di Toro, D M; Leon Paumen, M; Parkerton, T F
2018-05-01
Solid-phase microextraction fibers coated with polydimethylsiloxane (PDMS) provide a convenient passive sampling format to characterize bioavailability of petroleum substances. Hydrocarbons absorb onto PDMS in proportion to both freely dissolved concentrations and partitioning properties of the individual constituents, which parallels the mechanistic basis used to predict aquatic toxicity in the PETROTOX model. When deployed in a non-depletive manner, combining SPME with thermal desorption and quantification using gas chromatography-flame ionization creates a biomimetic extraction (BE) procedure that has the potential to simplify aquatic hazard assessments of petroleum substances since the total moles of all hydrocarbons sorbed to the fiber can be related to toxic thresholds in target lipid of aquatic organisms. The objective of this work is to describe the technical basis for applying BE measurements to predict toxicity of petroleum substances. Critical BE-based PDMS concentrations corresponding to adverse effects were empirically derived from toxicity tests on different petroleum substances with multiple test species. The resulting species sensitivity distribution (SSD) of PDMS effect concentrations was then compared and found consistent with the previously reported target lipid-based SSD. Further, BE data collected on samples of aqueous media dosed with a wide range of petroleum substances were highly correlated to predicted toxic units derived using the PETROTOX model. These findings provide justification for applying BE in environmental hazard and risk evaluations of petroleum substances and related mixtures. Copyright © 2018 Elsevier Ltd. All rights reserved.
An object-oriented software for fate and exposure assessments.
Scheil, S; Baumgarten, G; Reiter, B; Schwartz, S; Wagner, J O; Trapp, S; Matthies, M
1995-07-01
The model system CemoS (Chemical Exposure Model System) was developed for the exposure prediction of hazardous chemicals released to the environment. Eight different models were implemented, covering chemical fate simulation in air, water, soil and plants after continuous or single emissions from point and diffuse sources. Scenario studies are supported by a substance database and an environmental database. All input data are checked for plausibility. Substance and environmental process estimation functions facilitate generic model calculations. CemoS is implemented in a modular structure using object-oriented programming.
Assessment of volcanic hazards, vulnerability, risk and uncertainty (Invited)
NASA Astrophysics Data System (ADS)
Sparks, R. S.
2009-12-01
A volcanic hazard is any volcanic phenomenon that threatens communities. These hazards include volcanic events like pyroclastic flows, explosions, ash fall and lavas, and secondary effects such as lahars and landslides. Volcanic hazards are described by the physical characteristics of the phenomena, by the assessment of the areas that they are likely to affect, and by the magnitude-dependent return period of events. Volcanic hazard maps are generated by mapping past volcanic events and by modelling the hazardous processes. Both of these methods have their strengths and limitations, and a robust map should use both approaches in combination. Past records, studied through stratigraphy, the distribution of deposits and age dating, are typically incomplete and may be biased. Very significant volcanic hazards, such as surge clouds and volcanic blasts, are not well preserved in the geological record, for example. Models of volcanic processes are very useful to help identify hazardous areas that lack geological evidence. They are, however, limited by simplifications and incomplete understanding of the physics. Many practical volcanic hazard mapping tools are also very empirical. Hazard maps are typically abstracted into hazard zone maps, which are sometimes called threat or risk maps. Their aim is to identify areas at high levels of threat, and the boundaries between zones may take account of other factors such as roads, escape routes during evacuation, and infrastructure. These boundaries may change with time due to new knowledge of the hazards or changes in volcanic activity levels. Alternatively, they may remain static while the implications of the zones change as volcanic activity changes. Zone maps are used for planning purposes and for management of volcanic crises. Volcanic hazard maps are depictions of the likelihood of future volcanic phenomena affecting places and people. Volcanic phenomena are naturally variable, often complex and not fully understood. There are many sources of uncertainty in forecasting the areas that volcanic activity will affect and the severity of the effects. Uncertainties arise from: natural variability, inadequate data, biased data, incomplete data, lack of understanding of the processes, limitations of predictive models, ambiguity, and unknown unknowns. The description of volcanic hazards is thus necessarily probabilistic and requires assessment of the attendant uncertainties. Several issues arise from the probabilistic nature of volcanic hazards and the intrinsic uncertainties. Although zonation maps require well-defined boundaries for administrative pragmatism, such boundaries cannot divide areas that are completely safe from those that are unsafe. Levels of danger or safety need to be defined to decide on and justify boundaries through the concepts of vulnerability and risk. More data, better observations and improved models may reduce uncertainties, but they can also increase uncertainties and may lead to re-appraisal of zone boundaries. Probabilities inferred by statistical techniques are hard to communicate. Expert elicitation is an emerging methodology for risk assessment and uncertainty evaluation. The method has been applied in one major volcanic crisis (Soufrière Hills Volcano, Montserrat), and is being applied in planning for volcanic crises at Vesuvius.
Liver Surface Nodularity Score Allows Prediction of Cirrhosis Decompensation and Death.
Smith, Andrew D; Zand, Kevin A; Florez, Edward; Sirous, Reza; Shlapak, Darya; Souza, Frederico; Roda, Manohar; Bryan, Jason; Vasanji, Amit; Griswold, Michael; Lirette, Seth T
2017-06-01
Purpose To determine whether use of the liver surface nodularity (LSN) score, a quantitative biomarker derived from routine computed tomographic (CT) images, allows prediction of cirrhosis decompensation and death. Materials and Methods For this institutional review board-approved HIPAA-compliant retrospective study, adult patients with cirrhosis and Model for End-Stage Liver Disease (MELD) score within 3 months of initial liver CT imaging between January 3, 2006, and May 30, 2012, were identified from electronic medical records (n = 830). The LSN score was measured by using CT images and quantitative software. Competing risk regression was used to determine the association of the LSN score with hepatic decompensation and overall survival. A risk model combining LSN scores (<3 or ≥3) and MELD scores (<10 or ≥10) was created for predicting liver-related events. Results In patients with compensated cirrhosis, 40% (129 of 326) experienced decompensation during a median follow-up period of 4.22 years. After adjustment for competing risks including MELD score, LSN score (hazard ratio, 1.38; 95% confidence interval: 1.06, 1.79) was found to be independently predictive of hepatic decompensation. Median times to decompensation of patients at high (1.76 years, n = 48), intermediate (3.79 years, n = 126), and low (6.14 years, n = 152) risk of hepatic decompensation were significantly different (P < .001). Among the full cohort with compensated or decompensated cirrhosis, 61% (504 of 830) died during the median follow-up period of 2.26 years. After adjustment for competing risks, LSN score (hazard ratio, 1.22; 95% confidence interval: 1.11, 1.33) and MELD score (hazard ratio, 1.08; 95% confidence interval: 1.06, 1.11) were found to be independent predictors of death. Median times to death of patients at high (0.94 years, n = 315), intermediate (2.79 years, n = 312), and low (4.69 years, n = 203) risk were significantly different (P < .001). Conclusion The LSN score derived from routine CT images allows prediction of cirrhosis decompensation and death. © RSNA, 2016 Online supplemental material is available for this article.
NASA Technical Reports Server (NTRS)
Kaplan, Michael L.; Lin, Yuh-Lang
2004-01-01
During the grant period, several tasks were performed in support of the NASA Turbulence Prediction and Warning Systems (TPAWS) program. The primary focus of the research was on characterizing the preturbulence environment by developing predictive tools and simulating atmospheric conditions that preceded severe turbulence. The goal of the research was to provide both a dynamical understanding of conditions that preceded turbulence and predictive tools in support of operational NASA B-757 turbulence research flights. The advancements in characterizing the preturbulence environment will be applied by NASA to sensor development for predicting turbulence onboard commercial aircraft. Numerical simulations with atmospheric models as well as multi-scale observational analyses provided insights into the environment organizing turbulence in a total of forty-eight specific case studies of severe, accident-producing turbulence affecting commercial aircraft. A paradigm was developed which diagnosed specific atmospheric circulation systems, from the synoptic scale down to the meso-γ scale, that preceded turbulence in both clear air and in proximity to convection. The emphasis was primarily on convective turbulence, as that is what the TPAWS program is most focused on in terms of developing improved sensors for turbulence warning and avoidance. However, the dynamical paradigm also has applicability to clear-air and mountain turbulence. This dynamical sequence of events was then employed to formulate and test new hazard prediction indices that were first tested in research simulation studies and then further tested in support of the NASA B-757 turbulence research flights. The new hazard characterization algorithms were utilized in a Real Time Turbulence Model (RTTM) that was operationally employed to support the NASA B-757 turbulence research flights. Improvements in the RTTM were implemented in an effort to increase the accuracy of the operational characterization of the preturbulence environment. Additionally, the initial research necessary to create a statistical evaluation scheme for the characterization indices utilized in the RTTM was undertaken. Results of all components of this research were then published in NASA contractor reports and scientific journal papers.
Palatini, Paolo; Reboldi, Gianpaolo; Beilin, Lawrence J; Eguchi, Kazuo; Imai, Yutaka; Kario, Kazuomi; Ohkubo, Takayoshi; Pierdomenico, Sante D; Saladini, Francesca; Schwartz, Joseph E; Wing, Lindon; Verdecchia, Paolo
2013-09-30
Data from prospective cohort studies regarding the association between ambulatory heart rate (HR) and cardiovascular events (CVE) are conflicting. To investigate whether ambulatory HR predicts CVE in hypertension, we performed 24-hour ambulatory blood pressure and HR monitoring in 7600 hypertensive patients aged 52 ± 16 years from Italy, the U.S.A., Japan, and Australia, included in the 'ABP-International' registry. All were untreated at the baseline examination. Standardized hazard ratios for ambulatory HRs were computed, stratifying for cohort, and adjusting for age, gender, blood pressure, smoking, diabetes, serum total cholesterol and serum creatinine. During a median follow-up of 5.0 years there were 639 fatal and nonfatal CVE. In a multivariable Cox model, night-time HR predicted fatal combined with nonfatal CVE more closely than 24-h HR (p = 0.007 and p = 0.03, respectively). Daytime HR and the night:day HR ratio were not associated with CVE (p = 0.07 and p = 0.18, respectively). The hazard ratio of fatal combined with nonfatal CVE for a 10-beats/min increment of night-time HR was 1.13 (95% CI, 1.04-1.22). This relationship remained significant when subjects taking beta-blockers during the follow-up (hazard ratio, 1.15; 95% CI, 1.05-1.25) or subjects who had an event within 5 years after enrollment (hazard ratio, 1.23; 95% CI, 1.05-1.45) were excluded from the analysis. At variance with previous data obtained from general populations, ambulatory HR added to the risk stratification for fatal combined with nonfatal CVE in the hypertensive patients from the ABP-International study. Night-time HR was a better predictor of CVE than daytime HR. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Edwards, Benjamin; Fäh, Donat
2017-11-01
Strong ground-motion databases used to develop ground-motion prediction equations (GMPEs) and calibrate stochastic simulation models generally include relatively few recordings on what can be considered as engineering rock or hard rock. Ground-motion predictions for such sites are therefore susceptible to uncertainty and bias, which can then propagate into site-specific hazard and risk estimates. In order to explore this issue we present a study investigating the prediction of ground motion at rock sites in Japan, where a wide range of recording-site types (from soil to very hard rock) are available for analysis. We employ two approaches: empirical GMPEs and stochastic simulations. The study is undertaken in the context of the PEGASOS Refinement Project (PRP), a Senior Seismic Hazard Analysis Committee (SSHAC) Level 4 probabilistic seismic hazard analysis of Swiss nuclear power plants, commissioned by swissnuclear and running from 2008 to 2013. In order to reduce the impact of site-to-site variability and expand the available data set for rock and hard-rock sites we adjusted Japanese ground-motion data (recorded at sites with 110 m s⁻¹ < Vs30 < 2100 m s⁻¹) to a common hard-rock reference. This was done through deconvolution of: (i) empirically derived amplification functions and (ii) the theoretical 1-D SH amplification between the bedrock and surface. Initial comparison of a Japanese GMPE's predictions with data recorded at rock and hard-rock sites showed systematic overestimation of ground motion. A further investigation of five global GMPEs' prediction residuals as a function of quarter-wavelength velocity showed that they all presented systematic misfit trends, leading to overestimation of median ground motions at rock and hard-rock sites in Japan. In an alternative approach, a stochastic simulation method was tested, allowing the direct incorporation of site-specific Fourier amplification information in forward simulations. We use an adjusted version of the model developed for Switzerland during the PRP. The median simulation prediction at true rock and hard-rock sites (Vs30 > 800 m s⁻¹) was found to be comparable (within expected levels of epistemic uncertainty) to predictions using an empirical GMPE, with reduced residual misfit. As expected, due to including site-specific information in the simulations, the reduction in misfit could be isolated to a reduction in the site-related within-event uncertainty. The results of this study support the use of finite or pseudo-finite fault stochastic simulation methods in estimating strong ground motions in regions of weak and moderate seismicity, such as central and northern Europe. Furthermore, it indicates that weak-motion data have the potential to allow estimation of between- and within-site variability in ground motion, which is a critical issue in site-specific seismic hazard analysis, particularly for safety critical structures.
Gazica, Michele W; Spector, Paul E
2016-01-01
Safety climate, violence prevention climate, and civility climate were independently developed and linked to domain-specific workplace hazards, although all three were designed to promote the physical and psychological safety of workers. The aim of this study was to test domain specificity between conceptually related workplace climates and relevant workplace hazards. Data were collected from 368 persons employed in various industries, and descriptive statistics were calculated for all study variables. Correlational and relative weights analyses were used to test for domain specificity. The three climate domains were similarly predictive of most workplace hazards, regardless of domain specificity. This study suggests that the three climate domains share a common higher order construct that may predict relevant workplace hazards better than any of the scales alone.
Chuang, Michael L.; Gona, Philimon; Salton, Carol J.; Yeon, Susan B.; Kissinger, Kraig V.; Blease, Susan J.; Levy, Daniel; O'Donnell, Christopher J.; Manning, Warren J.
2013-01-01
We sought to determine whether depressed myocardial contraction fraction (MCF, the ratio of left ventricular (LV) stroke volume to myocardial volume) predicts cardiovascular disease (CVD) events in initially healthy adults. A subset (N=318, 60±9 yrs, 158 men) of the Framingham Heart Study Offspring cohort free of clinical CVD underwent volumetric cardiovascular magnetic resonance (CMR) imaging in 1998–1999. LV ejection fraction (EF), mass and MCF were determined. “Hard” CVD events comprised cardiovascular death, myocardial infarction, stroke or new heart failure. A Cox proportional hazards model adjusting for Framingham Coronary Risk Score (FCRS) was used to estimate hazard ratios for incident hard CVD events for sex-specific quartiles of MCF, LV mass and LVEF. The lowest quartile of LV mass and highest quartiles of MCF and EF served as referent. Kaplan-Meier survival plots and the log rank test were used to compare event-free survival. MCF was greater in women (0.58±0.13) than men (0.52±0.11), p<0.01. Nearly all (99%) participants had EF ≥ 0.55. Over up to 9-year (median 5.2) follow-up, 31 participants (10%) experienced an incident hard CVD event. Participants in the lowest quartile of MCF were 7 times more likely to develop hard CVD (hazard ratio 7.11, p=0.010) compared to the referent quartile, and the elevated hazards persisted even after adjustment for LV mass (hazard ratio=6.09, p=0.020). Participants in the highest quartile of LV mass/height^2.7 had a nearly five-fold risk (hazard ratio 4.68, p=0.016). Event-free survival was shorter in lowest-quartile MCF, p = 0.0006, but not in lowest-quartile LVEF. Conclusion: In a cohort of adults initially without clinical CVD, lowest-quartile MCF conferred an increased hazard for hard CVD events after adjustment for traditional CVD risk factors and LV mass. PMID:22381161
Chuang, Michael L; Gona, Philimon; Salton, Carol J; Yeon, Susan B; Kissinger, Kraig V; Blease, Susan J; Levy, Daniel; O'Donnell, Christopher J; Manning, Warren J
2012-05-15
We sought to determine whether depressed myocardial contraction fraction (MCF; ratio of left ventricular [LV] stroke volume to myocardial volume) predicts cardiovascular disease (CVD) events in initially healthy adults. A subset (n = 318, 60 ± 9 years old, 158 men) of the Framingham Heart Study Offspring cohort free of clinical CVD underwent volumetric cardiovascular magnetic resonance imaging in 1998 through 1999. LV ejection fraction (EF), mass, and MCF were determined. "Hard" CVD events consisted of cardiovascular death, myocardial infarction, stroke, or new heart failure. A Cox proportional hazards model adjusting for Framingham Coronary Risk Score was used to estimate hazard ratios for incident hard CVD events for gender-specific quartiles of MCF, LV mass, and LVEF. The lowest quartile of LV mass and highest quartiles of MCF and EF served as referents. Kaplan-Meier survival plots and log-rank test were used to compare event-free survival. MCF was greater in women (0.58 ± 0.13) than in men (0.52 ± 0.11, p <0.01). Nearly all participants (99%) had EF ≥0.55. During an up to 9-year follow-up (median 5.2), 31 participants (10%) developed an incident hard CVD event. Participants in the lowest quartile of MCF were 7 times more likely to develop a hard CVD event (hazard ratio 7.11, p = 0.010) compared to the remaining quartiles, and increased hazards persisted even after adjustment for LV mass (hazard ratio 6.09, p = 0.020). The highest quartile of LV mass/height^2.7 carried a nearly fivefold risk (hazard ratio 4.68, p = 0.016). Event-free survival was shorter in lowest-quartile MCF (p = 0.0006) but not in lowest-quartile LVEF. In conclusion, in a cohort of adults initially without clinical CVD, lowest-quartile MCF conferred an increased hazard for hard CVD events after adjustment for traditional CVD risk factors and LV mass. Copyright © 2012 Elsevier Inc. All rights reserved.
Bayesian transformation cure frailty models with multivariate failure time data.
Yin, Guosheng
2008-12-10
We propose a class of transformation cure frailty models to accommodate a survival fraction in multivariate failure time data. Established through a general power transformation, this family of cure frailty models includes the proportional hazards and the proportional odds modeling structures as two special cases. Within the Bayesian paradigm, we obtain the joint posterior distribution and the corresponding full conditional distributions of the model parameters for the implementation of Gibbs sampling. Model selection is based on the conditional predictive ordinate statistic and deviance information criterion. As an illustration, we apply the proposed method to a real data set from dentistry.
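For orientation, one widely used power-type transformation family is sketched below; this is a generic illustration under stated assumptions, not necessarily the exact parameterization used in the paper. Writing the survivor function as

    S(t \mid x) = \exp\{ -G_r( \Lambda_0(t)\, e^{x^\top \beta} ) \}, \qquad
    G_r(u) = \frac{\log(1 + r u)}{r} \ \ (r > 0), \qquad G_0(u) = u,

the choice r = 0 recovers the proportional hazards model and r = 1 the proportional odds model; a cured (survival) fraction arises when the baseline cumulative hazard \Lambda_0(t) is bounded, and a shared frailty multiplying \Lambda_0(t) (omitted above for brevity) induces dependence among failure times within the same cluster.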
NASA Technical Reports Server (NTRS)
Kattan, Michael W.; Hess, Kenneth R.
1998-01-01
New computationally intensive tools for medical survival analyses include recursive partitioning (also called CART) and artificial neural networks. A challenge that remains is to better understand the behavior of these techniques in an effort to know when they will be effective tools. Theoretically they may overcome limitations of the traditional multivariable survival technique, the Cox proportional hazards regression model. Experiments were designed to test whether the new tools would, in practice, overcome these limitations. Two datasets in which theory suggests CART and the neural network should outperform the Cox model were selected. The first was a published leukemia dataset manipulated to have a strong interaction that CART should detect. The second was a published cirrhosis dataset with pronounced nonlinear effects that a neural network should fit. Repeated sampling of 50 training and testing subsets was applied to each technique. The concordance index C was calculated as a measure of predictive accuracy by each technique on the testing dataset. In the interaction dataset, CART outperformed Cox (P less than 0.05) with a C improvement of 0.1 (95% CI, 0.08 to 0.12). In the nonlinear dataset, the neural network outperformed the Cox model (P less than 0.05), but by a very slight amount (0.015). As predicted by theory, CART and the neural network were able to overcome limitations of the Cox model. Experiments like these are important to increase our understanding of when one of these new techniques will outperform the standard Cox model. Further research is necessary to predict which technique will do best a priori and to assess the magnitude of superiority.
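As a rough illustration of the evaluation design described above (repeated train/test splits scored with the concordance index), the following Python sketch uses the lifelines library; the data frame, column names, and competing model are placeholders, not the study's actual data or code.

    import numpy as np
    from lifelines import CoxPHFitter
    from lifelines.utils import concordance_index
    from sklearn.model_selection import train_test_split

    def mean_cox_c_index(df, duration_col="time", event_col="event", n_splits=50, seed=0):
        """Average test-set concordance of a Cox model over repeated random splits."""
        rng = np.random.RandomState(seed)
        scores = []
        for _ in range(n_splits):
            train, test = train_test_split(df, test_size=0.3, random_state=rng)
            cph = CoxPHFitter().fit(train, duration_col=duration_col, event_col=event_col)
            # Higher partial hazard means higher risk, so negate it before scoring concordance
            # against survival times (longer survival should pair with lower risk).
            risk = cph.predict_partial_hazard(test)
            scores.append(concordance_index(test[duration_col], -risk, test[event_col]))
        return float(np.mean(scores))

The same loop can be repeated with a tree-based or neural-network survival model in place of the Cox fit to compare mean test-set C values, which is the spirit of the comparison reported above.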
A model for predicting life expectancy of children with cystic fibrosis.
Aurora, P; Wade, A; Whitmore, P; Whitehead, B
2000-12-01
In this study the authors aimed to produce a model for predicting the life expectancy of children with severe cystic fibrosis (CF) lung disease. The survival of 181 children with severe CF lung disease referred for transplantation assessment 1988-1998 (mean age 11.5 yrs, median survival without transplant 1.9 yrs from date of assessment) was studied. Proportional hazards modelling was used to identify assessment measurements that are of value in predicting longevity. The resultant model included low height-predicted forced expiratory volume in one second (FEV1), low minimum oxygen saturation (Sa,O2min) during a 12-min walk, high age-adjusted resting heart rate, young age, female sex, low plasma albumin, and low blood haemoglobin as predictors for poor prognosis. Extrapolation from the model suggests that a 12-yr-old male child with an FEV1 of 30% pred and a Sa,O2min of 85% has a 44% risk of death within 2 yrs (95% confidence interval (CI) 35-54%), whilst a female child with the same measurements has a 63% risk of death (95% CI 52-73%) within the same period. The model produced may be of value in predicting the life expectancy of children with severe cystic fibrosis lung disease and in optimizing the timing of lung transplantation.
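Risks of the kind quoted above can, in principle, be read off any fitted proportional hazards model via its baseline survival function; the small sketch below shows the arithmetic with made-up coefficient and baseline values, since the paper's actual estimates are not given in the abstract.

    import numpy as np

    def two_year_risk(baseline_surv_2yr, linear_predictor):
        """Risk of death within 2 years under a Cox model: 1 - S0(2) ** exp(lp)."""
        return 1.0 - baseline_surv_2yr ** np.exp(linear_predictor)

    # Hypothetical illustration only: a child whose covariates give a linear predictor
    # of 0.9 against an assumed baseline 2-year survival of 0.75.
    print(two_year_risk(0.75, 0.9))   # ~0.51, i.e. a 51% predicted 2-year risk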
Modelling smoke transport from wildland fires: a review
Scott L. Goodrick; Gary L. Achtemeier; Narasimhan K. Larkin; Yongqiang Liu; Tara M. Strand
2012-01-01
Among the key issues in smoke management is predicting the magnitude and location of smoke effects. These vary in severity from hazardous (acute health conditions and drastic visibility impairment to transportation) to nuisance (regional haze), and occur across a range of scales (local to continental). Over the years a variety of tools have been developed to aid in...
Behaviour and effects of prescribed fire in masticated fuelbeds
Eric Knapp; J. Morgan Varner; Matt Busse; Carl Skinner; Carol Shestak
2011-01-01
Mechanical mastication converts shrub and small tree fuels into surface fuels, and this method is being widely used as a treatment to reduce fire hazard. The compactness of these fuelbeds is thought to moderate fire behaviour, but whether standard fuel models can accurately predict fire behaviour and effects is poorly understood. Prescribed burns were conducted in...
NASA Astrophysics Data System (ADS)
Stolarski, David J.; Cain, Clarence P.; Schuster, Kurt J.; Imholte, Michelle; Carothers, Val C.; Buffington, Gavin D.; Edwards, Michael; Thomas, Robert J.; Rockwell, Benjamin A.
2005-04-01
To assess the retinal hazards related to simultaneous exposure from two lasers of separate wavelengths, the retinal effects of 5-second laser irradiation from 532 nm and 647 nm were determined in non-human primates. A total of six eyes were exposed using equal amounts of power to determine the damage levels. The results were combined with those of previous, two-wavelength studies done by our group and compared to damage models developed in our lab. The data were also compared to the calculations resulting from use of the currently accepted method of predicting hazards from simultaneous lasing.
Pan, Feng; Reifsnider, Odette; Zheng, Ying; Proskorovsky, Irina; Li, Tracy; He, Jianming; Sorensen, Sonja V
2018-04-01
Treatment landscape in prostate cancer has changed dramatically with the emergence of new medicines in the past few years. The traditional survival partition model (SPM) cannot accurately predict long-term clinical outcomes because it is limited by its ability to capture the key consequences associated with this changing treatment paradigm. The objective of this study was to introduce and validate a discrete-event simulation (DES) model for prostate cancer. A DES model was developed to simulate overall survival (OS) and other clinical outcomes based on patient characteristics, treatment received, and disease progression history. We tested and validated this model with clinical trial data from the abiraterone acetate phase III trial (COU-AA-302). The model was constructed with interim data (55% death) and validated with the final data (96% death). Predicted OS values were also compared with those from the SPM. The DES model's predicted time to chemotherapy and OS are highly consistent with the final observed data. The model accurately predicts the OS hazard ratio from the final data cut (predicted: 0.74; 95% confidence interval [CI] 0.64-0.85 and final actual: 0.74; 95% CI 0.6-0.88). The log-rank test to compare the observed and predicted OS curves indicated no statistically significant difference between observed and predicted curves. However, the predictions from the SPM based on interim data deviated significantly from the final data. Our study showed that a DES model with properly developed risk equations presents considerable improvements to the more traditional SPM in flexibility and predictive accuracy of long-term outcomes. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Development of algal interspecies correlation estimation models for chemical hazard assessment.
Brill, Jessica L; Belanger, Scott E; Chaney, Joel G; Dyer, Scott D; Raimondo, Sandy; Barron, Mace G; Pittinger, Charles A
2016-09-01
Web-based Interspecies Correlation Estimation (ICE) is an application developed to predict the acute toxicity of a chemical from 1 species to another taxon. Web-ICE models use the acute toxicity value for a surrogate species to predict effect values for other species, thus potentially filling in data gaps for a variety of environmental assessment purposes. Web-ICE has historically been dominated by aquatic and terrestrial animal prediction models. Web-ICE models for algal species were essentially absent and are addressed in the present study. Public and private sector-held algal toxicity data were compiled and reviewed for quality based on relevant aspects of individual studies. Interspecies correlations were constructed from the most commonly tested algal genera for a broad spectrum of chemicals. The ICE regressions were developed based on acute 72-h and 96-h endpoint values involving 1647 unique studies on 476 unique chemicals encompassing 40 genera and 70 species of green, blue-green, and diatom algae. Acceptance criteria for algal ICE models were established prior to evaluation of individual models and included a minimum sample size of 3, a statistically significant regression slope, and a slope estimation parameter ≥0.65. A total of 186 ICE models were possible at the genus level, with 21 meeting quality criteria; and 264 ICE models were developed at the species level, with 32 meeting quality criteria. Algal ICE models will have broad utility in screening environmental hazard assessments, data gap filling in certain regulatory scenarios, and as supplemental information to derive species sensitivity distributions. Environ Toxicol Chem 2016;35:2368-2378. Published 2016 Wiley Periodicals Inc. on behalf of SETAC. This article is a US government work and, as such, is in the public domain in the United States of America.
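An ICE model of this kind is essentially a log-log least-squares regression between paired toxicity values for a surrogate and a predicted taxon. The sketch below applies the acceptance criteria named above (minimum sample size of 3, significant slope, slope ≥0.65) to hypothetical paired EC50 data; it is an illustration, not the authors' code or data.

    import numpy as np
    from scipy import stats

    def fit_ice_model(surrogate_ec50, predicted_ec50, alpha=0.05, min_n=3, min_slope=0.65):
        """Fit a log10-log10 ICE regression and apply simple acceptance criteria."""
        x = np.log10(np.asarray(surrogate_ec50))
        y = np.log10(np.asarray(predicted_ec50))
        if x.size < min_n:
            return None
        res = stats.linregress(x, y)
        accepted = (res.pvalue < alpha) and (res.slope >= min_slope)
        return {"slope": res.slope, "intercept": res.intercept,
                "r2": res.rvalue ** 2, "p": res.pvalue, "accepted": accepted}

    # Hypothetical paired 72-h EC50 values (mg/L) for a surrogate and a predicted algal genus.
    print(fit_ice_model([0.5, 2.0, 8.0, 30.0], [0.3, 1.5, 6.0, 25.0]))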
Numerical modelling of glacial lake outburst floods using physically based dam-breach models
NASA Astrophysics Data System (ADS)
Westoby, M. J.; Brasington, J.; Glasser, N. F.; Hambrey, M. J.; Reynolds, J. M.; Hassan, M. A. A. M.; Lowe, A.
2015-03-01
The instability of moraine-dammed proglacial lakes creates the potential for catastrophic glacial lake outburst floods (GLOFs) in high-mountain regions. In this research, we use a unique combination of numerical dam-breach and two-dimensional hydrodynamic modelling, employed within a generalised likelihood uncertainty estimation (GLUE) framework, to quantify predictive uncertainty in model outputs associated with a reconstruction of the Dig Tsho failure in Nepal. Monte Carlo analysis was used to sample the model parameter space, and morphological descriptors of the moraine breach were used to evaluate model performance. Multiple breach scenarios were produced by differing parameter ensembles associated with a range of breach initiation mechanisms, including overtopping waves and mechanical failure of the dam face. The material roughness coefficient was found to exert a dominant influence over model performance. The downstream routing of scenario-specific breach hydrographs revealed significant differences in the timing and extent of inundation. A GLUE-based methodology for constructing probabilistic maps of inundation extent, flow depth, and hazard is presented and provides a useful tool for communicating uncertainty in GLOF hazard assessment.
Molshatzki, Noa; Drory, Yaacov; Myers, Vicki; Goldbourt, Uri; Benyamini, Yael; Steinberg, David M; Gerber, Yariv
2011-07-01
The relationship of risk factors to outcomes has traditionally been assessed by measures of association such as odds ratio or hazard ratio and their statistical significance from an adjusted model. However, a strong, highly significant association does not guarantee a gain in stratification capacity. Using recently developed model performance indices, we evaluated the incremental discriminatory power of individual and neighborhood socioeconomic status (SES) measures after myocardial infarction (MI). Consecutive patients aged ≤65 years (N=1178) discharged from 8 hospitals in central Israel after incident MI in 1992 to 1993 were followed-up through 2005. A basic model (demographic variables, traditional cardiovascular risk factors, and disease severity indicators) was compared with an extended model including SES measures (education, income, employment, living with a steady partner, and neighborhood SES) in terms of Harrell c statistic, integrated discrimination improvement (IDI), and net reclassification improvement (NRI). During the 13-year follow-up, 326 (28%) patients died. Cox proportional hazards models showed that all SES measures were significantly and independently associated with mortality. Furthermore, compared with the basic model, the extended model yielded substantial gains (all P<0.001) in c statistic (0.723 to 0.757), NRI (15.2%), IDI (5.9%), and relative IDI (32%). Improvement was observed both for sensitivity (classification of events) and specificity (classification of nonevents). This study illustrates the additional insights that can be gained from considering the IDI and NRI measures of model performance and suggests that, among community patients with incident MI, incorporating SES measures into a clinical-based model substantially improves long-term mortality risk prediction.
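For readers unfamiliar with the integrated discrimination improvement, a minimal computation is sketched below; the predicted probabilities from the "basic" and "extended" models are placeholders, and the categorical NRI (which additionally requires risk thresholds) is omitted for brevity.

    import numpy as np

    def integrated_discrimination_improvement(p_basic, p_extended, events):
        """IDI = (gain in mean predicted risk among events) - (gain among non-events)."""
        p_basic, p_extended = np.asarray(p_basic), np.asarray(p_extended)
        events = np.asarray(events, dtype=bool)
        gain_events = p_extended[events].mean() - p_basic[events].mean()
        gain_nonevents = p_extended[~events].mean() - p_basic[~events].mean()
        return gain_events - gain_nonevents

    # Hypothetical predicted long-term mortality risks for five patients.
    print(integrated_discrimination_improvement(
        p_basic=[0.10, 0.20, 0.40, 0.15, 0.30],
        p_extended=[0.08, 0.25, 0.55, 0.10, 0.35],
        events=[0, 1, 1, 0, 1]))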
NASA Astrophysics Data System (ADS)
England, John F.; Julien, Pierre Y.; Velleux, Mark L.
2014-03-01
Traditionally, deterministic flood procedures such as the Probable Maximum Flood have been used for critical infrastructure design. Some Federal agencies now use hydrologic risk analysis to assess potential impacts of extreme events on existing structures such as large dams. Extreme flood hazard estimates and distributions are needed for these efforts, with very low annual exceedance probabilities (≤10⁻⁴) (return periods >10,000 years). An integrated data-modeling hydrologic hazard framework for physically-based extreme flood hazard estimation is presented. Key elements include: (1) a physically-based runoff model (TREX) coupled with a stochastic storm transposition technique; (2) hydrometeorological information from radar and an extreme storm catalog; and (3) streamflow and paleoflood data for independently testing and refining runoff model predictions at internal locations. This new approach requires full integration of collaborative work in hydrometeorology, flood hydrology and paleoflood hydrology. An application on the 12,000 km² Arkansas River watershed in Colorado demonstrates that the size and location of extreme storms are critical factors in the analysis of basin-average rainfall frequency and flood peak distributions. Runoff model results are substantially improved by the availability and use of paleoflood nonexceedance data spanning the past 1000 years at critical watershed locations.
Petersen, M.D.; Pankow, K.L.; Biasi, G.P.; Meremonte, M.
2008-01-01
The February 21, 2008 Wells, NV earthquake (M 6) was felt throughout eastern Nevada, southern Idaho, and western Utah. The town of Wells sustained significant damage to unreinforced masonry buildings. The earthquake occurred in a region of low seismic hazard with little seismicity, low geodetic strain rates, and few mapped faults. The peak horizontal ground acceleration predicted by the USGS National Seismic Hazard Maps is about 0.2 g at 2% probability of exceedance in 50 years, with the contributions coming mostly from the Ruby Mountain fault and background seismicity (M5-7.0). The hazard model predicts that the probability of occurrence of an M>6 event within 50 km of Wells is about 15% in 100 years. Although the earthquake was inside the USArray Transportable Array network, the nearest on-scale recordings of ground motions from the mainshock were too distant to estimate accelerations in town. The University of Nevada Reno, the University of Utah, and the U.S. Geological Survey deployed portable instruments to capture the ground motions from aftershocks of this rare normal-faulting event. Shaking from a M 4.7 aftershock recorded on portable instruments at distances less than 10 km exceeded 0.3 g, and sustained accelerations above 0.1 g lasted for about 5 seconds. For a magnitude 5 earthquake at 10 km distance the NGA equations predict median peak ground accelerations about 0.1 g. Ground motions from normal faulting earthquakes are poorly represented in the ground motion prediction equations. We compare portable and Transportable Array ground-motion recordings with prediction equations. Advanced National Seismic System stations in Utah recorded ground motions 250 km from the mainshock of about 2% g. The maximum ground motion recorded in Salt Lake City was in the center of the basin. We analyze the spatial variability of ground motions (rock vs. soil) and the influence of the Salt Lake Basin in modifying the ground motions. We then compare this data with the September 28, 2004 Parkfield aftershocks to contrast the differences between strike-slip and normal ground motions.
Modelling hazardous surface hoar layers in the mountain snowpack over space and time
NASA Astrophysics Data System (ADS)
Horton, Simon Earl
Surface hoar layers are a common failure layer in hazardous snow slab avalanches. Surface hoar crystals (frost) initially form on the surface of the snow, and once buried can remain a persistent weak layer for weeks or months. Avalanche forecasters have difficulty tracking the spatial distribution and mechanical properties of these layers in mountainous terrain. This thesis presents numerical models and remote sensing methods to track the distribution and properties of surface hoar layers over space and time. The formation of surface hoar was modelled with meteorological data by calculating the downward flux of water vapour from the atmospheric boundary layer. The timing of surface hoar formation and the modelled crystal size was verified at snow study sites throughout western Canada. The major surface hoar layers over several winters were predicted with fair success. Surface hoar formation was modelled over various spatial scales using meteorological data from weather forecast models. The largest surface hoar crystals formed in regions and elevation bands with clear skies, warm and humid air, cold snow surfaces, and light winds. Field surveys measured similar regional-scale patterns in surface hoar distribution. Surface hoar formation patterns on different slope aspects were observed, but were not modelled reliably. Mechanical field tests on buried surface hoar layers found layers increased in shear strength over time, but had persistent high propensity for fracture propagation. Layers with large crystals and layers overlying hard melt-freeze crusts showed greater signs of instability. Buried surface hoar layers were simulated with the snow cover model SNOWPACK and verified with avalanche observations, finding most hazardous surface hoar layers were identified with a structural stability index. Finally, the optical properties of surface hoar crystals were measured in the field with spectral instruments. Large plate-shaped crystals were less reflective at shortwave infrared wavelengths than other common surface snow grains. The methods presented in this thesis were developed into operational products that model hazardous surface hoar layers in western Canada. Further research and refinements could improve avalanche forecasts in regions prone to hazardous surface hoar layers.
Clinical Utility of Five Genetic Variants for Predicting Prostate Cancer Risk and Mortality
Salinas, Claudia A.; Koopmeiners, Joseph S.; Kwon, Erika M.; FitzGerald, Liesel; Lin, Daniel W.; Ostrander, Elaine A.; Feng, Ziding; Stanford, Janet L.
2009-01-01
Background A recent report suggests that the combination of five single-nucleotide polymorphisms (SNPs) at 8q24, 17q12, 17q24.3 and a family history of the disease may predict risk of prostate cancer. The present study tests the performance of these factors in prediction models for prostate cancer risk and prostate cancer-specific mortality. Methods SNPs were genotyped in population-based samples from Caucasians in King County, Washington. Incident cases (n=1308), aged 35–74, were compared to age-matched controls (n=1266) using logistic regression to estimate odds ratios (OR) associated with genotypes and family history. Cox proportional hazards models estimated hazard ratios for prostate cancer-specific mortality according to genotypes. Results The combination of SNP genotypes and family history was significantly associated with prostate cancer risk (p-trend = 1.5 × 10⁻²⁰). Men with ≥ five risk factors had an OR of 4.9 (95% CI 1.6 to 18.5) compared to men with none. However, this combination of factors did not improve the ROC curve after accounting for known risk predictors (i.e., age, serum PSA, family history). Neither the individual nor the combined risk factors were associated with prostate cancer-specific mortality. Conclusion Genotypes for five SNPs plus family history are associated with a significant elevation in risk for prostate cancer and may explain up to 45% of prostate cancer in our population. However, they do not improve prediction models for assessing who is at risk of getting or dying from the disease, once known risk or prognostic factors are taken into account. Thus, this SNP panel may have limited clinical utility. PMID:19058137
A Market-Basket Approach to Predict the Acute Aquatic Toxicity of Munitions and Energetic Materials.
Burgoon, Lyle D
2016-06-01
An ongoing challenge in chemical production, including the production of insensitive munitions and energetics, is the ability to make predictions about potential environmental hazards early in the process. To address this challenge, a quantitative structure-activity relationship model was developed to predict acute fathead minnow toxicity of insensitive munitions and energetic materials. Computational predictive toxicology models like this one may be used to identify and prioritize environmentally safer materials early in their development. The developed model is based on the Apriori market-basket/frequent itemset mining approach to identify probabilistic prediction rules using chemical atom-pairs and the lethality data for 57 compounds from a fathead minnow acute toxicity assay. Lethality data were discretized into four categories based on the Globally Harmonized System of Classification and Labelling of Chemicals. Apriori identified toxicophores for categories two and three. The model classified 32 of the 57 compounds correctly, with a fivefold cross-validation classification rate of 74%. A structure-based surrogate approach was applied to the remaining 25 chemicals, classifying 48% of them correctly. This result is unsurprising as these 25 chemicals were fairly unique within the larger set.
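The frequent-itemset step described above can be reproduced in outline with the mlxtend implementation of Apriori; the one-hot matrix of atom-pair features, the toxicity-category flag, and the minimum support value below are invented for illustration and are not the study's data.

    import pandas as pd
    from mlxtend.frequent_patterns import apriori

    # Rows = compounds; columns = presence/absence of atom-pair features and a GHS category flag.
    items = pd.DataFrame({
        "C-N_pair":   [True, True, False, True, False, True],
        "N-O_pair":   [True, True, True, False, False, True],
        "aromatic_C": [False, True, True, True, True, False],
        "GHS_cat2":   [True, True, False, True, False, True],
    })

    frequent = apriori(items, min_support=0.4, use_colnames=True)
    # Frequent itemsets that contain the toxicity flag plus at least one structural feature
    # are candidate "toxicophore" rules in the sense described above.
    candidates = frequent[frequent["itemsets"].apply(lambda s: "GHS_cat2" in s and len(s) > 1)]
    print(candidates)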
Wind-driven rain and its implications for natural hazard management
NASA Astrophysics Data System (ADS)
Marzen, Miriam; Iserloh, Thomas; de Lima, João L. M. P.; Fister, Wolfgang; Ries, Johannes B.
2017-04-01
Prediction and risk assessment of hydrological extremes are great challenges. According to climate predictions, frequent and violent rainstorms will become a new hazard to several regions in the medium term. Agricultural soils in particular will be severely threatened by the combined action of heavy rainfall and accompanying winds on bare soil surfaces. Because the effect of wind on rain erosion is generally underestimated, conventional soil erosion measurements and modeling approaches lack the information needed to calculate its impact adequately. The experimental-empirical approach presented here shows the powerful impact of wind on the erosive potential of rain. The tested soils had properties characteristic of three different environments: (1) silty loam of semi-arid Mediterranean dry farming and fallow, (2) clayey loam of humid agricultural sites, and (3) cohesionless sandy substrates as found at coasts, dune fields and drift-sand areas. Erosion was found to increase by a factor of 1.3 to 7.1, depending on site characteristics. Complementary tests with a laboratory procedure were used to quantify explicitly the effect of wind on raindrop erosion as well as the influence of substrate, surface structure and slope on particle displacement. These tests confirmed that the impact of wind-driven rain on total erosion rates is of great importance compared to all other tested factors. To adapt soil erosion models successfully to the near-future challenges of rainstorms induced by climate change, wind-driven rain should be placed on the hazard management agenda.
Maritime Tsunami Hazard Assessment in California
NASA Astrophysics Data System (ADS)
Lynett, P. J.; Borrero, J. C.; Wilson, R. I.; Miller, K. M.
2012-12-01
The California tsunami program in cooperation with NOAA and FEMA has begun implementing a plan to increase awareness of tsunami-generated hazards to the maritime community (both ships and harbor infrastructure) through the development of in-harbor hazard maps, offshore safety zones for boater evacuation, and associated guidance for harbors and marinas before, during and following tsunamis. The hope is that the maritime guidance and associated education and outreach program will help save lives and reduce exposure of boats and harbor infrastructure to damage. An important step in this process is to understand the causative mechanism for damage in ports and harbors, and then ensure that the models used to generate hazard maps are able to accurately simulate these processes. Findings will be used to develop maps, guidance documents, and consistent policy recommendations for emergency managers and port authorities and provide information critical to real-time decisions required when responding to tsunami alert notifications. Basin resonance and geometric amplification are two reasonably well understood mechanisms for local magnification of tsunami impact in harbors, and are generally the mechanisms investigated when estimating the tsunami hazard potential in a port or harbor. On the other hand, our understanding of and predictive ability for currents is lacking. When a free surface flow is forced through a geometric constriction, it is readily expected that the enhanced potential gradient will drive strong, possibly unstable currents and the associated turbulent coherent structures such as "jets" and "whirlpools"; a simple example would be tidal flow through an inlet channel. However, these fundamentals have not been quantitatively connected with respect to understanding tsunami hazards in ports and harbors. A plausible explanation for this oversight is the observation that these features are turbulent phenomena with spatial and temporal scales much smaller than that of a typical tsunami. The ability to model and then validate these currents has only recently become available through the evaluation of dozens of eyewitness accounts and hundreds of videos. In this presentation, we will present ongoing work related to the application of such models to quantify the maritime tsunami hazard in select ports and harbors in California. The development of current-based tsunami hazard maps and safe-offshore-depth delineations will be discussed. We will also present an overview of the challenges in modeling tsunami currents, including capture of turbulent dynamics, coupling with tides, and issues with long-duration simulations. This work in California will form the basis for tsunami hazard reduction for all U.S. maritime communities through the National Tsunami Hazard Mitigation Program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pasyanos, Michael E.
Recently developed attenuation models are incorporated into standard one-dimensional (1-D) ground motion prediction equations (GMPEs), effectively making them two-dimensional (2-D) and eliminating the need to create different GMPEs for an increasing number of sub-regions. The model is tested against a data set of over 10,000 recordings from 81 earthquakes in North America. The use of attenuation models in GMPEs improves our ability to fit observed ground motions and should be incorporated into future national hazard maps. The improvement is most significant at higher frequencies and longer distances, which have a greater number of wave cycles. This has implications for the rare high-magnitude earthquakes, which produce potentially damaging ground motions over wide areas and drive the seismic hazards. Furthermore, because the attenuation models can be created using weak ground motions, they could be developed for regions of low seismicity where empirical recordings of ground motions are uncommon and do not span the full range of magnitudes and distances.
Operational Forecasting and Warning systems for Coastal hazards in Korea
NASA Astrophysics Data System (ADS)
Park, Kwang-Soon; Kwon, Jae-Il; Kim, Jin-Ah; Heo, Ki-Young; Jun, Kicheon
2017-04-01
Coastal hazards caused by both natural processes and human activity inflict tremendous social, economic and environmental damage. To mitigate these damages, many countries operate forecasting or warning systems. The Korea Operational Oceanographic System (KOOS) has been developed since 2009 under the leadership of the Korea Institute of Ocean Science and Technology (KIOST) and has been in operation since 2012. KOOS consists of several operational modules of numerical models and real-time observations and produces basic forecast variables such as winds, tides, waves, currents, temperature and salinity. Practical application systems include storm surge, oil spill, and search and rescue prediction models. In particular, abnormal high waves (swell-like high-height waves) have occurred along the east coast of the Korean peninsula during the winter season owing to local meteorological conditions over the East Sea, causing property damage and the loss of human lives. In order to improve wave forecast accuracy, even for very local wave characteristics, a numerical wave modelling system using SWAN was established with a data assimilation module using 4D-EnKF, and sensitivity tests have been conducted. For the prediction of severe waves during typhoon periods and to support decision making on the evacuation of ships, a high-resolution wave forecasting system has been established and calibrated.
Assessment of soil erosion risk in Komering watershed, South Sumatera, using SWAT model
NASA Astrophysics Data System (ADS)
Salsabilla, A.; Kusratmoko, E.
2017-07-01
Changes in watershed land use lead to environmental degradation. Estimating soil erosion loss is often difficult owing to factors such as topography, land use, climate and human activities. This study aims to predict soil erosion hazard and sediment yield using the Soil and Water Assessment Tool (SWAT) hydrological model. SWAT was chosen because it can simulate the processes of interest with limited data. The study area is the Komering watershed (806,001 ha) in South Sumatera Province. Two land management intervention factors were considered: 1) land under agriculture, and 2) land under cultivation, selected in accordance with the regulations of the area's spatial plan. Application of SWAT demonstrated that the model can predict surface runoff, soil erosion loss and sediment yield. The erosion risk for each watershed can be classified and its changes predicted under the scenarios that were arranged. In this paper, we also discuss the relationship between the distribution of erosion risk and the watershed's characteristics in a spatial perspective.
Wang, Qiang; Jia, Qingzhu; Yan, Lihong; Xia, Shuqian; Ma, Peisheng
2014-08-01
The aquatic toxicity value of hazardous contaminants plays an important role in the risk assessment of aquatic ecosystems. The following study presents a stable and accurate structure-toxicity relationship model based on norm indexes for the prediction of the toxicity value (log(LC50)) of 190 diverse narcotic pollutants (96-h LC50 data for Poecilia reticulata). Research indicates that this new model is very efficient and provides satisfactory results. The suggested prediction model is evidenced by R² (squared correlation coefficient) and ARD (average relative difference) values of 0.9376 and 10.45%, respectively, for the training set, and 0.9264 and 13.90% for the testing set. Comparison with reference models demonstrates that this new method, based on the norm indexes proposed in this work, results in significant improvements in both accuracy and stability for predicting aquatic toxicity values of narcotic pollutants. Copyright © 2014 Elsevier Ltd. All rights reserved.
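The two fit statistics reported above are straightforward to compute for any such regression; the sketch below uses one common definition of each, applied to hypothetical observed and predicted log(LC50) values, and makes no claim about the norm-index descriptors themselves.

    import numpy as np

    def r_squared(y_obs, y_pred):
        """Coefficient of determination for observed vs. predicted values."""
        y_obs, y_pred = np.asarray(y_obs), np.asarray(y_pred)
        ss_res = np.sum((y_obs - y_pred) ** 2)
        ss_tot = np.sum((y_obs - y_obs.mean()) ** 2)
        return 1.0 - ss_res / ss_tot

    def average_relative_difference(y_obs, y_pred):
        """ARD as the mean of |pred - obs| / |obs|, expressed as a percentage."""
        y_obs, y_pred = np.asarray(y_obs), np.asarray(y_pred)
        return 100.0 * np.mean(np.abs(y_pred - y_obs) / np.abs(y_obs))

    y_obs = [1.2, 0.8, 2.1, 1.7]          # hypothetical log(LC50) values
    y_pred = [1.1, 0.9, 2.0, 1.9]
    print(r_squared(y_obs, y_pred), average_relative_difference(y_obs, y_pred))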
Wolfensberger, M
1992-01-01
One of the major shortcomings of the traditional TNM system is its limited potential for prognostication. With the development of multifactorial analysis techniques, such as Cox's proportional hazards model, it has become possible to evaluate a large number of prognostic variables simultaneously. Cox's model allows both the identification of prognostically relevant variables and the quantification of their prognostic influence. These characteristics make it a helpful tool for analysis as well as for prognostication. The goal of the present study was to develop a prognostic index for patients with carcinoma of the upper aero-digestive tract which makes use of all prognostically relevant variables. To accomplish this, the survival data of 800 patients with squamous cell carcinoma of the oral cavity, oropharynx, hypopharynx or larynx were analyzed. Sixty-one variables were screened for prognostic significance; of these, only 19 variables (including age, tumor location, T, N and M stages, resection margins, capsular invasion of nodal metastases, and treatment modality) were found to correlate significantly with prognosis. With the help of Cox's equation, a prognostic index (PI) was computed for every combination of prognostic factors. To test the proposed model, the prognostic index was applied to 120 patients with carcinoma of the oral cavity or oropharynx. A comparison of predicted and observed survival showed good overall correlation, although actual survival tended to be better than predicted.
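The prognostic index described here is simply the linear predictor of a fitted Cox model; the sketch below shows how such an index could be computed and used to group patients with the lifelines library, using invented variable names and toy data rather than the study's 19 selected factors.

    import pandas as pd
    from lifelines import CoxPHFitter

    # Hypothetical data: survival time (months), death indicator, and a few covariates.
    df = pd.DataFrame({
        "months": [12, 48, 7, 30, 60, 18, 25, 40],
        "died":   [1, 0, 1, 1, 0, 1, 0, 1],
        "age":    [62, 55, 70, 48, 59, 66, 61, 52],
        "n_stage": [2, 0, 3, 0, 1, 2, 1, 0],
        "positive_margins": [1, 0, 1, 0, 0, 1, 1, 0],
    })

    cph = CoxPHFitter().fit(df, duration_col="months", event_col="died")
    # Prognostic index = sum of beta_i * x_i (lifelines centers covariates at their means).
    df["PI"] = cph.predict_log_partial_hazard(df)
    df["risk_group"] = pd.qcut(df["PI"], q=2, labels=["lower risk", "higher risk"])
    print(df[["PI", "risk_group"]])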
Identification of failure type in corroded pipelines: a bayesian probabilistic approach.
Breton, T; Sanchez-Gheno, J C; Alamilla, J L; Alvarez-Ramirez, J
2010-07-15
Spillover of hazardous materials from transport pipelines can lead to catastrophic events with serious and dangerous environmental impact, potential fire events and human fatalities. The problem is more serious for large pipelines when the construction material is under environmental corrosion conditions, as in the petroleum and gas industries. In this way, predictive models can provide a suitable framework for risk evaluation, maintenance policies and substitution procedure design that should be oriented to reduce increased hazards. This work proposes a bayesian probabilistic approach to identify and predict the type of failure (leakage or rupture) for steel pipelines under realistic corroding conditions. In the first step of the modeling process, the mechanical performance of the pipe is considered for establishing conditions under which either leakage or rupture failure can occur. In the second step, experimental burst tests are used to introduce a mean probabilistic boundary defining a region where the type of failure is uncertain. In the boundary vicinity, the failure discrimination is carried out with a probabilistic model where the events are considered as random variables. In turn, the model parameters are estimated with available experimental data and contrasted with a real catastrophic event, showing good discrimination capacity. The results are discussed in terms of policies oriented to inspection and maintenance of large-size pipelines in the oil and gas industry. 2010 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Blauhut, Veit; Stahl, Kerstin; Stagge, James Howard; Tallaksen, Lena M.; De Stefano, Lucia; Vogt, Jürgen
2016-07-01
Drought is one of the most costly natural hazards in Europe. Due to its complexity, drought risk, meant as the combination of the natural hazard and societal vulnerability, is difficult to define and challenging to detect and predict, as the impacts of drought are very diverse, covering the breadth of socioeconomic and environmental systems. Pan-European maps of drought risk could inform the elaboration of guidelines and policies to address its documented severity and impact across borders. This work tests the capability of commonly applied drought indices and vulnerability factors to predict annual drought impact occurrence for different sectors and macro regions in Europe and combines information on past drought impacts, drought indices, and vulnerability factors into estimates of drought risk at the pan-European scale. This hybrid approach bridges the gap between traditional vulnerability assessment and probabilistic impact prediction in a statistical modelling framework. Multivariable logistic regression was applied to predict the likelihood of impact occurrence on an annual basis for particular impact categories and European macro regions. The results indicate sector- and macro-region-specific sensitivities of drought indices, with the Standardized Precipitation Evapotranspiration Index (SPEI) for a 12-month accumulation period as the overall best hazard predictor. Vulnerability factors have only limited ability to predict drought impacts as single predictors, with information about land use and water resources being the best vulnerability-based predictors. The application of the hybrid approach revealed strong regional and sector-specific differences in drought risk across Europe. The majority of the best predictor combinations rely on a combination of SPEI for shorter and longer accumulation periods, and a combination of information on land use and water resources. The added value of integrating regional vulnerability information with drought risk prediction could be proven. Thus, the study contributes to the overall understanding of drivers of drought impacts, appropriateness of drought indices selection for specific applications, and drought risk assessment.
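As a schematic of the statistical core of this hybrid approach, the sketch below fits a multivariable logistic regression for annual impact occurrence from a drought index and two vulnerability factors; the variable names and values are placeholders, not the study's pan-European data.

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # One row per region-year: SPEI-12, share of irrigated land, a water-resource index,
    # and whether an agricultural drought impact was reported that year.
    data = pd.DataFrame({
        "spei12":        [-2.1, -0.4, -1.6, 0.3, -1.9, 0.8, -1.2, -0.1],
        "irrigated_pct": [15, 15, 40, 40, 5, 5, 25, 25],
        "water_index":   [0.3, 0.3, 0.6, 0.6, 0.2, 0.2, 0.5, 0.5],
        "impact":        [1, 0, 1, 0, 1, 0, 1, 0],
    })

    features = ["spei12", "irrigated_pct", "water_index"]
    model = LogisticRegression().fit(data[features], data["impact"])

    # Likelihood of an impact in a hypothetical region-year with SPEI-12 = -1.5.
    new_year = pd.DataFrame([[-1.5, 20, 0.4]], columns=features)
    print(model.predict_proba(new_year)[0, 1])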
Ruilope, Luis M; Zanchetti, Alberto; Julius, Stevo; McInnes, Gordon T; Segura, Julian; Stolt, Pelle; Hua, Tsushung A; Weber, Michael A; Jamerson, Ken
2007-07-01
Reduced renal function is predictive of poor cardiovascular outcomes but the predictive value of different measures of renal function is uncertain. We compared the value of estimated creatinine clearance, using the Cockcroft-Gault formula, with that of estimated glomerular filtration rate (GFR), using the Modification of Diet in Renal Disease (MDRD) formula, as predictors of cardiovascular outcome in 15 245 high-risk hypertensive participants in the Valsartan Antihypertensive Long-term Use Evaluation (VALUE) trial. For the primary end-point, the three secondary end-points and for all-cause death, outcomes were compared for individuals with baseline estimated creatinine clearance and estimated GFR < 60 ml/min and > or = 60 ml/min using hazard ratios and 95% confidence intervals. Coronary heart disease, left ventricular hypertrophy, age, sex and treatment effects were included as covariates in the model. For each end-point considered, the risk in individuals with poor renal function at baseline was greater than in those with better renal function. Estimated creatinine clearance (Cockcroft-Gault) was significantly predictive only of all-cause death [hazard ratio = 1.223, 95% confidence interval (CI) = 1.076-1.390; P = 0.0021] whereas estimated GFR was predictive of all outcomes except stroke. Hazard ratios (95% CIs) for estimated GFR were: primary cardiac end-point, 1.497 (1.332-1.682), P < 0.0001; myocardial infarction, 1.501 (1.254-1.796), P < 0.0001; congestive heart failure, 1.699 (1.435-2.013), P < 0.0001; stroke, 1.152 (0.952-1.394) P = 0.1452; and all-cause death, 1.231 (1.098-1.380), P = 0.0004. These results indicate that estimated glomerular filtration rate calculated with the MDRD formula is more informative than estimated creatinine clearance (Cockcroft-Gault) in the prediction of cardiovascular outcomes.
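For reference, commonly published forms of the two renal-function estimates compared in this study are sketched below; the coefficients are those usually cited in the literature (the classic Cockcroft-Gault formula and the 4-variable MDRD equation with the 175 constant), stated here as assumptions rather than taken from the trial report.

    def cockcroft_gault(age, weight_kg, scr_mg_dl, female):
        """Estimated creatinine clearance (mL/min), classic Cockcroft-Gault formula."""
        crcl = (140 - age) * weight_kg / (72.0 * scr_mg_dl)
        return crcl * 0.85 if female else crcl

    def mdrd_egfr(age, scr_mg_dl, female, black=False):
        """Estimated GFR (mL/min/1.73 m^2), 4-variable MDRD equation (constant 175)."""
        egfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
        if female:
            egfr *= 0.742
        if black:
            egfr *= 1.212
        return egfr

    # Hypothetical 68-year-old woman, 70 kg, serum creatinine 1.1 mg/dL.
    print(cockcroft_gault(68, 70, 1.1, female=True), mdrd_egfr(68, 1.1, female=True))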
Assessing the ability of operational snow models to predict snowmelt runoff extremes (Invited)
NASA Astrophysics Data System (ADS)
Wood, A. W.; Restrepo, P. J.; Clark, M. P.
2013-12-01
In the western US, the snow accumulation and melt cycle of winter and spring plays a critical role in the region's water management strategies. Consequently, the ability to predict snowmelt runoff at time scales from days to seasons is a key input for decisions in reservoir management, whether for avoiding flood hazards or supporting environmental flows through the scheduling of releases in spring, or for allocating releases for multi-state water distribution in dry seasons of the year (using reservoir systems to provide an invaluable buffer for many sectors against drought). Runoff forecasts thus have important benefits at both wet and dry extremes of the climatological spectrum. The importance of the prediction of the snow cycle motivates an assessment of the strengths and weaknesses of the US's central operational snow model, SNOW17, in contrast to process-modeling alternatives, as they relate to simulating observed snowmelt variability and extremes. To this end, we use a flexible modeling approach that enables an investigation of different choices in model structure, including model physics, parameterization and degree of spatiotemporal discretization. We draw from examples of recent extreme events in western US watersheds and an overall assessment of retrospective model performance to identify fruitful avenues for advancing the modeling basis for the operational prediction of snow-related runoff extremes.
Nomogram Prediction of Overall Survival After Curative Irradiation for Uterine Cervical Cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seo, YoungSeok; Yoo, Seong Yul; Kim, Mi-Sook
Purpose: The purpose of this study was to develop a nomogram capable of predicting the probability of 5-year survival after radical radiotherapy (RT) without chemotherapy for uterine cervical cancer. Methods and Materials: We retrospectively analyzed 549 patients who underwent radical RT for uterine cervical cancer between March 1994 and April 2002 at our institution. Multivariate analysis using Cox proportional hazards regression was performed and this Cox model was used as the basis for the devised nomogram. The model was internally validated for discrimination and calibration by bootstrap resampling. Results: By multivariate regression analysis, the model showed that age, hemoglobin level before RT, Federation Internationale de Gynecologie Obstetrique (FIGO) stage, maximal tumor diameter, lymph node status, and RT dose at Point A significantly predicted overall survival. The survival prediction model demonstrated good calibration and discrimination. The bootstrap-corrected concordance index was 0.67. The predictive ability of the nomogram proved to be superior to FIGO stage (p = 0.01). Conclusions: The devised nomogram offers a significantly better level of discrimination than the FIGO staging system. In particular, it improves predictions of survival probability and could be useful for counseling patients, choosing treatment modalities and schedules, and designing clinical trials. However, before this nomogram is used clinically, it should be externally validated.
Spector, Paul E.
2016-01-01
Background Safety climate, violence prevention climate, and civility climate were independently developed and linked to domain-specific workplace hazards, although all three were designed to promote the physical and psychological safety of workers. Purpose To test domain specificity between conceptually related workplace climates and relevant workplace hazards. Methods Data were collected from 368 persons employed in various industries and descriptive statistics were calculated for all study variables. Correlational and relative weights analyses were used to test for domain specificity. Results The three climate domains were similarly predictive of most workplace hazards, regardless of domain specificity. Discussion This study suggests that the three climate domains share a common higher order construct that may predict relevant workplace hazards better than any of the scales alone. PMID:27110930
Risk management and precaution: insights on the cautious use of evidence.
Hrudey, Steve E; Leiss, William
2003-01-01
Risk management, done well, should be inherently precautionary. Adopting an appropriate degree of precaution with respect to feared health and environmental hazards is fundamental to risk management. The real problem is in deciding how precautionary to be in the face of inevitable uncertainties, demanding that we understand the equally inevitable false positives and false negatives from screening evidence. We consider a framework for detection and judgment of evidence of well-characterized hazards, using the concepts of sensitivity, specificity, positive predictive value, and negative predictive value that are well established for medical diagnosis. Our confidence in predicting the likelihood of a true danger inevitably will be poor for rare hazards because of the predominance of false positives; failing to detect a true danger is less likely because false negatives must be rarer than the danger itself. Because most controversial environmental hazards arise infrequently, this truth poses a dilemma for risk management. PMID:14527835
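The dilemma described, that for rare hazards most positive screening signals are false alarms, follows directly from Bayes' rule; the small calculation below makes it concrete with hypothetical sensitivity, specificity, and prevalence values.

    def predictive_values(sensitivity, specificity, prevalence):
        """Positive and negative predictive values from Bayes' rule."""
        ppv = (sensitivity * prevalence) / (
            sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
        npv = (specificity * (1 - prevalence)) / (
            specificity * (1 - prevalence) + (1 - sensitivity) * prevalence)
        return ppv, npv

    # A good screen (95% sensitive, 95% specific) applied to a hazard present 1 time in 1000:
    ppv, npv = predictive_values(0.95, 0.95, 0.001)
    print(ppv, npv)   # PPV ~ 0.019 (about 98% of positives are false alarms); NPV ~ 0.99995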
Laharz_py: GIS tools for automated mapping of lahar inundation hazard zones
Schilling, Steve P.
2014-01-01
Laharz_py is written in the Python programming language as a suite of tools for use in ArcMap Geographic Information System (GIS). Primarily, Laharz_py is a computational model that uses statistical descriptions of areas inundated by past mass-flow events to forecast areas likely to be inundated by hypothetical future events. The forecasts use physically motivated and statistically calibrated power-law equations, each of the form A = cV^(2/3), relating mass-flow volume (V) to the planimetric or cross-sectional area (A) inundated by an average flow as it descends a given drainage. Calibration of the equations utilizes logarithmic transformation and linear regression to determine the best-fit values of c. The software uses values of V, an algorithm for identifying mass-flow source locations, and digital elevation models of topography to portray forecast hazard zones for lahars, debris flows, or rock avalanches on maps. Laharz_py offers two methods to construct areas of potential inundation for lahars: (1) Selection of a range of plausible V values results in a set of nested hazard zones showing areas likely to be inundated by a range of hypothetical flows; and (2) The user selects a single volume and a confidence interval for the prediction. In either case, Laharz_py calculates the mean expected A and B values from each user-selected value of V. However, for the second case, a single value of V yields two additional results representing the upper and lower values of the confidence interval of prediction. Calculation of these two bounding predictions requires the statistically calibrated prediction equations, a user-specified level of confidence, and t-distribution statistics to calculate the standard error of regression, standard error of the mean, and standard error of prediction. The portrayal of results from these two methods on maps compares the range of inundation areas due to prediction uncertainties with uncertainties in selection of V values. The Open-File Report document contains an explanation of how to install and use the software. The Laharz_py software includes an example data set for Mount Rainier, Washington. The second part of the documentation describes how to use all of the Laharz_py tools in an example dataset at Mount Rainier, Washington.
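The core Laharz calculation is simple to illustrate; the sketch below evaluates the two power-law equations for a range of lahar volumes using coefficient values commonly cited for lahars in the literature (approximately 0.05 for cross-sectional area and 200 for planimetric area), stated here as assumptions rather than values taken from this report.

    def inundation_areas(volume_m3, c_cross=0.05, c_planimetric=200.0):
        """Laharz-style power laws: area = c * V**(2/3), for cross-sectional and planimetric area (m^2)."""
        a_cross = c_cross * volume_m3 ** (2.0 / 3.0)
        b_plan = c_planimetric * volume_m3 ** (2.0 / 3.0)
        return a_cross, b_plan

    # Nested hazard zones for a set of plausible volumes (m^3):
    for v in (1e5, 1e6, 1e7):
        a, b = inundation_areas(v)
        print(f"V = {v:,.0f} m^3 -> cross-section {a:,.0f} m^2, planimetric {b:,.0f} m^2")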
Arai, Yasumichi; Martin-Ruiz, Carmen M; Takayama, Michiyo; Abe, Yukiko; Takebayashi, Toru; Koyasu, Shigeo; Suematsu, Makoto; Hirose, Nobuyoshi; von Zglinicki, Thomas
2015-10-01
To determine the most important drivers of successful ageing at extreme old age, we combined community-based prospective cohorts: Tokyo Oldest Old Survey on Total Health (TOOTH), Tokyo Centenarians Study (TCS) and Japanese Semi-Supercentenarians Study (JSS) comprising 1554 individuals including 684 centenarians and (semi-)supercentenarians, 167 pairs of centenarian offspring and spouses, and 536 community-living very old (85 to 99 years). We combined z scores from multiple biomarkers to describe haematopoiesis, inflammation, lipid and glucose metabolism, liver function, renal function, and cellular senescence domains. In Cox proportional hazard models, inflammation predicted all-cause mortality with hazard ratios (95% CI) 1.89 (1.21 to 2.95) and 1.36 (1.05 to 1.78) in the very old and (semi-)supercentenarians, respectively. In linear forward stepwise models, inflammation predicted capability (10.8% variance explained) and cognition (8.6% variance explained) in (semi-)supercentenarians better than chronologic age or gender. The inflammation score was also lower in centenarian offspring compared to age-matched controls with Δ (95% CI) = −0.795 (−1.436 to −0.154). Centenarians and their offspring were able to maintain long telomeres, but telomere length was not a predictor of successful ageing in centenarians and semi-supercentenarians. We conclude that inflammation is an important malleable driver of ageing up to extreme old age in humans.
NASA Astrophysics Data System (ADS)
Pradhan, Biswajeet
2010-05-01
This paper presents the results of the cross-validation of a multivariate logistic regression model using remote sensing data and GIS for landslide hazard analysis in the Penang, Cameron, and Selangor areas of Malaysia. Landslide locations in the study areas were identified by interpreting aerial photographs and satellite images, supported by field surveys. SPOT 5 and Landsat TM satellite imagery were used to map land cover and the vegetation index, respectively. Maps of topography, soil type, lineaments and land cover were constructed from the spatial datasets. Ten factors that influence landslide occurrence, i.e., slope, aspect, curvature, distance from drainage, lithology, distance from lineaments, soil type, land cover, precipitation, and the normalized difference vegetation index (NDVI), were extracted from the spatial database, and the logistic regression coefficient of each factor was computed. The landslide hazard was then analysed using the multivariate logistic regression coefficients derived not only from the data for the respective area but also from each of the other two areas (nine hazard maps in all) as a cross-validation of the model. For verification, the results of the analyses were compared with the field-verified landslide locations. Among the three cases in which the logistic regression coefficients were applied within the same study area, Selangor based on the Selangor coefficients showed the highest accuracy (94%), whereas Penang based on the Penang coefficients showed the lowest (86%). Similarly, among the six cases in which coefficients were cross-applied to the other two areas, Selangor based on the Cameron coefficients showed the highest prediction accuracy (90%), whereas Penang based on the Selangor coefficients showed the lowest (79%). Qualitatively, the cross-application model yields reasonable results that can be used for preliminary landslide hazard mapping.
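The cross-validation idea, calibrating logistic regression coefficients in one area and applying them to another, can be sketched as follows with scikit-learn; the factor matrices and landslide labels below are synthetic placeholders for the gridded spatial database, and AUC stands in for the success-rate accuracy reported in the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# X_*: per-cell factor matrices (slope, aspect, curvature, ...),
# y_*: 1 where a mapped landslide occurs, 0 otherwise.
# These are hypothetical arrays standing in for the gridded spatial database.
rng = np.random.default_rng(0)
X_penang, y_penang = rng.normal(size=(5000, 10)), rng.integers(0, 2, 5000)
X_selangor, y_selangor = rng.normal(size=(5000, 10)), rng.integers(0, 2, 5000)

# Calibrate the hazard model on one area...
model_penang = LogisticRegression(max_iter=1000).fit(X_penang, y_penang)

# ...and cross-apply its coefficients to the other area to test transferability.
hazard_selangor = model_penang.predict_proba(X_selangor)[:, 1]
print("cross-area AUC:", roc_auc_score(y_selangor, hazard_selangor))
```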
Roy, Andrew K; McCullagh, Brian N; Segurado, Ricardo; McGorrian, Catherine; Keane, Elizabeth; Keaney, John; Fitzgibbon, Maria N; Mahon, Niall G; Murray, Patrick T; Gaine, Sean P
2014-01-01
The detection of elevations in cardiorenal biomarkers, such as troponins, B-type natriuretic peptides (BNPs), and neutrophil gelatinase-associated lipocalins, is associated with poor outcomes in patients hospitalized with acute heart failure. Less is known about the association of these markers with adverse events in chronic right ventricular dysfunction due to pulmonary hypertension, or whether their measurement may improve risk assessment in the outpatient setting. We performed a cohort study of 108 patients attending the National Pulmonary Hypertension Unit in Dublin, Ireland, from 2007 to 2009. Cox proportional hazards analysis and receiver operating characteristic (ROC) curves were used to determine predictors of mortality and hospitalization. Death or hospitalization occurred in 50 patients (46.3%) during the median study period of 4.1 years. Independent predictors of mortality were: 1) decreasing 6-minute walk test distance (6MWT; hazard ratio [HR] 12.8; P < .001); 2) BNP (HR 6.68; P < .001); and 3) highly sensitive troponin (hsTnT; HR 5.48; P < .001). Adjusted hazard analyses remained significant when hsTnT was added to a model with BNP and 6MWT (HR 9.26, 95% CI 3.61-23.79), as did the predictive ability of the model for death and rehospitalization (area under the ROC curve 0.81, 95% CI 0.73-0.90). Detection of troponin using a highly sensitive assay identifies a pulmonary hypertension subgroup with a poorer prognosis. hsTnT may also be used in a risk prediction model to identify higher-risk patients who may require escalation of targeted pulmonary vasodilator therapies and closer clinical surveillance. Copyright © 2014 Elsevier Inc. All rights reserved.
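A minimal sketch of the incremental-value comparison, on synthetic data and with a plain logistic classifier standing in for the adjusted Cox models used in the study: it simply asks whether adding hsTnT to BNP and 6MWT improves the area under the ROC curve.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical outpatient cohort: columns are BNP, 6MWT distance, hsTnT;
# outcome = 1 for death or hospitalisation during follow-up (synthetic values).
rng = np.random.default_rng(1)
X = rng.normal(size=(108, 3))                  # [bnp, six_mwt, hstnt]
outcome = rng.integers(0, 2, 108)

base = LogisticRegression().fit(X[:, :2], outcome)     # BNP + 6MWT only
extended = LogisticRegression().fit(X, outcome)        # + hsTnT

auc_base = roc_auc_score(outcome, base.predict_proba(X[:, :2])[:, 1])
auc_ext = roc_auc_score(outcome, extended.predict_proba(X)[:, 1])
print(f"AUC base {auc_base:.2f} vs extended {auc_ext:.2f}")  # does hsTnT add discrimination?
```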
Schilirò, Luca; Montrasio, Lorella; Scarascia Mugnozza, Gabriele
2016-11-01
In recent years, physically-based numerical models have frequently been used in early-warning systems devoted to rainfall-induced landslide hazard monitoring and mitigation. In this work we therefore describe the potential of SLIP (Shallow Landslides Instability Prediction), a simplified physically-based model for the analysis of shallow landslide occurrence. To test the reliability of the model, a back analysis of recent landslide events that occurred in the study area (located SW of Messina, northeastern Sicily, Italy) on 1 October 2009 was performed. The simulation results were compared with those obtained for the same event using TRIGRS, another well-established model for shallow landslide prediction. Afterwards, a simulation spanning a 2-year period was performed for the same area to evaluate the performance of SLIP as an early-warning tool. The results confirm the good predictive capability of the model in terms of both spatial and temporal prediction of the instability phenomena. On this basis, we propose an operating procedure for the real-time definition of shallow landslide triggering scenarios at the catchment scale, based on the use of SLIP calibrated through a specific multi-methodological approach. Copyright © 2016 Elsevier B.V. All rights reserved.
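SLIP and TRIGRS each have their own specific formulations; as generic background only, the sketch below implements the classical infinite-slope factor of safety on which simplified physically-based shallow-landslide models are typically built, with hypothetical soil parameters.

```python
import numpy as np

def factor_of_safety(slope_deg, cohesion, friction_deg, soil_depth,
                     unit_weight=19.0, water_unit_weight=9.81, wetness=0.5):
    """Infinite-slope factor of safety for a shallow soil cover.

    cohesion [kPa], soil_depth [m], unit weights [kN/m^3];
    wetness = water table height / soil depth (0 dry, 1 saturated).
    FS < 1 flags cells as potentially unstable.
    """
    beta = np.radians(slope_deg)
    phi = np.radians(friction_deg)
    normal_stress = unit_weight * soil_depth * np.cos(beta) ** 2
    pore_pressure = water_unit_weight * wetness * soil_depth * np.cos(beta) ** 2
    shear_stress = unit_weight * soil_depth * np.sin(beta) * np.cos(beta)
    return (cohesion + (normal_stress - pore_pressure) * np.tan(phi)) / shear_stress

# Example: a 35-degree slope with 1 m of soil; FS drops toward 1 as the soil saturates.
for w in (0.2, 0.6, 1.0):
    print(w, round(float(factor_of_safety(35, 5.0, 32, 1.0, wetness=w)), 2))
```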
Advanced Online Survival Analysis Tool for Predictive Modelling in Clinical Data Science
Montes-Torres, Julio; Subirats, José Luis; Ribelles, Nuria; Urda, Daniel; Franco, Leonardo; Alba, Emilio; Jerez, José Manuel
2016-01-01
One of the prevailing applications of machine learning is the use of predictive modelling in clinical survival analysis. In this work, we present our view of the current situation of computer tools for survival analysis, stressing the need to transfer the latest results in the field of machine learning to biomedical researchers. We propose web-based software for survival analysis called OSA (Online Survival Analysis), which has been developed as an open-access and user-friendly option to obtain discrete-time, predictive survival models at the individual level using machine learning techniques, and to perform standard survival analysis. OSA employs an Artificial Neural Network (ANN) based method to produce the predictive survival models. Additionally, the software can easily generate survival and hazard curves with multiple options to personalise the plots, obtain contingency tables from the uploaded data to perform different tests, and fit a Cox regression model from a number of predictor variables. In the Materials and Methods section, we depict the general architecture of the application and introduce the mathematical background of each of the implemented methods. The study concludes with examples of use showing the results obtained with public datasets. PMID:27532883
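The discrete-time, ANN-based survival modelling mentioned here can be illustrated, without reference to OSA's actual implementation, by expanding subjects into person-period records and fitting a small neural network to the per-interval hazard; all data and column names below are synthetic.

```python
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPClassifier

def to_person_period(df, n_intervals, duration_col="time", event_col="event"):
    """Expand each subject into one row per discrete interval survived,
    with the target equal to 1 only in the interval where the event occurs."""
    rows = []
    for _, r in df.iterrows():
        last = int(min(r[duration_col], n_intervals - 1))
        for k in range(last + 1):
            target = int(k == last and r[event_col] == 1)
            rows.append({"interval": k, "x1": r["x1"], "x2": r["x2"], "event": target})
    return pd.DataFrame(rows)

# Hypothetical cohort with two covariates, integer follow-up times and event flags.
rng = np.random.default_rng(2)
cohort = pd.DataFrame({"x1": rng.normal(size=200), "x2": rng.normal(size=200),
                       "time": rng.integers(0, 10, 200), "event": rng.integers(0, 2, 200)})

pp = to_person_period(cohort, n_intervals=10)
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(pp[["interval", "x1", "x2"]], pp["event"])

# Individual-level prediction: hazard per interval, then the discrete-time survival curve.
new_subject = pd.DataFrame({"interval": range(10), "x1": 0.5, "x2": -1.0})
hazard = net.predict_proba(new_subject)[:, 1]
survival = np.cumprod(1 - hazard)
print(survival)
```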
Method for detecting and avoiding flight hazards
NASA Astrophysics Data System (ADS)
von Viebahn, Harro; Schiefele, Jens
1997-06-01
Today's aircraft are equipped with several independent warning and hazard-avoidance systems such as GPWS, TCAS, and weather radar. It is the pilot's task to monitor all of these systems and take the appropriate action when a hazardous situation emerges. The method developed here for detecting and avoiding flight hazards combines all potential external threats to an aircraft into a single system. It is based on a model of the airspace surrounding the aircraft, consisting of discrete volume elements. For each volume element, the threat probability is derived or computed from sensor output, databases, or information provided via datalink. The future position of the own aircraft is predicted using a probability distribution. This approach ensures that all potential positions of the aircraft in the near future are considered, while weighting the most likely flight path. A conflict detection algorithm raises an alarm when the threat probability exceeds a threshold. An escape manoeuvre is then generated taking into account all potential hazards in the vicinity, not only the one that triggered the alarm. The pilot receives visual information about the type, location, and severity of the threat. The algorithm was implemented and tested in a flight simulator environment. The current version comprises traffic, terrain, and obstacle hazard-avoidance functions. Its general formulation allows easy integration of, for example, weather information or airspace restrictions.
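A toy sketch of the volume-element threat model and threshold-based conflict detection, with entirely synthetic probabilities and grid dimensions chosen only for illustration:

```python
import numpy as np

# Hypothetical discretization of the airspace around the aircraft into a 3-D grid
# of volume elements, each holding a threat probability fused from traffic,
# terrain, and obstacle sources (values here are synthetic).
rng = np.random.default_rng(3)
threat = rng.uniform(0.0, 0.2, size=(20, 20, 10))     # P(hazard) per volume element
threat[12, 10, 5] = 0.9                                # e.g. conflicting traffic ahead

# Probability distribution of the aircraft's own near-future position,
# concentrated on the most likely flight path and normalized over the grid.
position = np.zeros_like(threat)
position[10:14, 9:12, 4:7] = 1.0
position /= position.sum()

# Conflict detection: alarm when the expected (probability-weighted) threat
# over the predicted positions exceeds a threshold.
expected_threat = float((threat * position).sum())
ALARM_THRESHOLD = 0.05
if expected_threat > ALARM_THRESHOLD:
    print(f"ALERT: expected threat {expected_threat:.3f} exceeds threshold")
```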
Otgonsuren, Munkhzul; Estep, Michael J; Hossain, Nayeem; Younossi, Elena; Frost, Spencer; Henry, Linda; Hunt, Sharon; Fang, Yun; Goodman, Zachary; Younossi, Zobair M
2014-12-01
Non-alcoholic steatohepatitis (NASH) is the progressive form of non-alcoholic fatty liver disease (NAFLD). A liver biopsy is considered the "gold standard" for diagnosing and staging NASH. Identification of NAFLD/NASH using non-invasive tools is therefore important for intervention. The study aims were to: develop and validate the predictive performance of a non-invasive model (index of NASH [ION]); assess the performance of a recognized non-invasive model (fatty liver index [FLI]) compared with ION for NAFLD diagnosis; and determine which non-invasive model (FLI, ION, or NAFLD fibrosis score [NFS]) performed best in predicting age-adjusted mortality. From the National Health and Nutrition Examination Survey III database, anthropometric, clinical, ultrasound, laboratory, and mortality data were obtained (n = 4458; n = 861 [19.3%] NAFLD by ultrasound) and used to develop the ION model, and then to compare the ION and FLI models for NAFLD diagnosis. For validation and diagnosis of NASH, liver biopsy data were used (n = 152). Age-adjusted Cox proportional hazard modeling estimated the association between each of the three non-invasive tests (FLI, ION, and NFS) and mortality. An FLI threshold score > 60 and an ION threshold score > 22 had similar specificity (FLI = 80% vs ION = 82%) for NAFLD diagnosis; FLI < 30 (80% sensitivity) and ION < 11 (81% sensitivity) excluded NAFLD. An ION score > 50 predicted histological NASH (92% specificity); the FLI model did not predict NASH or mortality. The ION model was best in predicting cardiovascular- and diabetes-related mortality; NFS predicted overall and diabetes-related mortality. The ION model was superior to the FLI model in predicting NASH and mortality. Studies are needed to validate ION. © 2014 Journal of Gastroenterology and Hepatology Foundation and Wiley Publishing Asia Pty Ltd.
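The rule-in/rule-out use of score thresholds (e.g. ION > 50 to predict NASH, ION < 11 to exclude NAFLD) can be illustrated with a short sketch on synthetic data; the score distribution and cut-offs below are placeholders, not the study's data.

```python
import numpy as np

def diagnostic_performance(score, truth, threshold):
    """Sensitivity and specificity of a 'score > threshold' rule against a
    reference standard (1 = disease present, 0 = absent)."""
    predicted = score > threshold
    tp = np.sum(predicted & (truth == 1)); fn = np.sum(~predicted & (truth == 1))
    tn = np.sum(~predicted & (truth == 0)); fp = np.sum(predicted & (truth == 0))
    return tp / (tp + fn), tn / (tn + fp)

# Synthetic stand-in for an ION-like score and biopsy-confirmed disease status.
rng = np.random.default_rng(4)
truth = rng.integers(0, 2, 152)
score = rng.normal(loc=30, scale=20, size=152) + 25 * truth   # diseased subjects score higher

sens_hi, spec_hi = diagnostic_performance(score, truth, threshold=50)
print(f"rule-in  (> 50): sensitivity {sens_hi:.2f}, specificity {spec_hi:.2f}")
sens_lo, spec_lo = diagnostic_performance(score, truth, threshold=11)
print(f"rule-out (<= 11 excludes): sensitivity of '> 11' rule {sens_lo:.2f}")
```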
De Paolis, Annalisa; Bikson, Marom; Nelson, Jeremy T; de Ru, J Alexander; Packer, Mark; Cardoso, Luis
2017-06-01
Hearing is an extremely complex phenomenon, involving a large number of interrelated variables that are difficult to measure in vivo. In order to investigate this process under simplified and well-controlled conditions, models of sound transmission have been developed through many decades of research. The value of modeling the hearing system is not only to explain its normal function and account for experimental and clinical observations, but also to simulate a variety of pathological conditions that lead to hearing damage and hearing loss, and to support the development of auditory implants, effective ear protection, and auditory hazard countermeasures. In this paper, we provide a review of the strategies used to model the auditory function of the external, middle, and inner ear and the micromechanics of the organ of Corti, along with some of the key results obtained from such modeling efforts. Recent analytical and numerical approaches have incorporated the nonlinear behavior of some parameters and structures into their models. Few models of the integrated hearing system exist; in particular, we describe the evolution of the Auditory Hazard Assessment Algorithm for Humans (AHAAH) model, used for prediction of hearing damage due to high-intensity sound pressure. Unlike the AHAAH model, 3D finite element models of the entire hearing system are not yet able to predict auditory risk and threshold shifts. It is expected that both AHAAH and FE models will evolve towards a more accurate assessment of threshold shifts and hearing loss under a variety of stimulus conditions and pathologies. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.