Science.gov

Sample records for accident prediction model

  1. Estimating vehicle roadside encroachment frequency using accident prediction models

    SciTech Connect

    Miaou, S.-P.

    1996-07-01

    The existing data to support the development of roadside encroachment-based accident models are extremely limited and largely outdated. Under the sponsorship of the Federal Highway Administration and the Transportation Research Board, several roadside safety projects have attempted to address this issue by providing rather comprehensive data collection plans and conducting pilot data collection efforts. It is clear from the results of these studies that the required field data collection efforts will be expensive. Furthermore, the validity of any field-collected encroachment data may be questionable because of the technical difficulty of distinguishing intentional from unintentional encroachments. This paper proposes an alternative method for estimating basic roadside encroachment data without collecting them in the field. The method is developed by exploring the probabilistic relationships between a roadside encroachment event and a run-off-the-road event. With some mild assumptions, the method is capable of providing a wide range of basic encroachment data from conventional accident prediction models. To illustrate the concept and use of such a method, some basic encroachment data are estimated for rural two-lane undivided roads. In addition, the estimated encroachment data are compared with the existing collected data. The illustration shows that the method described in this paper can be a viable approach to estimating basic encroachment data without the considerable cost of actually collecting them.
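
    A minimal sketch of the kind of back-calculation the abstract describes, assuming the simplest possible relationship between encroachments and run-off-the-road accidents; the conditional probability used here is invented for illustration, not taken from the paper:

```python
# Hypothetical illustration: backing out roadside encroachment frequency from
# an accident prediction model's output, assuming the simple relationship
#   E[encroachments] = E[run-off-road accidents] / P(accident | encroachment).
# The probability value used below is invented for illustration only.

def estimate_encroachments(accidents_per_mile_year, p_accident_given_encroachment):
    """Estimate encroachment frequency from predicted accident frequency."""
    if not 0 < p_accident_given_encroachment <= 1:
        raise ValueError("conditional probability must be in (0, 1]")
    return accidents_per_mile_year / p_accident_given_encroachment

# Example: 0.5 run-off-road accidents per mile-year, and an assumed 1-in-60
# chance that an encroachment ends in a reported accident
# -> 30 encroachments per mile-year.
print(estimate_encroachments(0.5, 1 / 60))  # 30.0
```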

  2. Accident prediction model for public highway-rail grade crossings.

    PubMed

    Lu, Pan; Tolliver, Denver

    2016-05-01

    Considerable research has focused on roadway accident frequency analysis, but relatively little research has examined safety evaluation at highway-rail grade crossings. Highway-rail grade crossings are critical spatial locations of utmost importance for transportation safety because traffic crashes at highway-rail grade crossings are often catastrophic with serious consequences. The Poisson regression model has been employed to analyze vehicle accident frequency as a good starting point for many years. The most commonly applied variations of Poisson are the negative binomial and zero-inflated Poisson models. These models are used to deal with common crash data issues such as over-dispersion (sample variance is larger than the sample mean) and preponderance of zeros (low sample mean and small sample size). On rare occasions traffic crash data have been shown to be under-dispersed (sample variance is smaller than the sample mean), and traditional distributions such as Poisson or negative binomial cannot handle under-dispersion well. The objective of this study is to investigate and compare various alternate highway-rail grade crossing accident frequency models that can handle the under-dispersion issue. The contributions of the paper are two-fold: (1) the application of probability models that deal with under-dispersion issues and (2) insights obtained regarding vehicle crashes at public highway-rail grade crossings. PMID:26922288
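
    The dispersion diagnostic the abstract turns on can be sketched in a few lines; the crossing counts below are synthetic:

```python
# Sketch: diagnosing over- vs under-dispersion in crash counts before choosing
# between Poisson, negative binomial, or an under-dispersion-capable model.
# The crossing counts below are synthetic, for illustration only.

def dispersion_index(counts):
    """Sample variance / sample mean; >1 suggests over-, <1 under-dispersion."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)  # unbiased sample variance
    return var / mean

over = [0, 0, 0, 1, 0, 7, 0, 2, 0, 9]    # many zeros plus a few large counts
under = [2, 3, 2, 3, 2, 3, 2, 3, 2, 3]   # tightly clustered counts

print(dispersion_index(over) > 1)   # True: over-dispersed
print(dispersion_index(under) < 1)  # True: under-dispersed
```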

  3. Predicting System Accidents with Model Analysis During Hybrid Simulation

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Fleming, Land D.; Throop, David R.

    2002-01-01

    Standard discrete event simulation is commonly used to identify system bottlenecks and starving and blocking conditions in resources and services. The CONFIG hybrid discrete/continuous simulation tool can simulate such conditions in combination with inputs external to the simulation. This provides a means for evaluating the vulnerability to system accidents of a system's design, operating procedures, and control software. System accidents are brought about by complex unexpected interactions among multiple system failures, faulty or misleading sensor data, and inappropriate responses of human operators or software. The flows of resource and product materials play a central role in the hazardous situations that may arise in fluid transport and processing systems. We describe the capabilities of CONFIG for simulation-time linear circuit analysis of fluid flows in the context of model-based hazard analysis. We focus on how CONFIG simulates the static stresses in systems of flow. Unlike other flow-related properties, static stresses (or static potentials) cannot be represented by a set of state equations. The distribution of static stresses is dependent on the specific history of operations performed on a system. We discuss the use of this type of information in hazard analysis of system designs.

  4. Model predictions of wind and turbulence profiles associated with an ensemble of aircraft accidents

    NASA Technical Reports Server (NTRS)

    Williamson, G. G.; Lewellen, W. S.; Teske, M. E.

    1977-01-01

    The feasibility of predicting conditions under which wind/turbulence environments hazardous to aviation operations exist is studied by examining a number of different accidents in detail. A model of turbulent flow in the atmospheric boundary layer is used to reconstruct wind and turbulence profiles which may have existed at low altitudes at the time of the accidents. The predictions are consistent with available flight recorder data, but neither the input boundary conditions nor the flight recorder observations are sufficiently precise for these studies to be interpreted as verification tests of the model predictions.

  5. Traffic safety assessment and development of predictive models for accidents on rural roads in Egypt.

    PubMed

    Abbas, Khaled A

    2004-03-01

    This paper starts by presenting a conceptualization of the indicators, criteria and accident causes that can be used to describe traffic safety. The paper provides an assessment of traffic safety conditions for rural roads in Egypt through a three-step procedure. First, deaths per 100 million vehicle kilometers are obtained and compared for Egypt, three other Arab countries and six of the G-7 countries; Egypt stands out as having a significantly high rate of deaths per 100 million vehicle kilometers. This is followed by compiling available traffic and accident data for five main rural roads in Egypt over a 10-year period (1990-1999), which are used to compute and compare 13 traffic safety indicators for these roads. The third step in assessing traffic safety for rural roads in Egypt presents a detailed analysis of accident causes. The paper moves on to develop a number of statistical models that can be used to predict the expected number of accidents, injuries, fatalities and casualties on rural roads in Egypt. Time series data of traffic and accidents over a 10-year period for the considered roads are utilized in the calibration of these predictive models, and several functional forms are explored and tested in the calibration process. Before proceeding to the development of these models, three ANOVA statistical tests are conducted to establish whether there are any significant differences in the data used for model calibration as a result of differences among the five roads considered.

  6. Combined prediction model of death toll for road traffic accidents based on independent and dependent variables.

    PubMed

    Feng, Zhong-xiang; Lu, Shi-sheng; Zhang, Wei-hua; Zhang, Nan-nan

    2014-01-01

    In order to build a combined model that fits the variation in death-toll data for road traffic accidents, reflects the influence of multiple factors on traffic accidents, and improves prediction accuracy, the Verhulst model was built based on the road traffic accident death toll in China from 2002 to 2011; car ownership, population, GDP, highway freight volume, highway passenger transportation volume, and highway mileage were chosen as the factors for a multivariate linear regression model of the death toll. The two models were then combined into a weighted prediction model, with the weight coefficients calculated by the Shapley value method according to each model's contribution. Finally, the combined model was used to recalculate the death toll from 2002 to 2011, and the combined model was compared with the Verhulst and multivariate linear regression models. The results showed that the new model could not only characterize the death-toll data but also quantify the degree of influence of each factor on the death toll, with high accuracy as well as strong practicability.
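
    A hedged sketch of the weighted-combination idea; the paper derives its weight coefficients with the Shapley value method, whereas this simpler stand-in weights each model inversely to its total absolute error, and all numbers are invented:

```python
# Sketch of a weighted combination of two forecasting models. The paper sets
# the weights with the Shapley value method; as a simpler stand-in, weights
# here are inversely proportional to each model's sum of absolute errors.
# All series below are synthetic, for illustration only.

def combine(pred_a, pred_b, actual):
    """Return (w_a, w_b, combined) where weights reflect relative accuracy."""
    err_a = sum(abs(p - y) for p, y in zip(pred_a, actual))
    err_b = sum(abs(p - y) for p, y in zip(pred_b, actual))
    w_a = err_b / (err_a + err_b)   # the more accurate model gets more weight
    w_b = err_a / (err_a + err_b)
    combined = [w_a * a + w_b * b for a, b in zip(pred_a, pred_b)]
    return w_a, w_b, combined

actual = [104, 98, 92, 89]         # synthetic death-toll series
verhulst = [110, 100, 95, 90]      # hypothetical Verhulst-model fit
regression = [100, 97, 90, 88]     # hypothetical regression fit
w_a, w_b, combined = combine(verhulst, regression, actual)
print(w_a < w_b)  # True: regression has the smaller error, so the larger weight
```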

  7. Combined Prediction Model of Death Toll for Road Traffic Accidents Based on Independent and Dependent Variables

    PubMed Central

    Zhong-xiang, Feng; Shi-sheng, Lu; Wei-hua, Zhang; Nan-nan, Zhang

    2014-01-01

    In order to build a combined model that fits the variation in death-toll data for road traffic accidents, reflects the influence of multiple factors on traffic accidents, and improves prediction accuracy, the Verhulst model was built based on the road traffic accident death toll in China from 2002 to 2011; car ownership, population, GDP, highway freight volume, highway passenger transportation volume, and highway mileage were chosen as the factors for a multivariate linear regression model of the death toll. The two models were then combined into a weighted prediction model, with the weight coefficients calculated by the Shapley value method according to each model's contribution. Finally, the combined model was used to recalculate the death toll from 2002 to 2011, and the combined model was compared with the Verhulst and multivariate linear regression models. The results showed that the new model could not only characterize the death-toll data but also quantify the degree of influence of each factor on the death toll, with high accuracy as well as strong practicability. PMID:25610454

  8. Application of Gray Markov SCGM(1,1)c Model to Prediction of Accidents Deaths in Coal Mining

    PubMed Central

    Lan, Jian-yi; Zhou, Ying

    2014-01-01

    The prediction of mine accidents is the basis of mine safety assessment and decision making. Gray prediction is suitable for system objects with few data, short time spans, and little fluctuation, while Markov chain theory is well suited to forecasting stochastically fluctuating dynamic processes. Analyzing the human-error causes of coal mine accidents and combining the advantages of Gray prediction and Markov theory, an amended Gray Markov SCGM(1,1)c model is proposed. The gray SCGM(1,1)c model is applied to imitate the development tendency of mine safety accidents, the amended model improves prediction accuracy, and Markov prediction is used to forecast the fluctuation along the tendency. Finally, the new model is applied to mine safety accident deaths in China from 1990 to 2010, and coal accident deaths for 2011-2014 are predicted. The results show that the new model not only captures the trend of the death toll from human-error mine accidents but also overcomes the random fluctuation of the data that affects precision, giving it stronger applicability in engineering practice. PMID:27419203
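
    A minimal GM(1,1) gray-prediction sketch for context; the paper's amended SCGM(1,1)c model with its Markov correction is more elaborate, and the declining series below is synthetic:

```python
import math

# Minimal GM(1,1) gray-model sketch (not the paper's amended SCGM(1,1)c with
# its Markov correction). The input series below is synthetic.

def gm11_forecast(x0, steps=1):
    """Fit GM(1,1) to series x0 and forecast the next `steps` values."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]              # accumulated (1-AGO) series
    z = [0.5 * (x1[i] + x1[i - 1]) for i in range(1, n)]  # background values
    y = x0[1:]
    m = n - 1
    # Least squares for the gray model equation x0(k) = -a*z(k) + b
    szz = sum(zi * zi for zi in z)
    szy = sum(zi * yi for zi, yi in zip(z, y))
    c1 = (m * szy - sum(z) * sum(y)) / (m * szz - sum(z) ** 2)
    a, b = -c1, (sum(y) - c1 * sum(z)) / m

    def x1_hat(k):                                        # fitted accumulated series
        return (x0[0] - b / a) * math.exp(-a * (k - 1)) + b / a

    return [x1_hat(n + s + 1) - x1_hat(n + s) for s in range(steps)]

deaths = [100, 90, 81, 72.9, 65.61]        # synthetic, geometrically declining
print(round(gm11_forecast(deaths)[0], 1))  # next value, near 59.0
```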

  9. Compartment model for long-term contamination prediction in deciduous fruit trees after a nuclear accident

    SciTech Connect

    Antonopoulos-Domis, M.; Clouvas, A.; Gagianas, A.

    1990-06-01

    Radiocesium contamination from the Chernobyl accident of different parts (fruits, leaves, and shoots) of selected apricot trees in North Greece was systematically measured in 1987 and 1988. The results are presented and discussed in the framework of a simple compartment model describing the long-term contamination uptake mechanism of deciduous fruit trees after a nuclear accident.

  10. A combined M5P tree and hazard-based duration model for predicting urban freeway traffic accident durations.

    PubMed

    Lin, Lei; Wang, Qian; Sadek, Adel W

    2016-06-01

    The duration of freeway traffic accidents is an important factor affecting traffic congestion, environmental pollution, and secondary accidents. Among previous studies, the M5P algorithm has been shown to be an effective tool for predicting incident duration. M5P builds a tree-based model, like the traditional classification and regression tree (CART) method, but with multiple linear regression models as its leaves. The problem with M5P for accident duration prediction, however, is that whereas linear regression assumes that the conditional distribution of accident durations is normally distributed, the distribution for a "time-to-an-event" is almost certainly nonsymmetrical. A hazard-based duration model (HBDM) is a better choice for this kind of "time-to-event" modeling scenario, and given this, HBDMs have previously been applied to analyze and predict traffic accident durations. Previous research, however, has not yet applied HBDMs for accident duration prediction in association with clustering or classification of the dataset to minimize data heterogeneity. The current paper proposes a novel approach for accident duration prediction that improves on the original M5P tree algorithm through the construction of an M5P-HBDM model, in which the leaves of the M5P tree model are HBDMs instead of linear regression models. Such a model offers the advantage of minimizing data heterogeneity through dataset classification and avoids the need for the incorrect assumption of normality for traffic accident durations. The proposed model was then tested on two freeway accident datasets. For each dataset, the first 500 records were used to train three models: (1) an M5P tree; (2) an HBDM; and (3) the proposed M5P-HBDM; the remainder of the data were used for testing. The results show that the proposed M5P-HBDM managed to identify more significant and meaningful variables than either the M5P tree or the HBDM. 
Moreover, the M5P-HBDM had the lowest overall mean
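
    The normal-versus-skewed point above can be illustrated with a Weibull duration model, a common choice for hazard-based duration models; the shape and scale values below are hypothetical, not estimated from the paper's data:

```python
import math

# Sketch of why a hazard-based duration model suits "time-to-event" data: a
# Weibull distribution is right-skewed, unlike the symmetric normal implied by
# linear regression leaves. Shape and scale values below are illustrative.

def weibull_survival(t, shape, scale):
    """P(duration > t) for a Weibull(shape, scale)."""
    return math.exp(-((t / scale) ** shape))

def weibull_hazard(t, shape, scale):
    """Instantaneous clearance rate at time t."""
    return (shape / scale) * (t / scale) ** (shape - 1)

# With shape > 1 the hazard rises over time: the longer an accident scene has
# lasted, the more likely it is to clear in the next minute.
shape, scale = 1.5, 40.0   # hypothetical values, durations in minutes
print(weibull_survival(30, shape, scale) > weibull_survival(60, shape, scale))
print(weibull_hazard(60, shape, scale) > weibull_hazard(30, shape, scale))
```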

  11. Review the number of accidents in Tehran over a two-year period and prediction of the number of events based on a time-series model

    PubMed Central

    Teymuri, Ghulam Heidar; Sadeghian, Marzieh; Kangavari, Mehdi; Asghari, Mehdi; Madrese, Elham; Abbasinia, Marzieh; Ahmadnezhad, Iman; Gholizadeh, Yavar

    2013-01-01

    Background: One of the significant dangers that threaten people’s lives is the increased risk of accidents. Annually, more than 1.3 million people die around the world as a result of accidents, and it has been estimated that approximately 300 deaths occur daily due to traffic accidents in the world, with more than 50% of that number being people who were not even passengers in the cars. The aim of this study was to examine traffic accidents in Tehran and forecast the number of future accidents using a time-series model. Methods: The study was a cross-sectional study that was conducted in 2011. The sample population was all traffic accidents that caused death and physical injuries in Tehran in 2010 and 2011, as registered in the Tehran Emergency ward. The present study used Minitab 15 software to provide a description of accidents in Tehran for the specified time period as well as those that occurred during April 2012. Results: The results indicated that the average number of daily traffic accidents in Tehran in 2010 was 187 with a standard deviation of 83.6. In 2011, there was an average of 180 daily traffic accidents with a standard deviation of 39.5. One-way analysis of variance indicated that the average number of accidents in the city was different for different months of the year (P < 0.05). Most of the accidents occurred in March, July, August, and September. Thus, more accidents occurred in the summer than in the other seasons. The number of accidents was predicted for April 2012 based on an auto-regressive moving average (ARMA) model. The number of accidents displayed a seasonal trend. The prediction for April 2012 indicated that a total of 4,459 accidents would occur, with a mean of 149 accidents per day during that month. Conclusion: The number of accidents in Tehran displayed a seasonal trend, and the number of accidents was different for different seasons of the year. PMID:26120405
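
    A hedged sketch of one-step time-series forecasting in the spirit of the study's ARMA model, reduced here to a plain AR(1) fitted by least squares on synthetic daily counts:

```python
# Sketch of a one-step time-series forecast with an AR(1) model fitted by
# ordinary least squares (the study used a fuller seasonal ARMA model fitted
# in Minitab). The daily accident counts below are synthetic.

def ar1_fit(series):
    """Estimate x[t] = c + phi*x[t-1] by ordinary least squares."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    phi = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
          sum((a - mx) ** 2 for a in x)
    c = my - phi * mx
    return c, phi

daily = [187, 180, 175, 181, 178, 172, 176, 170]  # synthetic daily counts
c, phi = ar1_fit(daily)
forecast = c + phi * daily[-1]   # one-step-ahead forecast for the next day
print(round(forecast))
```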

  12. Review of models applicable to accident aerosols

    SciTech Connect

    Glissmeyer, J.A.

    1983-07-01

    Estimations of potential airborne-particle releases are essential in safety assessments of nuclear-fuel facilities. This report is a review of aerosol behavior models that have potential applications for predicting aerosol characteristics in compartments containing accident-generated aerosol sources. Such characterization of the accident-generated aerosols is a necessary step toward estimating their eventual release in any accident scenario. Existing aerosol models can predict the size distribution, concentration, and composition of aerosols as they are acted on by ventilation, diffusion, gravity, coagulation, and other phenomena. Models developed in the fields of fluid mechanics, indoor air pollution, and nuclear-reactor accidents are reviewed with this nuclear fuel facility application in mind. The various capabilities of modeling aerosol behavior are tabulated and discussed, and recommendations are made for applying the models to problems of differing complexity.
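
    The simplest compartment balance such aerosol models build on can be written down directly; the room volume, source rate, and removal rates below are illustrative, not from the report:

```python
import math

# Sketch of the simplest compartment aerosol balance the review covers: a
# well-mixed room where ventilation and deposition remove particles,
#   dC/dt = S/V - (lam_vent + lam_dep) * C.
# All parameter values below are illustrative, not taken from the report.

def concentration(t, S, V, lam_vent, lam_dep, C0=0.0):
    """Analytic solution of the one-compartment balance at time t (hours)."""
    lam = lam_vent + lam_dep          # total removal rate, 1/h
    c_ss = S / (V * lam)              # steady-state concentration
    return c_ss + (C0 - c_ss) * math.exp(-lam * t)

# A 100 m^3 room, 1 mg/h source, 2/h ventilation, 0.5/h deposition:
c_inf = concentration(1e9, S=1.0, V=100.0, lam_vent=2.0, lam_dep=0.5)
print(round(c_inf, 4))  # approaches S/(V*lam) = 1/(100*2.5) = 0.004 mg/m^3
```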

  13. Predictions of structural integrity of steam generator tubes under normal operating, accident, and severe accident conditions

    SciTech Connect

    Majumdar, S.

    1996-09-01

    Available models for predicting failure of flawed and unflawed steam generator tubes under normal operating, accident, and severe accident conditions are reviewed. Tests conducted in the past, though limited, tended to show that the earlier flow-stress model for part-through-wall axial cracks overestimated the damaging influence of deep cracks. This observation is confirmed by further tests at high temperatures as well as by finite element analysis. A modified correlation for deep cracks can correct this shortcoming of the model. Recent tests have shown that lateral restraint can significantly increase the failure pressure of tubes with unsymmetrical circumferential cracks. This observation is confirmed by finite element analysis. The rate-independent flow stress models that are successful at low temperatures cannot predict the rate sensitive failure behavior of steam generator tubes at high temperatures. Therefore, a creep rupture model for predicting failure is developed and validated by tests under varying temperature and pressure loading expected during severe accidents.

  14. Do Cognitive Models Help in Predicting the Severity of Posttraumatic Stress Disorder, Phobia, and Depression after Motor Vehicle Accidents? A Prospective Longitudinal Study

    ERIC Educational Resources Information Center

    Ehring, Thomas; Ehlers, Anke; Glucksman, Edward

    2008-01-01

    The study investigated the power of theoretically derived cognitive variables to predict posttraumatic stress disorder (PTSD), travel phobia, and depression following injury in a motor vehicle accident (MVA). MVA survivors (N = 147) were assessed at the emergency department on the day of their accident and 2 weeks, 1 month, 3 months, and 6 months…

  15. Predictions of structural integrity of steam generator tubes under normal operating, accident, and severe accident conditions

    SciTech Connect

    Majumdar, S.

    1997-02-01

    Available models for predicting failure of flawed and unflawed steam generator tubes under normal operating, accident, and severe accident conditions are reviewed. Tests conducted in the past, though limited, tended to show that the earlier flow-stress model for part-through-wall axial cracks overestimated the damaging influence of deep cracks. This observation was confirmed by further tests at high temperatures, as well as by finite-element analysis. A modified correlation for deep cracks can correct this shortcoming of the model. Recent tests have shown that lateral restraint can significantly increase the failure pressure of tubes with unsymmetrical circumferential cracks. This observation was confirmed by finite-element analysis. The rate-independent flow stress models that are successful at low temperatures cannot predict the rate-sensitive failure behavior of steam generator tubes at high temperatures. Therefore, a creep rupture model for predicting failure was developed and validated by tests under various temperature and pressure loadings that can occur during postulated severe accidents.

  16. Assessing causality in multivariate accident models.

    PubMed

    Elvik, Rune

    2011-01-01

    This paper discusses the application of operational criteria of causality to multivariate statistical models developed to identify sources of systematic variation in accident counts, in particular the effects of variables representing safety treatments. Nine criteria of causality serving as the basis for the discussion have been developed. The criteria resemble criteria that have been widely used in epidemiology. To assess whether the coefficients estimated in a multivariate accident prediction model represent causal relationships or are non-causal statistical associations, all criteria of causality are relevant, but the most important criterion is how well a model controls for potentially confounding factors. Examples are given to show how the criteria of causality can be applied to multivariate accident prediction models in order to assess the relationships included in these models. It will often be the case that some of the relationships included in a model can reasonably be treated as causal, whereas for others such an interpretation is less supported. The criteria of causality are indicative only and cannot provide a basis for stringent logical proof of causality.

  17. Prediction of police officers' traffic accident involvement using behavioral observations.

    PubMed

    Gully, S M; Whitney, D J; Vanosdall, F E

    1995-06-01

    The current study used scores on the Driver Performance Measurement (DPM) test and data gathered over four years on accident type and frequency from 47 police officers to provide evidence that cognitive-behavioral observations of driving patterns can lead to predictions of subsequent accident involvement. Results indicate that after controlling for age and experience, scores on the DPM test predicted involvement in preventable accidents but not unpreventable accidents. Implications for future research involving the observation of cognitive-behavioral sequences are discussed. PMID:7639919

  18. Development of a model to predict flow oscillations in low-flow sodium boiling. [Loss-of-Piping Integrity accidents

    SciTech Connect

    Levin, A.E.; Griffith, P.

    1980-04-01

    Tests performed in a small-scale water loop showed that voiding oscillations, similar to those observed in sodium, were present in water as well. An analytical model, appropriate for either sodium or water, was developed and used to describe the water flow behavior. The experimental results indicate that water can be successfully employed as a sodium simulant, and further, that the condensation heat transfer coefficient varies significantly during the growth and collapse of vapor slugs during oscillations. It is this variation, combined with the temperature profile of the unheated zone above the heat source, which determines the oscillatory behavior of the system. The analytical program has produced a model which does a good qualitative job of predicting the flow behavior in the water experiment. The amplitude discrepancies are attributable to experimental uncertainties and model inadequacies. Several parameters (heat transfer coefficient, unheated zone temperature profile, mixing between hot and cold fluids during oscillations) are set by the user. Criteria for the comparison of water and sodium experiments have been developed.

  19. FASTGRASS: A mechanistic model for the prediction of Xe, I, Cs, Te, Ba, and Sr release from nuclear fuel under normal and severe-accident conditions

    SciTech Connect

    Rest, J.; Zawadzki, S.A.

    1992-09-01

    The primary physical/chemical models that form the basis of the FASTGRASS mechanistic computer model for calculating fission-product release from nuclear fuel are described. Calculated results are compared with test data and the major mechanisms affecting the transport of fission products during steady-state and accident conditions are identified.

  20. [Chest modelling and automotive accidents].

    PubMed

    Trosseille, Xavier

    2011-11-01

    Automobile development is increasingly based on mathematical modeling. Accurate models of the human body are now available and serve to develop new means of protection. These models used to consist of rigid, articulated bodies but are now made of several million finite elements. They are now capable of predicting some risks of injury. To develop these models, sophisticated tests were conducted on human cadavers. For example, chest modeling started with material characterization and led to complete validation in the automobile environment. Model personalization, based on medical imaging, will permit studies of the behavior and tolerances of the entire population.

  1. An exploration of the utility of mathematical modeling predicting fatigue from sleep/wake history and circadian phase applied in accident analysis and prevention: the crash of Comair Flight 5191.

    PubMed

    Pruchnicki, Shawn A; Wu, Lora J; Belenky, Gregory

    2011-05-01

    On 27 August 2006 at 0606 eastern daylight time (EDT) at Bluegrass Airport in Lexington, KY (LEX), the flight crew of Comair Flight 5191 inadvertently attempted to take off from a general aviation runway too short for their aircraft. The aircraft crashed, killing 49 of the 50 people on board. To better understand this accident and to aid in preventing similar accidents, we applied mathematical modeling predicting fatigue-related degradation in performance for the Air Traffic Controller on duty at the time of the crash. To provide the necessary input to the model, we attempted to estimate circadian phase and sleep/wake histories for the Captain, First Officer, and Air Traffic Controller. We were able to estimate with confidence the circadian phase for each. We were able to estimate with confidence the sleep/wake history for the Air Traffic Controller, but unable to do this for the Captain and First Officer. Using the sleep/wake history estimates for the Air Traffic Controller as input, the mathematical modeling predicted moderate fatigue-related performance degradation at the time of the crash. This prediction was supported by the presence of what appeared to be fatigue-related behaviors in the Air Traffic Controller during the 30 min prior to and in the minutes after the crash. Our modeling results do not definitively establish fatigue in the Air Traffic Controller as a cause of the accident; rather, they suggest that had he been less fatigued he might have detected Comair Flight 5191's lining up on the wrong runway. We were not able to perform a similar analysis for the Captain and First Officer because we were not able to estimate with confidence their sleep/wake histories. Our estimates of sleep/wake history and circadian rhythm phase for the Air Traffic Controller might generalize to other air traffic controllers and to flight crew operating in the early morning hours at LEX. 
Relative to other times of day, the modeling results suggest an elevated risk of fatigue

  2. PREDICTIVE MODELS

    SciTech Connect

    Ray, R.M.

    1986-12-01

    PREDICTIVE MODELS is a collection of five models - CFPM, CO2PM, ICPM, PFPM, and SFPM - used in the 1982-1984 National Petroleum Council study of enhanced oil recovery (EOR) potential. Each pertains to a specific EOR process designed to squeeze additional oil from aging or spent oil fields. The processes are: 1) chemical flooding, where soap-like surfactants are injected into the reservoir to wash out the oil; 2) carbon dioxide miscible flooding, where carbon dioxide mixes with the lighter hydrocarbons making the oil easier to displace; 3) in-situ combustion, which uses the heat from burning some of the underground oil to thin the product; 4) polymer flooding, where thick, cohesive material is pumped into a reservoir to push the oil through the underground rock; and 5) steamflood, where pressurized steam is injected underground to thin the oil. CFPM, the Chemical Flood Predictive Model, models micellar (surfactant)-polymer floods in reservoirs, which have been previously waterflooded to residual oil saturation. Thus, only true tertiary floods are considered. An option allows a rough estimate of oil recovery by caustic or caustic-polymer processes. CO2PM, the Carbon Dioxide miscible flooding Predictive Model, is applicable to both secondary (mobile oil) and tertiary (residual oil) floods, and to either continuous CO2 injection or water-alternating gas processes. ICPM, the In-situ Combustion Predictive Model, computes the recovery and profitability of an in-situ combustion project from generalized performance predictive algorithms. PFPM, the Polymer Flood Predictive Model, is switch-selectable for either polymer or waterflooding, and an option allows the calculation of the incremental oil recovery and economics of polymer relative to waterflooding. SFPM, the Steamflood Predictive Model, is applicable to the steam drive process, but not to cyclic steam injection (steam soak) processes.

  3. Nuclear Facilities Fire Accident Model

    1999-09-01

    4. NATURE OF PROBLEM SOLVED FIRAC predicts fire-induced flows, thermal and material transport, and radioactive and nonradioactive source terms in a ventilation system. It is designed to predict the radioactive and nonradioactive source terms that lead to gas dynamic, material transport, and heat transfer transients. FIRAC's capabilities are directed toward nuclear fuel cycle facilities and the primary release pathway, the ventilation system. However, it is applicable to other facilities and can be used to model other airflow pathways within a structure. The basic material transport capability of FIRAC includes estimates of entrainment, convection, deposition, and filtration of material. The interrelated effects of filter plugging, heat transfer, and gas dynamics are also simulated. A ventilation system model includes elements such as filters, dampers, ducts, and blowers connected at nodal points to form networks. A zone-type compartment fire model is incorporated to simulate fire-induced transients within a facility. 5. METHOD OF SOLUTION FIRAC solves one-dimensional, lumped-parameter, compressible flow equations by an implicit numerical scheme. The lumped-parameter method is the basic formulation that describes the gas dynamics system. No spatial distribution of parameters is considered in this approach, but an effect of spatial distribution can be approximated by noding. Network theory, using the lumped-parameter method, includes a number of system elements, called branches, joined at certain points, called nodes. Ventilation system components that exhibit flow resistance and inertia, such as dampers, ducts, valves, and filters, and those that exhibit flow potential, such as blowers, are located within the branches of the system. The connection points of branches are nodes for components that have finite volumes, such as rooms, gloveboxes, and plenums, and for boundaries where the volume is practically infinite. 
All internal nodes, therefore, possess some
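
As a toy illustration of the lumped-parameter network solution described above, the sketch below advances one internal node joined to two fixed-pressure boundary nodes through resistive branches with an implicit update. The volumes, resistances, pressures, and the linearized flow law are illustrative assumptions for this sketch, not FIRAC's actual equations.

```python
# One internal node, two resistive branches to fixed boundary nodes,
# advanced with an implicit (backward Euler) update. All values illustrative.
V = 10.0                     # node volume proxy
R1, R2 = 2.0, 4.0            # branch flow resistances (linearized)
P_in, P_out = 120.0, 100.0   # fixed boundary-node pressures
P, dt = 100.0, 1.0           # internal node pressure (state) and time step

for _ in range(300):
    # implicit step: solve P_new = P + dt/V * ((P_in - P_new)/R1 + (P_out - P_new)/R2)
    a = dt / V
    P = (P + a * (P_in / R1 + P_out / R2)) / (1 + a * (1 / R1 + 1 / R2))

# the transient settles where the two branch flows balance
P_ss = (P_in / R1 + P_out / R2) / (1 / R1 + 1 / R2)
assert abs(P - P_ss) < 1e-6
```

The implicit form stays stable for large time steps, which is why codes of this type favor it over explicit updates.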

  4. Modeling accident frequencies as zero-altered probability processes: an empirical inquiry.

    PubMed

    Shankar, V; Milton, J; Mannering, F

    1997-11-01

    This paper presents an empirical inquiry into the applicability of zero-altered counting processes to roadway section accident frequencies. The intent of such a counting process is to distinguish sections of roadway that are truly safe (near zero-accident likelihood) from those that are unsafe but happen to have zero accidents observed during the period of observation (e.g. one year). Traditional applications of Poisson and negative binomial accident frequency models do not account for this distinction and thus can produce biased coefficient estimates because of the preponderance of zero-accident observations. Zero-altered probability processes such as the zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) distributions are examined and proposed for accident frequencies by roadway functional class and geographic location. The findings show that the ZIP structure models are promising and have great flexibility in uncovering processes affecting accident frequencies on roadway sections observed with zero accidents and those with observed accident occurrences. This flexibility allows highway engineers to better isolate design factors that contribute to accident occurrence and also provides additional insight into variables that determine the relative accident likelihoods of safe versus unsafe roadways. The generic nature of the models and the relatively good power of the Vuong specification test used in the non-nested hypotheses of model specifications offer roadway designers the potential to develop a global family of models for accident frequency prediction that can be embedded in a larger safety management system. PMID:9370019
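
The zero-inflated Poisson idea in this abstract can be made concrete numerically: with probability pi a section is "truly safe" (always zero accidents), otherwise counts follow a Poisson law. The parameter values below are illustrative only, not estimates from the paper.

```python
import math

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson pmf: a point mass at zero mixed with Poisson(lam)."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    return pi * (1 if k == 0 else 0) + (1 - pi) * poisson

lam, pi = 2.0, 0.3                       # illustrative rate and safe-state share
mean = (1 - pi) * lam                    # E[Y] = (1 - pi) * lam
var = (1 - pi) * lam * (1 + pi * lam)    # Var[Y] = (1 - pi) * lam * (1 + pi * lam)

assert abs(sum(zip_pmf(k, lam, pi) for k in range(60)) - 1.0) < 1e-9
assert var > mean   # the extra zeros show up as over-dispersion
```

The last assertion illustrates the paper's point: even though the count-generating process is plain Poisson, the mixture of safe and unsafe sections makes the pooled data over-dispersed.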

  5. Investigating accident causation through information network modelling.

    PubMed

    Griffin, T G C; Young, M S; Stanton, N A

    2010-02-01

    Management of risk in complex domains such as aviation relies heavily on post-event investigations, requiring complex approaches to fully understand the integration of multi-causal, multi-agent and multi-linear accident sequences. The Event Analysis of Systemic Teamwork methodology (EAST; Stanton et al. 2008) offers such an approach based on network models. In this paper, we apply EAST to a well-known aviation accident case study, highlighting communication between agents as a central theme and investigating the potential for finding agents who were key to the accident. Ultimately, this work aims to develop a new model based on distributed situation awareness (DSA) to demonstrate that the risk inherent in a complex system is dependent on the information flowing within it. By identifying key agents and information elements, we can propose proactive design strategies to optimize the flow of information and help work towards avoiding aviation accidents. Statement of Relevance: This paper introduces a novel application of a holistic methodology for understanding aviation accidents. Furthermore, it introduces an ongoing project developing a nonlinear and prospective method that centralises distributed situation awareness and communication as themes. The relevance of the findings is discussed in the context of current ergonomic and aviation issues of design, training and human-system interaction. PMID:20099174

  6. Relating aviation service difficulty reports to accident data for safety trend prediction

    SciTech Connect

    Fullwood, R.; Hall, R.; Martinez, G.; Uryasev, S.

    1996-03-13

    This work explores the hypothesis that Service Difficulty Reports (SDR - primarily inspection reports) are related to Accident Incident Data System (AIDS - reports primarily compiled from National Transportation Safety Board (NTSB) accident investigations). This work sought and found relations between equipment operability reported in the SDR and aviation safety reported in AIDS. Equipment is not the only factor in aviation accidents, but it is the factor reported in the SDR. Two approaches to risk analysis were used: (1) The conventional method, in which reporting frequencies are taken from a data base (SDR), and used with an aircraft reliability block diagram model of the critical systems to predict aircraft failure, and (2) Shape analysis that uses the magnitude and shape of the SDR distribution compared with the AIDS distribution to predict aircraft failure.

  7. Predicting and analyzing the trend of traffic accidents deaths in Iran in 2014 and 2015

    PubMed Central

    Mehmandar, Mohammadreza; Soori, Hamid; Mehrabi, Yadolah

    2016-01-01

    Background: Predicting the trend in traffic accidents deaths and its analysis can be a useful tool for planning and policy-making, conducting interventions appropriate to the death trend, and taking the actions required for controlling and preventing future occurrences. Objective: Predicting and analyzing the trend of traffic accidents deaths in Iran in 2014 and 2015. Settings and Design: It was a cross-sectional study. Materials and Methods: All the information related to fatal traffic accidents available in the database of the Iran Legal Medicine Organization from 2004 to the end of 2013 was used to determine the change points (multi-variable time series analysis). Using an autoregressive integrated moving average (ARIMA) model, traffic accidents death rates were predicted for 2014 and 2015, and the predicted rates were compared with the actual values in order to determine the efficiency of the model. Results: The predicted death rate for 2014 was close to the rate actually recorded for that year, while for 2015 a decrease was predicted relative to the previous year (2014) for all months. A maximum value of 41% was also predicted for the months of January and February, 2015. Conclusion: The prediction and analysis of the death trends suggest that proper application and continued use of the interventions conducted in previous years for road safety improvement and motor vehicle safety improvement, particularly training and culture-fostering interventions, as well as the approval and enforcement of deterrent regulations to change organizational behaviors, can significantly decrease the losses caused by traffic accidents. PMID:27308255
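
The ARIMA forecasting workflow the study uses can be sketched by hand for the simplest nontrivial case, ARIMA(1,1,0): difference the series once, fit an AR(1) with intercept by least squares, forecast on the differences, and integrate back to levels. The monthly "death" series below is synthetic and purely illustrative, not the Legal Medicine Organization data.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(120)                                    # ten years of months
deaths = 2200.0 - 5.0 * t + rng.normal(0.0, 40.0, 120)  # synthetic downward trend

d = np.diff(deaths)                                   # the "I" step: difference once
phi = np.cov(d[1:], d[:-1])[0, 1] / np.var(d[:-1], ddof=1)  # AR(1) slope (OLS)
c = d[1:].mean() - phi * d[:-1].mean()                # AR(1) intercept

h, last_d, level = 24, d[-1], deaths[-1]
forecast = []
for _ in range(h):
    last_d = c + phi * last_d    # AR(1) recursion on the differenced series
    level += last_d              # integrate back to the level series
    forecast.append(level)

assert len(forecast) == h
assert forecast[-1] < deaths[-1]   # the fitted downward trend continues
```

Production work would use a fitted (p,d,q) order chosen by diagnostics, as the study's change-point analysis implies, but the mechanics are the same.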

  8. A simplified approach for predicting radionuclide releases from light water reactor accidents

    SciTech Connect

    Nourbakhsh, H.P.; Cazzoli, E.G.

    1990-01-01

    This paper describes a personal computer-based program (GENSOR) that utilizes a simplified time-dependent approach for predicting the radionuclide releases during postulated reactor accidents. This interactive computer program allows the user to generate simplified source terms based on those severe accident attributes that most influence radionuclide release. The parameters entering this simplified model were derived from existing source term data. These data consist mostly of source term code package (STCP) calculations performed in support of Draft NUREG-1150. An illustrative application of the methodology is presented in this paper. 7 refs., 3 figs.

  9. Predicted spatio-temporal dynamics of radiocesium deposited onto forests following the Fukushima nuclear accident

    PubMed Central

    Hashimoto, Shoji; Matsuura, Toshiya; Nanko, Kazuki; Linkov, Igor; Shaw, George; Kaneko, Shinji

    2013-01-01

    The majority of the area contaminated by the Fukushima Dai-ichi nuclear power plant accident is covered by forest. To facilitate effective countermeasure strategies to mitigate forest contamination, we simulated the spatio-temporal dynamics of radiocesium deposited into Japanese forest ecosystems in 2011 using a model that was developed after the Chernobyl accident in 1986. The simulation revealed that the radiocesium inventories in tree and soil surface organic layer components drop rapidly during the first two years after the fallout. Over a period of one to two years, the radiocesium is predicted to move from the tree and surface organic soil to the mineral soil, which eventually becomes the largest radiocesium reservoir within forest ecosystems. Although the uncertainty of our simulations should be considered, the results provide a basis for understanding and anticipating the future dynamics of radiocesium in Japanese forests following the Fukushima accident. PMID:23995073
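
The dynamic this simulation predicts, radiocesium draining from trees and the organic layer into mineral soil, can be sketched as a first-order compartment chain. The rate constants and initial inventory split below are hypothetical placeholders, not the study's fitted values.

```python
import math

# three-compartment sketch: tree -> organic layer -> mineral soil (first order)
k_tree = math.log(2) / 1.0     # tree loss, illustrative half-life ~1 yr
k_org = math.log(2) / 2.0      # organic-layer loss, illustrative half-life ~2 yr
tree, org, mineral = 0.6, 0.4, 0.0   # hypothetical initial inventory fractions

dt = 0.01                      # years; forward Euler over five years
for _ in range(int(5.0 / dt)):
    flux_to = k_tree * tree * dt     # tree -> organic layer
    flux_tm = k_org * org * dt       # organic layer -> mineral soil
    tree -= flux_to
    org += flux_to - flux_tm
    mineral += flux_tm

assert abs(tree + org + mineral - 1.0) < 1e-9   # inventory is conserved
assert mineral > org > tree   # mineral soil ends up the largest reservoir
```

Even this toy chain reproduces the abstract's qualitative finding: the tree and organic-layer pools drop rapidly in the first years, and the mineral soil eventually holds most of the inventory.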

  10. Investigation of Key Factors for Accident Severity at Railroad Grade Crossings by Using a Logit Model

    PubMed Central

    Hu, Shou-Ren; Li, Chin-Shang; Lee, Chi-Kang

    2009-01-01

    Although several studies have used logit or probit models and their variants to fit data of accident severity on roadway segments, few have investigated accident severity at a railroad grade crossing (RGC). Compared to accident risk analysis in terms of accident frequency and severity of a highway system, investigation of the factors contributing to traffic accidents at an RGC may be more complicated because of additional highway–railway interactions. Because the proportional odds assumption was violated when a cumulative logit (proportional odds) model with stepwise variable selection was fitted to ordinal accident severity data collected at 592 RGCs in Taiwan, a generalized logit model with stepwise variable selection was used instead, as suggested by Stokes et al. (2000, p. 249), to identify explanatory variables (factors or covariates) that were significantly associated with the severity of collisions. Hence, the fitted model was used to predict the level of accident severity, given a set of values in the explanatory variables. Number of daily trains, highway separation, number of daily trucks, obstacle detection device, and approaching crossing markings significantly affected levels of accident severity at an RGC (p-value = 0.0009, 0.0008, 0.0112, 0.0017, and 0.0003, respectively). Finally, marginal effect analysis on the number of daily trains and law enforcement camera was conducted to evaluate the effect of the number of daily trains and presence of a law enforcement camera on the potential accident severity. PMID:20161414
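
A generalized (multinomial) logit of the kind used here gives each non-baseline severity level its own linear predictor and turns the scores into probabilities with a softmax. The coefficients and features below are made up for illustration; they are not the fitted values from the Taiwan RGC data.

```python
import math

def severity_probs(x, betas):
    """x: feature vector; betas: one coefficient vector per non-baseline level.
    The baseline severity level has an implicit score of zero."""
    scores = [0.0] + [sum(b * xi for b, xi in zip(beta, x)) for beta in betas]
    z = [math.exp(s) for s in scores]
    total = sum(z)
    return [v / total for v in z]

# hypothetical features: [intercept, daily trains (scaled), obstacle detection present]
x = [1.0, 0.8, 0.0]
betas = [[-0.5, 1.2, -0.9],   # injury vs. property-damage-only (baseline)
         [-1.5, 2.0, -1.4]]   # fatality vs. baseline
p = severity_probs(x, betas)  # [P(baseline), P(injury), P(fatality)]

assert abs(sum(p) - 1.0) < 1e-12
assert all(0.0 < v < 1.0 for v in p)
```

Unlike the proportional odds model the authors rejected, this form lets each covariate affect each severity level differently, which is exactly what relaxing the violated assumption buys.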

  11. Characterizing the Severe Turbulence Environments Associated With Commercial Aviation Accidents: A Real-Time Turbulence Model (RTTM) Designed for the Operational Prediction of Hazardous Aviation Turbulence Environments

    NASA Technical Reports Server (NTRS)

    Kaplan, Michael L.; Lux, Kevin M.; Cetola, Jeffrey D.; Huffman, Allan W.; Riordan, Allen J.; Slusser, Sarah W.; Lin, Yuh-Lang; Charney, Joseph J.; Waight, Kenneth T.

    2004-01-01

    Real-time prediction of environments predisposed to producing moderate-severe aviation turbulence is studied. We describe the numerical model and its postprocessing system, designed to predict environments predisposed to severe aviation turbulence, and present numerous examples of their utility. The numerical model is MASS version 5.13, which is integrated over three different grid matrices in real time on a university workstation in support of NASA Langley Research Center's B-757 turbulence research flight missions. The postprocessing system includes several turbulence-related products, including four turbulence forecasting indices, winds, streamlines, turbulence kinetic energy, and Richardson numbers. Additionally, there are convective products including precipitation, cloud height, cloud mass fluxes, lifted index, and K-index. Furthermore, soundings, sounding parameters, and Froude number plots are also provided. The horizontal cross-section plot products are provided from 16,000 to 46,000 ft in 2000-ft intervals. Products are available every 3 hours at the 60- and 30-km grid intervals and every 1.5 hours at the 15-km grid interval. The model is initialized from the NWS ETA analyses and integrated two times a day.

  12. Relating aviation service difficulty reports to accident data for safety trend prediction

    SciTech Connect

    Fullwood, R.R.; Hall, R.E.; Martinez-Guridi, G.; Uryasev, S.; Sampath, S.G.

    1996-10-01

    A synthetic model of scheduled-commercial U.S. aviation fatalities was constructed from linear combinations of the time-spectra of critical systems reporting, using 5.5 years of Service Difficulty Reports (SDR) and Accident Incident Data System (AIDS) records. This model, used to predict near-future trends in aviation accidents, was tested by using the first 36 months of data to construct the synthetic model, which was then used to predict fatalities during the following eight months. These predictions were tested by comparison with the fatality data. A reliability block diagram (RBD) and third-order extrapolations also were used as predictive models and compared with actuality. The synthetic model was the best predictor because of its use of systems data. Other results of the study are a database of service difficulties for major aviation systems, and a rank ordering of systems according to their contribution to the synthesis. 4 refs., 8 figs., 3 tabs.
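
The "linear combination of time-spectra" construction can be sketched as an ordinary least-squares fit of a fatality series against per-system reporting series. All series below are simulated stand-ins; the SDR/AIDS data are not reproduced in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
months = 36
# hypothetical monthly difficulty-report counts for three critical systems
S = rng.poisson(lam=[20.0, 12.0, 8.0], size=(months, 3)).astype(float)
true_w = np.array([0.05, 0.10, 0.02])          # hypothetical system weights
fatalities = S @ true_w + rng.normal(0.0, 0.1, months)

w, *_ = np.linalg.lstsq(S, fatalities, rcond=None)  # fit the synthetic model
pred = S @ w

assert np.corrcoef(pred, fatalities)[0, 1] > 0.9
assert np.argmax(w) == 1   # recovers which system contributes most
```

The fitted weights also give the rank ordering of systems by contribution that the study reports as a side product.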

  13. An idealized transient model for melt dispersal from reactor cavities during pressurized melt ejection accident scenarios

    SciTech Connect

    Tutu, N.K.

    1991-06-01

    Direct containment heating (DCH) calculations require that the transient rate at which the melt is ejected from the reactor cavity during hypothetical pressurized melt ejection accident scenarios be calculated. However, no models are currently available that can predict the available melt dispersal data from small-scale reactor cavity models. In this report, a simple idealized model of the melt dispersal process within a reactor cavity during a pressurized melt ejection accident scenario is presented. The predictions from the model agree reasonably well with the integral data obtained from the melt dispersal experiments using a small-scale model of the Surry reactor cavity. 17 refs., 15 figs.

  14. A catastrophe-theory model for simulating behavioral accidents

    SciTech Connect

    Souder, W.E.

    1988-01-01

    Behavioral accidents are a particular type of accident. They are caused by inappropriate individual behaviors and faulty reactions. Catastrophe theory is a means for mathematically modeling the dynamic processes that underlie behavioral accidents. Based on a comprehensive database of mining accidents, a computerized catastrophe model has been developed by the Bureau of Mines. This model systematically links individual psychological, group behavioral, and mine environmental variables with other accident-causing factors. It answers several longstanding questions about why some normally safe-behaving persons may spontaneously engage in unsafe acts that have high risks of serious injury. Field tests with the model indicate that it has three important uses: it can be used as an effective training aid for increasing employee safety consciousness; it can be used as a management laboratory for testing decision alternatives and policies; and it can be used to help design the most effective work teams.

  15. Weather and Dispersion Modeling of the Fukushima Daiichi Nuclear Power Station Accident

    NASA Astrophysics Data System (ADS)

    Dunn, Thomas; Businger, Steven

    2014-05-01

    The surface deposition of radioactive material from the accident at the Fukushima Daiichi nuclear power station was investigated for 11 March to 17 March 2011. A coupled weather and dispersion modeling system was developed and simulations of the accident performed using two independent source terms that differed in emission rate and height and in the total amount of radioactive material released. Observations in Japan during the first week of the accident revealed a natural grouping between periods of dry (12-14 March) and wet (15-17 March) weather. The distinct weather regimes served as convenient validation periods for the model predictions. Results show significant differences in the distribution of cumulative surface deposition of 137Cs due to wet and dry removal processes. A comparison of 137Cs deposition predicted by the model with aircraft observations of surface-deposited gamma radiation showed reasonable agreement in surface contamination patterns during the dry phase of the accident for both source terms. It is suggested that this agreement is because of the weather model's ability to simulate the extent and timing of onshore flow associated with a sea breeze circulation that developed around the time of the first reactor explosion. During the wet phase of the accident the pattern is not as well predicted. It is suggested that this discrepancy is because of differences between model predicted and observed precipitation distributions.
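
The dry-versus-wet contrast the study reports can be illustrated with a toy first-order removal model: wet scavenging adds a much faster removal pathway, so a rained-on plume deposits a larger fraction of its activity. The rate constants below are illustrative orders of magnitude, not the model's parameters.

```python
import math

lam_dry = 1e-5            # dry deposition rate, 1/s (illustrative)
lam_wet = 1e-4            # additional wet scavenging rate, 1/s (illustrative)
t = 6 * 3600.0            # six hours of plume travel

# fraction of airborne activity deposited under each regime
dep_dry_only = 1.0 - math.exp(-lam_dry * t)
dep_dry_wet = 1.0 - math.exp(-(lam_dry + lam_wet) * t)

assert dep_dry_wet > dep_dry_only   # wet periods deposit far more activity
```

This is why the 15-17 March wet phase produces a qualitatively different, and harder to predict, deposition pattern than the dry phase: the result becomes sensitive to exactly where and when the model rains.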

  16. Modified ensemble Kalman filter for nuclear accident atmospheric dispersion: prediction improved and source estimated.

    PubMed

    Zhang, X L; Su, G F; Yuan, H Y; Chen, J G; Huang, Q Y

    2014-09-15

    Atmospheric dispersion models play an important role in nuclear power plant accident management. A reliable estimation of radioactive material distribution at short range (about 50 km) is urgently needed for population sheltering and evacuation planning. However, the meteorological data and the source term, which greatly influence the accuracy of the atmospheric dispersion models, are usually poorly known at the early phase of the emergency. In this study, a modified ensemble Kalman filter data assimilation method in conjunction with a Lagrangian puff-model is proposed to simultaneously improve the model prediction and reconstruct the source terms for short range atmospheric dispersion using the off-site environmental monitoring data. Four main uncertainty parameters are considered: source release rate, plume rise height, wind speed and wind direction. Twin experiments show that the method effectively improves the predicted concentration distribution, and the temporal profiles of source release rate and plume rise height are also successfully reconstructed. Moreover, the time lag in the response of the ensemble Kalman filter is shortened. The method proposed here can be a useful tool not only in nuclear power plant accident emergency management but also in other similar situations where hazardous material is released into the atmosphere.
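
A single ensemble Kalman filter analysis step, the core of the method described here, can be sketched for two of the uncertain inputs. The toy observation operator c = q/u (concentration falls with wind speed, rises with release rate) and every number below are illustrative assumptions, not the paper's puff model.

```python
import numpy as np

rng = np.random.default_rng(42)

def obs_op(X):
    return X[:, 0] / X[:, 1]     # hypothetical concentration measurement c = q/u

y_obs = 100.0 / 5.0              # "true" release rate 100, wind 5 -> c = 20
obs_err = 1.0                    # observation error standard deviation
N = 500                          # ensemble size

# lognormal priors keep both poorly known parameters positive
X = np.column_stack([rng.lognormal(np.log(60.0), 0.4, N),   # release rate q
                     rng.lognormal(np.log(4.0), 0.2, N)])   # wind speed u

Hx = obs_op(X)
y_pert = y_obs + rng.normal(0.0, obs_err, N)        # perturbed observations
cov_xy = (X - X.mean(0)).T @ (Hx - Hx.mean()) / (N - 1)
K = cov_xy / (Hx.var(ddof=1) + obs_err ** 2)        # Kalman gain, one per state
Xa = X + np.outer(y_pert - Hx, K)                   # analysis ensemble

prior_misfit = abs(Hx.mean() - y_obs)
post_misfit = abs(obs_op(Xa).mean() - y_obs)
assert post_misfit < prior_misfit   # the update pulls the ensemble toward the data
```

The same update applied sequentially over time is what lets the filter reconstruct the temporal profile of the release rate, as the twin experiments demonstrate.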

  17. The health belief model and use of accident and emergency services by the general public.

    PubMed

    Walsh, M

    1995-10-01

    There has been much debate about the use made by the general public of accident and emergency services. A strong element of professional disapproval has been present, as shown by phrases such as 'inappropriate attender'. This paper examines the reasons why people attend accident and emergency and the factors that delay or accelerate attendance, utilizing a framework espoused in the medical sociology literature, i.e. the Health Belief Model. This predicts that individuals carry out a treatment cost-benefit analysis when making decisions about seeking medical assistance. For this study, a sample of 200 adult, ambulatory accident and emergency patients was interviewed whilst waiting to see the casualty officer. The data demonstrated that much of the medical sociology literature concerning patient consultation with doctors is applicable to the accident and emergency situation, in particular the Health Belief Model. A range of factors was shown to make statistically significant differences to the delay times involved in deciding to attend accident and emergency and the time it took to then subsequently attend and register as a patient. These factors also fit the cost-benefit analysis which the Health Belief Model predicts takes place. Accident and emergency attendance therefore needs to be seen as a logical decision-making process that requires hospitals to provide appropriate services, rather than merely labelling the patients as inappropriate. PMID:8708188

  18. The accident evolution and barrier function (AEB) model applied to incident analysis in the processing industries.

    PubMed

    Svenson, O

    1991-09-01

    This study develops a theoretical model for accident evolutions and how they can be arrested. The model describes the interaction between technical and human-organizational systems which may lead to an accident. The analytic tool provided by the model gives equal weight to both these types of systems and necessitates simultaneous and interactive accident analysis by engineers and human factors specialists. It can be used in predictive safety analyses as well as in post hoc incident analyses. To illustrate this, the AEB model is applied to an incident reported by the nuclear industry in Sweden. In general, application of the model will indicate where and how safety can be improved, and it also raises questions about issues such as the cost, feasibility, and effectiveness of different ways of increasing safety.

  19. Usefulness of high resolution coastal models for operational oil spill forecast: the Full City accident

    NASA Astrophysics Data System (ADS)

    Broström, G.; Carrasco, A.; Hole, L. R.; Dick, S.; Janssen, F.; Mattsson, J.; Berger, S.

    2011-06-01

    Oil spill modeling is considered to be an important decision support system (DeSS) useful for remedial action in case of accidents, as well as for designing the environmental monitoring system that is frequently set up after major accidents. Many accidents take place in coastal areas, implying that low resolution basin scale ocean models are of limited use for predicting the trajectories of an oil spill. In this study, we target the oil spill in connection with the Full City accident on the Norwegian south coast and compare three different oil spill models for the area. The result of the analysis is that all models do a satisfactory job. The "standard" operational model for the area is shown to have severe flaws, but when including an analysis based on a higher resolution model (1.5 km resolution) for the area, the model system shows results that compare well with observations. The study also shows that an ensemble using three different models is useful when predicting/analyzing oil spill in coastal areas.

  20. Estimation of traffic accident costs: a prompted model.

    PubMed

    Hejazi, Rokhshad; Shamsudin, Mad Nasir; Radam, Alias; Rahim, Khalid Abdul; Ibrahim, Zelina Zaitun; Yazdani, Saeed

    2013-01-01

    Traffic accidents are the reason for 25% of unnatural deaths in Iran. The main objective of this study is to find a simple model for estimating economic costs, especially in Islamic countries (like Iran), in a straightforward manner. The model can show the magnitude of traffic accident costs in monetary terms. Data were collected from different sources that included traffic police records, insurance companies and hospitals. The conceptual framework in our study was based on the method of Ayati, who used this method for the estimation of economic costs in Iran. We refined his method to use a minimal set of variables. Our final model has only three variables, all available from insurance companies and police records. Running the model showed that the traffic accident costs were US$2.2 million in 2007 for our case study route.

  1. Usefulness of high resolution coastal models for operational oil spill forecast: the "Full City" accident

    NASA Astrophysics Data System (ADS)

    Broström, G.; Carrasco, A.; Hole, L. R.; Dick, S.; Janssen, F.; Mattsson, J.; Berger, S.

    2011-11-01

    Oil spill modeling is considered to be an important part of a decision support system (DeSS) for oil spill combatment and is useful for remedial action in case of accidents, as well as for designing the environmental monitoring system that is frequently set up after major accidents. Many accidents take place in coastal areas, implying that low resolution basin scale ocean models are of limited use for predicting the trajectories of an oil spill. In this study, we target the oil spill in connection with the "Full City" accident on the Norwegian south coast and compare operational simulations from three different oil spill models for the area. The result of the analysis is that all models do a satisfactory job. The "standard" operational model for the area is shown to have severe flaws, but by applying ocean forcing data of higher resolution (1.5 km resolution), the model system shows results that compare well with observations. The study also shows that an ensemble of results from the three different models is useful when predicting/analyzing oil spill in coastal areas.
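
At their core, the oil spill models compared in these studies advance particle positions with the ocean current plus a fraction of the wind, which is why higher-resolution current forcing changes the predicted trajectories so much. The 3% wind factor is a common rule of thumb, and all velocities below are illustrative.

```python
import numpy as np

dt = 3600.0                         # one-hour step, seconds
steps = 24                          # one day of drift
current = np.array([0.10, 0.05])    # surface current, m/s (east, north), illustrative
wind = np.array([8.0, 0.0])         # 10-m wind, m/s, illustrative
wind_factor = 0.03                  # rule-of-thumb wind-drift fraction

pos = np.zeros(2)                   # particle released at the origin
for _ in range(steps):
    pos += (current + wind_factor * wind) * dt   # simple Lagrangian advection

# after 24 h: (0.10 + 0.03*8.0) m/s east, 0.05 m/s north
assert abs(pos[0] - 29376.0) < 1e-6
assert abs(pos[1] - 4320.0) < 1e-6
```

Running the same advection with currents from several different ocean models, as done here, yields the trajectory ensemble the authors recommend for coastal spills.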

  2. An approach to accidents modeling based on compounds road environments.

    PubMed

    Fernandes, Ana; Neves, Jose

    2013-04-01

    The most common approach to study the influence of certain road features on accidents has been the consideration of uniform road segments characterized by a unique feature. However, when an accident is related to the road infrastructure, its cause is usually not a single characteristic but rather a complex combination of several characteristics. The main objective of this paper is to describe a methodology developed in order to consider the road as a complete environment by using compound road environments, overcoming the limitations inherent in considering only uniform road segments. The methodology consists of: dividing a sample of roads into segments; grouping them into quite homogeneous road environments using cluster analysis; and identifying the influence of skid resistance and texture depth on road accidents in each environment by using generalized linear models. The application of this methodology is demonstrated for eight roads. Based on real data from accidents and road characteristics, three compound road environments were established where the pavement surface properties significantly influence the occurrence of accidents. The results clearly showed that road environments where braking maneuvers are more common, or those with small radii of curvature and high speeds, require higher skid resistance and texture depth as an important contribution to accident prevention. PMID:23376544

  3. Prediction of Severe Accident Counter Current Natural Circulation Flows in the Hot Leg of a Pressurized Water Reactor

    SciTech Connect

    Boyd, Christopher F.

    2006-07-01

    During certain phases of a severe accident in a pressurized water reactor (PWR), the core becomes uncovered and steam carries heat to the steam generators through natural circulation. For PWRs with U-tube steam generators and loop seals filled with water, a counter current flow pattern is established in the hot leg. This flow pattern has been experimentally observed and has been predicted using computational fluid dynamics (CFD). Predictions of severe accident behavior are routinely carried out using severe accident system analysis codes such as SCDAP/RELAP5 or MELCOR. These codes, however, were not developed for predicting the three-dimensional natural circulation flow patterns during this phase of a severe accident. CFD, along with a set of experiments at 1/7th scale, has historically been used to establish the flow rates and mixing for the system analysis tools. One important aspect of these predictions is the counter current flow rate in the nearly 30-inch-diameter hot leg between the reactor vessel and steam generator. This flow rate is strongly related to the amount of energy that can be transported away from the reactor core. This energy transfer plays a significant role in the prediction of core failures as well as potential failures in other reactor coolant system piping. CFD is used to determine the counter current flow rate during a severe accident. Specific sensitivities are completed for parameters such as surge line flow rates, hydrogen content, as well as vessel and steam generator temperatures. The predictions are carried out for the reactor vessel upper plenum, hot leg, a portion of the surge line, and a steam generator blocked off at the outlet plenum. All predictions utilize the FLUENT V6 CFD code. The volumetric flow in the hot leg is assumed to be proportional to the square root of the product of the normalized density difference, gravity, and the hydraulic diameter to the 5th power. CFD is used to determine the proportionality constant in the range
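
The scaling stated in prose in the last sentences can be written compactly as follows, where C is the proportionality constant the CFD runs determine; the symbols here are simply a rendering of the abstract's wording.

```latex
% counter-current volumetric flow in the hot leg:
% Delta rho / rho = normalized density difference, D_h = hydraulic diameter
Q_{cc} \;=\; C \,\sqrt{\frac{\Delta\rho}{\rho}\; g \; D_h^{5}}
```

This buoyancy-driven form explains why the exchange flow is sensitive to hydrogen content and temperature differences, since both enter through the density ratio.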

  4. Quantifying safety benefit of winter road maintenance: accident frequency modeling.

    PubMed

    Usman, Taimur; Fu, Liping; Miranda-Moreno, Luis F

    2010-11-01

    This research presents a modeling approach to investigate the association of the accident frequency during a snow storm event with road surface conditions, visibility and other influencing factors, controlling for traffic exposure. The results show promise for evaluating different maintenance strategies using safety as a performance measure. As part of this approach, this research introduces a road surface condition index as a surrogate for the commonly used friction measure to capture different road surface conditions. Data from various sources, such as weather, road condition observations, traffic counts and accidents, are integrated and used to test three event-based models: the negative binomial (NB) model, the generalized NB model and the zero-inflated NB model. These models are compared for their capability to explain differences in accident frequencies between individual snow storms. It was found that the generalized NB model best fits the data, and is most capable of capturing heterogeneity other than excess zeros. Among the main results, it was found that the road surface condition index had a statistically significant influence on accident occurrence. This research is the first to show the empirical relationship between safety and road surface conditions at a disaggregate level (event-based), making it feasible to quantify the safety benefits of alternative maintenance goals and methods.
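
The NB2 negative binomial underlying all three candidate models has mean mu and variance mu + alpha*mu^2, so alpha > 0 captures the between-storm heterogeneity the study discusses. The sketch below evaluates this pmf directly; the parameter values are illustrative, not fitted.

```python
import math

def nb_pmf(k, mu, alpha):
    """NB2 negative binomial pmf with mean mu and Var = mu + alpha * mu**2."""
    r = 1.0 / alpha                    # NB "size" (inverse dispersion) parameter
    p = r / (r + mu)
    log_pmf = (math.lgamma(k + r) - math.lgamma(r) - math.lgamma(k + 1)
               + r * math.log(p) + k * math.log(1.0 - p))
    return math.exp(log_pmf)

mu, alpha = 3.0, 0.5                   # illustrative per-storm mean and dispersion
var = mu + alpha * mu ** 2             # 7.5 > 3.0: over-dispersed

assert abs(sum(nb_pmf(k, mu, alpha) for k in range(200)) - 1.0) < 1e-9
assert var > mu
```

The generalized NB model the study prefers goes one step further and lets alpha itself vary with covariates, which is how it captures heterogeneity beyond excess zeros.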

  5. The five-factor model, conscientiousness, and driving accident involvement.

    PubMed

    Arthur, W; Graziano, W G

    1996-09-01

    Personality researchers and theorists are approaching consensus on the basic structure and constructs of personality. Despite the apparent consensus on the emergent five-factor model (Goldberg, 1992, 1993), less is known about external correlates of separate factors. This research examined the relations between Conscientiousness, one dimension of the model, and driving accident involvement. Using multiple measures in independent samples drawn from college students (N = 227) and a temporary employment agency (N = 250), the results generally demonstrate a significant inverse relation between Conscientiousness and driving accident involvement; individuals who rate themselves as more self-disciplined, responsible, reliable, and dependable are less likely to be involved in driving accidents than those who rate themselves lower on these attributes. The findings are consistent with other research demonstrating the relations between Conscientiousness and task and job performance. Suggestions for future research are discussed. PMID:8776881

  6. Battery Life Predictive Model

    2009-12-31

    The software consists of a model used to predict battery capacity fade and resistance growth for arbitrary cycling and temperature profiles. It allows the user to extrapolate from experimental data to predict actual cycle life.

  7. Predictive modeling of complications.

    PubMed

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.

  8. Catastrophe model of the accident process, safety climate, and anxiety.

    PubMed

    Guastello, Stephen J; Lynn, Mark

    2014-04-01

    This study aimed (a) to address the evidence for situational specificity in the connection between safety climate and occupational accidents, (b) to resolve similar issues between anxiety and accidents, (c) to expand and develop the concept of safety climate to include a wider range of organizational constructs, and (d) to assess a cusp catastrophe model for occupational accidents in which safety climate and anxiety are treated as bifurcation variables, and environmental hazards as asymmetry variables. Bifurcation, or trigger, variables can have a positive or negative effect on outcomes, depending on the levels of asymmetry, or background, variables. The participants were 1262 production employees of two steel manufacturing facilities who completed a survey that measured safety management, anxiety, subjective danger, dysregulation, stressors, and hazards. Nonlinear regression analyses showed, for this industry, that the accident process was explained by a cusp catastrophe model in which safety management and anxiety were bifurcation variables, and hazards, age, and experience were asymmetry variables. The accuracy of the cusp model (R2 = .72) far exceeded that of the next best log-linear model (R2 = .08) composed from the same survey variables. The results are thought to generalize to any industry where serious injuries could occur, although situationally specific effects should be anticipated as well.
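
    For readers unfamiliar with the cusp formalism, the two-attractor behavior that the bifurcation variables create can be sketched numerically. The snippet below is a minimal illustration of the canonical cusp surface, not Guastello's estimation procedure; here the asymmetry variable stands in for hazards and the bifurcation variable for safety management or anxiety:

```python
import numpy as np

def cusp_equilibria(asymmetry, bifurcation):
    """Real equilibria of the canonical cusp dz/dt = -(z**3 - bifurcation*z - asymmetry).

    Inside the bifurcation set there are three: two attractors (e.g., a
    'safe' mode and an 'accident-prone' mode) separated by a repeller;
    outside it there is a single stable state.
    """
    roots = np.roots([1.0, 0.0, -bifurcation, -asymmetry])
    return sorted(r.real for r in roots if abs(r.imag) < 1e-9)

# Weak bifurcation effect: one stable outcome regardless of hazards.
print(len(cusp_equilibria(asymmetry=0.1, bifurcation=-1.0)))  # 1
# Strong bifurcation effect: two coexisting attractors appear.
print(len(cusp_equilibria(asymmetry=0.1, bifurcation=3.0)))   # 3
```

    This is why a variable such as anxiety can either raise or lower accident risk depending on the background hazard level: it controls which attractor the system falls into rather than shifting outcomes linearly.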

  9. Prediction in Multilevel Models

    ERIC Educational Resources Information Center

    Afshartous, David; de Leeuw, Jan

    2005-01-01

    Multilevel modeling is an increasingly popular technique for analyzing hierarchical data. This article addresses the problem of predicting a future observable y[subscript *j] in the jth group of a hierarchical data set. Three prediction rules are considered and several analytical results on the relative performance of these prediction rules are…

  10. Advanced accident sequence precursor analysis level 2 models

    SciTech Connect

    Galyean, W.J.; Brownson, D.A.; Rempe, J.L.

    1996-03-01

    The U.S. Nuclear Regulatory Commission Accident Sequence Precursor program pursues the ultimate objective of performing risk significant evaluations on operational events (precursors) occurring in commercial nuclear power plants. To achieve this objective, the Office of Nuclear Regulatory Research is supporting the development of simple probabilistic risk assessment models for all commercial nuclear power plants (NPP) in the U.S. Presently, only simple Level 1 plant models have been developed, which estimate core damage frequencies. In order to provide a true risk perspective, the consequences associated with postulated core damage accidents also need to be considered. With the objective of performing risk evaluations in an integrated and consistent manner, a linked event tree approach that propagates the front-end results to the back end was developed. This approach utilizes simple plant models that analyze the response of the NPP containment structure in the context of a core damage accident, estimate the magnitude and timing of a radioactive release to the environment, and calculate the consequences for a given release. Detailed models and results from previous studies, such as the NUREG-1150 study, are used to quantify these simple models. These simple models are then linked to the existing Level 1 models, and are evaluated using the SAPHIRE code. To demonstrate the approach, prototypic models have been developed for a boiling water reactor, Peach Bottom, and a pressurized water reactor, Zion.

  11. Accident sequence precursor analysis level 2/3 model development

    SciTech Connect

    Lui, C.H.; Galyean, W.J.; Brownson, D.A.

    1997-02-01

    The US Nuclear Regulatory Commission's Accident Sequence Precursor (ASP) program currently uses simple Level 1 models to assess the conditional core damage probability for operational events occurring in commercial nuclear power plants (NPP). Since not all accident sequences leading to core damage will result in the same radiological consequences, it is necessary to develop simple Level 2/3 models that can be used to analyze the response of the NPP containment structure in the context of a core damage accident, estimate the magnitude of the resulting radioactive releases to the environment, and calculate the consequences associated with these releases. The simple Level 2/3 model development work was initiated in 1995, and several prototype models have been completed. Once developed, these simple Level 2/3 models are linked to the simple Level 1 models to provide risk perspectives for operational events. This paper describes the methods implemented for the development of these simple Level 2/3 ASP models, and the linkage process to the existing Level 1 models.

  12. Sodium fast reactor gaps analysis of computer codes and models for accident analysis and reactor safety.

    SciTech Connect

    Carbajo, Juan; Jeong, Hae-Yong; Wigeland, Roald; Corradini, Michael; Schmidt, Rodney Cannon; Thomas, Justin; Wei, Tom; Sofu, Tanju; Ludewig, Hans; Tobita, Yoshiharu; Ohshima, Hiroyuki; Serre, Frederic

    2011-06-01

    This report summarizes the results of an expert-opinion elicitation activity designed to qualitatively assess the status and capabilities of currently available computer codes and models for accident analysis and reactor safety calculations of advanced sodium fast reactors, and identify important gaps. The twelve-member panel consisted of representatives from five U.S. National Laboratories (SNL, ANL, INL, ORNL, and BNL), the University of Wisconsin, KAERI, JAEA, and CEA. The major portion of this elicitation activity occurred during a two-day meeting held on Aug. 10-11, 2010 at Argonne National Laboratory. There were two primary objectives of this work: (1) Identify computer codes currently available for SFR accident analysis and reactor safety calculations; and (2) Assess the status and capability of current US computer codes to adequately model the required accident scenarios and associated phenomena, and identify important gaps. During the review, panel members identified over 60 computer codes that are currently available in the international community to perform different aspects of SFR safety analysis for various event scenarios and accident categories. A brief description of each of these codes together with references (when available) is provided. An adaptation of the Predictive Capability Maturity Model (PCMM) for computational modeling and simulation is described for use in this work. The panel's assessment of the available US codes is presented in the form of nine tables, organized into groups of three for each of three risk categories considered: anticipated operational occurrences (AOOs), design basis accidents (DBA), and beyond design basis accidents (BDBA). A set of summary conclusions is drawn from the results obtained. At the highest level, the panel judged that current US code capabilities are adequate for licensing given reasonable margins, but expressed concern that US code development activities had stagnated and that the

  13. Wind power prediction models

    NASA Technical Reports Server (NTRS)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.
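
    The distinction between the interim model (uncorrelated hourly samples) and the stochastic model (samples that also reproduce hour-to-hour correlation) can be sketched as below. The Weibull shape/scale and the AR(1) coefficient are placeholder assumptions, not the Goldstone-fitted statistics:

```python
import math
import numpy as np

rng = np.random.default_rng(42)

def interim_samples(n_hours, shape=2.0, scale=7.0):
    """Uncorrelated hourly wind speeds (m/s) from a Weibull distribution."""
    return scale * rng.weibull(shape, n_hours)

def ar1_samples(n_hours, rho=0.9, shape=2.0, scale=7.0):
    """Correlated samples: an AR(1) Gaussian process is mapped through the
    Weibull quantile function, preserving the marginal speed distribution
    while adding hour-to-hour persistence."""
    z = np.empty(n_hours)
    z[0] = rng.standard_normal()
    for t in range(1, n_hours):
        z[t] = rho * z[t - 1] + math.sqrt(1.0 - rho**2) * rng.standard_normal()
    # Gaussian CDF, then Weibull inverse CDF.
    u = np.array([0.5 * (1.0 + math.erf(v / math.sqrt(2.0))) for v in z])
    u = np.clip(u, 1e-12, 1.0 - 1e-12)
    return scale * (-np.log1p(-u)) ** (1.0 / shape)
```

    Since available wind power scales with the cube of speed, the correlated series matters when estimating the duration of usable power episodes, not just the long-run mean.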

  14. A catastrophe-theory model for simulating behavioral accidents

    SciTech Connect

    Souder, W.E.

    1988-01-01

    Based on a comprehensive data base of mining accidents, a computerized catastrophe model has been developed by the Bureau of Mines. This model systematically links individual psychological, group behavioral, and mine environmental variables with other accident-causing factors. It answers several longstanding questions about why some normally safe-behaving persons may spontaneously engage in unsafe acts that carry high risks of serious injury. Field tests with the model indicate that it has three important uses: it can be used as an effective training aid for increasing employee safety consciousness; it can be used as a management laboratory for testing decision alternatives and policies; and it can be used to help design the most effective work teams.

  15. An application of probabilistic safety assessment methods to model aircraft systems and accidents

    SciTech Connect

    Martinez-Guridi, G.; Hall, R.E.; Fullwood, R.R.

    1998-08-01

    A case study modeling the thrust reverser system (TRS) in the context of the fatal accident of a Boeing 767 is presented to illustrate the application of Probabilistic Safety Assessment methods. A simplified risk model consisting of an event tree with supporting fault trees was developed to represent the progression of the accident, taking into account the interaction between the TRS and the operating crew during the accident, and the findings of the accident investigation. A feasible sequence of events leading to the fatal accident was identified. Several insights about the TRS and the accident were obtained by applying PSA methods. Changes proposed for the TRS also are discussed.
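
    The event tree/fault tree quantification used in such a PSA multiplies an initiating-event frequency by the conditional probabilities along each branch. The sketch below uses hypothetical numbers and branch headings for illustration only, not the values or structure of the 767 thrust-reverser study:

```python
# Hypothetical initiating-event frequency: uncommanded TRS deployment
# in flight (assumed value, for illustration only).
INITIATOR = 1.0e-7  # per flight

P_CREW_RECOVERS = 0.9  # assumed conditional probability of crew recovery
P_RESTOW = 0.5         # assumed conditional probability the reverser restows

def sequence_frequency(crew_ok: bool, restow_ok: bool) -> float:
    """Frequency of one event-tree sequence: initiator times branch probabilities."""
    p = INITIATOR
    p *= P_CREW_RECOVERS if crew_ok else (1.0 - P_CREW_RECOVERS)
    p *= P_RESTOW if restow_ok else (1.0 - P_RESTOW)
    return p

# The four sequences partition the initiator frequency exactly.
total = sum(sequence_frequency(c, r) for c in (True, False) for r in (True, False))
worst = sequence_frequency(False, False)  # the loss-of-control sequence
```

    In a full PSA each branch probability would itself be quantified by a supporting fault tree rather than assumed as a point value.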

  16. System analysis with improved thermo-mechanical fuel rod models for modeling current and advanced LWR materials in accident scenarios

    NASA Astrophysics Data System (ADS)

    Porter, Ian Edward

    A nuclear reactor systems code has the ability to model the system response in an accident scenario based on known initial conditions at the onset of the transient. However, there has been a tendency for these codes to lack the detailed thermo-mechanical fuel rod response models needed for accurate prediction of fuel rod failure. This proposed work will couple today's most widely used steady-state (FRAPCON) and transient (FRAPTRAN) fuel rod models with the systems code TRACE for best-estimate modeling of system response in accident scenarios such as a loss of coolant accident (LOCA). In doing so, code modifications will be made to model gamma heating in LWRs during steady-state and accident conditions and to improve fuel rod thermal/mechanical analysis by allowing axial nodalization of burnup-dependent phenomena such as swelling, cladding creep and oxidation. With the ability to model both burnup-dependent parameters and transient fuel rod response, a fuel dispersal study will be conducted using a hypothetical accident scenario under both PWR and BWR conditions to determine the amount of fuel dispersed under varying conditions. Because the fuel fragmentation size and internal rod pressure are both dependent on burnup, this analysis will be conducted at beginning, middle and end of cycle to examine the effects that cycle time can play on fuel rod failure and dispersal. Current fuel rod and system codes used by the Nuclear Regulatory Commission (NRC) are compilations of legacy codes with only commonly used light water reactor materials, Uranium Dioxide (UO2), Mixed Oxide (U/PuO2) and zirconium alloys. However, the accidents at Fukushima Daiichi and Three Mile Island have shown the need for exploration into advanced materials possessing improved accident tolerance. This work looks to further modify the NRC codes to include silicon carbide (SiC), an advanced cladding material proposed by current DOE funded research on accident tolerant fuels (ATF). Several

  17. Predicting pediatric posttraumatic stress disorder after road traffic accidents: the role of parental psychopathology.

    PubMed

    Kolaitis, Gerasimos; Giannakopoulos, George; Liakopoulou, Magda; Pervanidou, Panagiota; Charitaki, Stella; Mihas, Constantinos; Ferentinos, Spyros; Papassotiriou, Ioannis; Chrousos, George P; Tsiantis, John

    2011-08-01

    This study examined prospectively the role of parental psychopathology among other predictors in the development and persistence of posttraumatic stress disorder (PTSD) in 57 hospitalized youths aged 7-18 years immediately after a road traffic accident and 1 and 6 months later. Self report questionnaires and semistructured diagnostic interviews were used in all 3 assessments. Neuroendocrine evaluation was performed at the initial assessment. Maternal PTSD symptomatology predicted the development of children's PTSD 1 month after the event, OR = 6.99, 95% CI [1.049, 45.725]; the persistence of PTSD 6 months later was predicted by the child's increased evening salivary cortisol concentrations within 24 hours of the accident, OR = 1.006, 95% CI [1.001, 1.011]. Evaluation of both biological and psychosocial predictors that increase the risk for later development and maintenance of PTSD is important for appropriate early prevention and treatment. PMID:21812037

  18. Modelling the oil spill track from Prestige-Nassau accident

    NASA Astrophysics Data System (ADS)

    Montero, P.; Leitao, P.; Penabad, E.; Balseiro, C. F.; Carracedo, P.; Braunschweig, F.; Fernandes, R.; Gomez, B.; Perez-Munuzuri, V.; Neves, R.

    2003-04-01

    On November 13th 2002, the tanker Prestige-Nassau sent an SOS signal. The hull of the ship was damaged, producing an oil spill off the Galician coast (NW Spain). The damaged ship headed north, spilling more fuel and affecting the western Galician coast, and then turned south. At this first stage of the accident, the ship spilled around 10000 Tm on the 19th over the Galician Bank, 133 NM off the Galician coast. From the very beginning, monitoring and forecasting of the first slick was carried out. Afterwards, since southwesterly winds are frequent in wintertime, the slick from the initial spill started to move towards the Galician coast. This drift movement was followed by overflights. With the aim of forecasting the location and date of arrival at the coast, simulations with two different models were performed. The first was a very simple drift model forced with the surface winds generated by the ARPS operational model (1) at MeteoGalicia (the regional weather forecast service). The second was a more complex hydrodynamic model, MOHID2000 (2,3), developed by the MARETEC group (Instituto Superior Técnico de Lisboa) in collaboration with GFNL (Grupo de Física Non Lineal, Universidade de Santiago de Compostela). On November 28th, some tarballs appeared south of the main slick. These observations could be explained by taking into account the below-surface water movement following Ekman dynamics. New simulations were performed with the aim of better understanding the physics underlying these observations. Agreement between observations and simulations was achieved. We performed simulations with and without the slope current previously calculated by other authors, showing that this current can only introduce subtle differences in the slick's arrival point at the coast, with wind as the primary forcing. (1) A two-dimensional particle tracking model for pollution dispersion in A Coruña and Vigo Rias (NW Spain). M. Gómez-Gesteira, P. Montero, R

  19. WHEN MODEL MEETS REALITY – A REVIEW OF SPAR LEVEL 2 MODEL AGAINST FUKUSHIMA ACCIDENT

    SciTech Connect

    Zhegang Ma

    2013-09-01

    The Standardized Plant Analysis Risk (SPAR) models are a set of probabilistic risk assessment (PRA) models used by the Nuclear Regulatory Commission (NRC) to evaluate the risk of operations at U.S. nuclear power plants and provide inputs to the risk-informed regulatory process. A small number of SPAR Level 2 models have been developed, mostly for feasibility studies. They extend the Level 1 models to include containment systems, group plant damage states, and model containment phenomenology and accident progression in containment event trees. A severe earthquake and tsunami hit the eastern coast of Japan in March 2011 and caused significant damage to the reactors at the Fukushima Daiichi site. Station blackout (SBO), core damage, containment damage, hydrogen explosions, and intense radioactivity releases, which had previously been analyzed as postulated accident progressions in PRA models, occurred to varying degrees at the multi-unit Fukushima Daiichi site. This paper reviews and compares a typical BWR SPAR Level 2 model with the “real” accident progressions and sequences that occurred in Fukushima Daiichi Units 1, 2, and 3. It shows that the SPAR Level 2 model is a robust PRA model that can reasonably describe the accident progression of a real and complicated nuclear accident. On the other hand, the comparison shows that the SPAR model could be enhanced by incorporating some accident characteristics for a better representation of severe accident progression.

  20. Markov Model of Severe Accident Progression and Management

    SciTech Connect

    Bari, R.A.; Cheng, L.; Cuadra,A.; Ginsberg,T.; Lehner,J.; Martinez-Guridi,G.; Mubayi,V.; Pratt,W.T.; Yue, M.

    2012-06-25

    The earthquake and tsunami that hit the nuclear power plants at the Fukushima Daiichi site in March 2011 led to extensive fuel damage, including possible fuel melting, slumping, and relocation at the affected reactors. A so-called feed-and-bleed mode of reactor cooling was initially established to remove decay heat. The plan was to eventually switch over to a recirculation cooling system. Failure of feed and bleed was a possibility during the interim period. Furthermore, even if recirculation was established, there was a possibility of its subsequent failure. Decay heat has to be sufficiently removed to prevent further core degradation. To understand the possible evolution of the accident conditions and to have a tool for potential future hypothetical evaluations of accidents at other nuclear facilities, a Markov model of the state of the reactors was constructed in the immediate aftermath of the accident and was executed under different assumptions of potential future challenges. This work was performed at the request of the U.S. Department of Energy to explore 'what-if' scenarios in the immediate aftermath of the accident. The work began in mid-March and continued until mid-May 2011. The analysis had the following goals: (1) To provide an overall framework for describing possible future states of the damaged reactors; (2) To permit an impact analysis of 'what-if' scenarios that could lead to more severe outcomes; (3) To determine approximate probabilities of alternative end-states under various assumptions about failure and repair times of cooling systems; (4) To infer the reliability requirements of closed loop cooling systems needed to achieve stable core end-states and (5) To establish the importance for the results of the various cooling system and physical phenomenological parameters via sensitivity calculations.

  1. Atmospheric prediction model survey

    NASA Technical Reports Server (NTRS)

    Wellck, R. E.

    1976-01-01

    As part of the SEASAT Satellite program of NASA, a survey of representative primitive equation atmospheric prediction models that exist in the world today was written for the Jet Propulsion Laboratory. Seventeen models developed by eleven different operational and research centers throughout the world are included in the survey. The surveys are tutorial in nature describing the features of the various models in a systematic manner.

  2. Advanced accident sequence precursor analysis level 1 models

    SciTech Connect

    Sattison, M.B.; Thatcher, T.A.; Knudsen, J.K.; Schroeder, J.A.; Siu, N.O.

    1996-03-01

    INEL has been involved in the development of plant-specific Accident Sequence Precursor (ASP) models for the past two years. These models were developed for use with the SAPHIRE suite of PRA computer codes. They contained event tree/linked fault tree Level 1 risk models for the following initiating events: general transient, loss-of-offsite-power, steam generator tube rupture, small loss-of-coolant-accident, and anticipated transient without scram. Early in 1995 the ASP models were revised based on review comments from the NRC and an independent peer review. These models were released as Revision 1. The Office of Nuclear Regulatory Research has sponsored several projects at the INEL this fiscal year to further enhance the capabilities of the ASP models. The Revision 2 models incorporate more detailed plant information concerning plant response to station blackout conditions, information on battery life, and other unique features gleaned from an Office of Nuclear Reactor Regulation quick review of the Individual Plant Examination submittals. These models are currently being delivered to the NRC as they are completed. A related project is a feasibility study and model development of low power/shutdown (LP/SD) and external event extensions to the ASP models. This project will establish criteria for selection of LP/SD and external initiator operational events for analysis within the ASP program. Prototype models for each pertinent initiating event (loss of shutdown cooling, loss of inventory control, fire, flood, seismic, etc.) will be developed. A third project concerns development of enhancements to SAPHIRE. In relation to the ASP program, a new SAPHIRE module, GEM, was developed as a specific user interface for performing ASP evaluations. This module greatly simplifies the analysis process for determining the conditional core damage probability for a given combination of initiating events and equipment failures or degradations.

  3. Markov Model of Accident Progression at Fukushima Daiichi

    SciTech Connect

    Cuadra A.; Bari R.; Cheng, L-Y; Ginsberg, T.; Lehner, J.; Martinez-Guridi, G.; Mubayi, V.; Pratt, T.; Yue, M.

    2012-11-11

    On March 11, 2011, a magnitude 9.0 earthquake followed by a tsunami caused loss of offsite power and disabled the emergency diesel generators, leading to a prolonged station blackout at the Fukushima Daiichi site. After successful reactor trip for all operating reactors, the inability to remove decay heat over an extended period led to boil-off of the water inventory and fuel uncovery in Units 1-3. A significant amount of metal-water reaction occurred, as evidenced by the quantities of hydrogen generated that led to hydrogen explosions in the auxiliary buildings of the Units 1 & 3, and in the de-fuelled Unit 4. Although it was assumed that extensive fuel damage, including fuel melting, slumping, and relocation was likely to have occurred in the core of the affected reactors, the status of the fuel, vessel, and drywell was uncertain. To understand the possible evolution of the accident conditions at Fukushima Daiichi, a Markov model of the likely state of one of the reactors was constructed and executed under different assumptions regarding system performance and reliability. The Markov approach was selected for several reasons: It is a probabilistic model that provides flexibility in scenario construction and incorporates time dependence of different model states. It also readily allows for sensitivity and uncertainty analyses of different failure and repair rates of cooling systems. While the analysis was motivated by a need to gain insight on the course of events for the damaged units at Fukushima Daiichi, the work reported here provides a more general analytical basis for studying and evaluating severe accident evolution over extended periods of time. This work was performed at the request of the U.S. Department of Energy to explore 'what-if' scenarios in the immediate aftermath of the accidents.
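
    A minimal discrete-time analogue of such a Markov model is sketched below. The four plant states and the daily transition probabilities are illustrative assumptions, not the states or rates used in the actual analysis:

```python
import numpy as np

# Hypothetical states: 0 = feed-and-bleed cooling, 1 = recirculation cooling,
# 2 = cooling lost (core degrading), 3 = stable cold end-state (absorbing).
P = np.array([
    [0.90, 0.05, 0.05, 0.00],  # feed-and-bleed may fail or be upgraded
    [0.00, 0.94, 0.01, 0.05],  # recirculation may fail or reach cold shutdown
    [0.20, 0.00, 0.80, 0.00],  # repair restores feed-and-bleed cooling
    [0.00, 0.00, 0.00, 1.00],  # absorbing stable end-state
])

def state_distribution(p0, days):
    """Probability distribution over states after `days` one-day transitions."""
    p = np.asarray(p0, dtype=float)
    for _ in range(days):
        p = p @ P
    return p

# Start in feed-and-bleed; probability mass gradually absorbs into the
# stable end-state as the chain evolves.
p90 = state_distribution([1.0, 0.0, 0.0, 0.0], 90)
```

    Sensitivity to failure and repair rates, a main output of the actual study, corresponds here to re-running the recursion with perturbed entries of P.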

  4. Development of hydrogeological modelling approaches for assessment of consequences of hazardous accidents at nuclear power plants

    SciTech Connect

    Rumynin, V.G.; Mironenko, V.A.; Konosavsky, P.K.; Pereverzeva, S.A.

    1994-07-01

    This paper introduces some modeling approaches for predicting the influence of hazardous accidents at nuclear reactors on groundwater quality. Possible pathways for radioactive releases from nuclear power plants were considered to conceptualize boundary conditions for solving the subsurface radionuclide transport problems. Some approaches to incorporating physical and chemical interactions into transport simulators have been developed. The hydrogeological forecasts were based on numerical and semi-analytical scale-dependent models. They have been applied to assess the possible impact of nuclear power plants designed in Russia on groundwater reservoirs.

  5. A simplified model for calculating atmospheric radionuclide transport and early health effects from nuclear reactor accidents

    SciTech Connect

    Madni, I.K.; Cazzoli, E.G.; Khatib-Rahbar, M.

    1995-11-01

    During certain hypothetical severe accidents in a nuclear power plant, radionuclides could be released to the environment as a plume. Prediction of the atmospheric dispersion and transport of these radionuclides is important for assessment of the risk to the public from such accidents. A simplified PC-based model was developed that predicts the time-integrated air concentration of each radionuclide at any location as a function of time-integrated source strength, using the Gaussian plume model. The solution procedure involves direct analytic integration of the air concentration equations over time and position, using simplified meteorology. The formulation allows for dry and wet deposition, radioactive decay and daughter buildup, reactor building wake effects, the inversion lid effect, plume rise due to buoyancy or momentum, release duration, and grass height. Based on air and ground concentrations of the radionuclides, the early dose to an individual is calculated via cloudshine, groundshine, and inhalation. The model also calculates early health effects based on the doses. This paper presents aspects of the model that would be of interest to the prediction of environmental flows and their public consequences.
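
    The core of such a model is the Gaussian plume equation. The sketch below gives the ground-level form with full ground reflection, using rough Briggs-style rural dispersion fits for a single stability class; the coefficients are illustrative, and the decay, deposition, and wake corrections described above are omitted:

```python
import math

def plume_concentration(Q, u, x, y, H):
    """Ground-level air concentration (g/m^3) from a continuous point source.

    Q: emission rate (g/s); u: wind speed (m/s); x: downwind distance (m);
    y: crosswind offset (m); H: effective release height (m).
    """
    # Approximate rural neutral-stability dispersion coefficients (assumed).
    sigma_y = 0.08 * x / math.sqrt(1.0 + 0.0001 * x)
    sigma_z = 0.06 * x / math.sqrt(1.0 + 0.0015 * x)
    return (Q / (math.pi * sigma_y * sigma_z * u)
            * math.exp(-y**2 / (2.0 * sigma_y**2))
            * math.exp(-H**2 / (2.0 * sigma_z**2)))

# Centerline concentration 1 km downwind of an elevated 1 g/s release.
centerline = plume_concentration(Q=1.0, u=5.0, x=1000.0, y=0.0, H=50.0)
```

    Time-integrated concentrations, as used in the dose calculations, follow by integrating this expression over the release duration.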

  6. Uncertainty and sensitivity analysis of food pathway results with the MACCS Reactor Accident Consequence Model

    SciTech Connect

    Helton, J.C.; Johnson, J.D.; Rollstin, J.A.; Shiver, A.W.; Sprung, J.L.

    1995-01-01

    Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis and stepwise regression analysis are used in an investigation with the MACCS model of the food pathways associated with a severe accident at a nuclear power station. The primary purpose of this study is to provide guidance on the variables to be considered in future review work to reduce the uncertainty in the important variables used in the calculation of reactor accident consequences. The effects of 87 imprecisely-known input variables on the following reactor accident consequences are studied: crop growing season dose, crop long-term dose, milk growing season dose, total food pathways dose, total ingestion pathways dose, total long-term pathways dose, area dependent cost, crop disposal cost, milk disposal cost, condemnation area, crop disposal area and milk disposal area. When the predicted variables are considered collectively, the following input variables were found to be the dominant contributors to uncertainty: fraction of cesium deposition on grain fields that is retained on plant surfaces and transferred directly to grain, maximum allowable ground concentrations of Cs-137 and Sr-90 for production of crops, ground concentrations of Cs-134, Cs-137 and I-131 at which the disposal of milk will be initiated due to accidents that occur during the growing season, ground concentrations of Cs-134, I-131 and Sr-90 at which the disposal of crops will be initiated due to accidents that occur during the growing season, rate of depletion of Cs-137 and Sr-90 from the root zone, transfer of Sr-90 from soil to legumes, transfer of Cs-137 from soil to pasture, transfer of cesium from animal feed to meat, and the transfer of cesium, iodine and strontium from animal feed to milk.
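
    Latin hypercube sampling, the sampling scheme behind this analysis, stratifies each input's range into n equal-probability intervals, samples each interval exactly once, and pairs the strata across variables by random permutation. A minimal construction on the unit cube, independent of MACCS itself:

```python
import numpy as np

def latin_hypercube(n_samples, n_vars, seed=0):
    """n_samples x n_vars Latin hypercube design on [0, 1).

    Each column visits every one of the n_samples equal strata exactly
    once, so even small samples cover each input's full range.
    """
    rng = np.random.default_rng(seed)
    # One uniform draw inside each stratum; stratum i is [i/n, (i+1)/n).
    u = (rng.random((n_samples, n_vars)) + np.arange(n_samples)[:, None]) / n_samples
    # Shuffle each column independently to decouple the variables.
    for j in range(n_vars):
        u[:, j] = rng.permutation(u[:, j])
    return u
```

    Samples for the 87 imprecisely known inputs would then be obtained by pushing each column through the corresponding variable's inverse CDF.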

  7. Freeze Prediction Model

    NASA Technical Reports Server (NTRS)

    Morrow, C. T. (Principal Investigator)

    1981-01-01

    Measurements of wind speed, net irradiation, and of air, soil, and dew point temperatures in an orchard at the Rock Springs Agricultural Research Center, as well as topographical and climatological data and a description of the major apple growing regions of Pennsylvania, were supplied to the University of Florida for use in running the P-model freeze prediction program. Results show that the P-model appears to have considerable applicability to conditions in Pennsylvania. Even though modifications may have to be made for use in the fruit growing regions, the model in its present form already offers advantages for fruit growers.

  8. ATMOSPHERIC MODELING IN SUPPORT OF A ROADWAY ACCIDENT

    SciTech Connect

    Buckley, R.; Hunter, C.

    2010-10-21

    The United States Forest Service-Savannah River (USFS) routinely performs prescribed fires at the Savannah River Site (SRS), a Department of Energy (DOE) facility located in southwest South Carolina. This facility covers approximately 800 square kilometers and is mainly wooded except for scattered industrial areas containing facilities used in managing nuclear materials for national defense and waste processing. Prescribed fires of forest undergrowth are necessary to reduce the risk of inadvertent wild fires which have the potential to destroy large areas and threaten nuclear facility operations. This paper discusses meteorological observations and numerical model simulations from a period in early 2002 of an incident involving an early-morning multicar accident caused by poor visibility along a major roadway on the northern border of the SRS. At the time of the accident, it was not clear whether the limited visibility was due solely to fog or whether smoke from a prescribed burn conducted the previous day just to the northwest of the crash site had contributed to the reduced visibility. Through use of available meteorological information and detailed modeling, it was determined that the primary reason for the low visibility on this night was fog induced by meteorological conditions.

  9. Simulation Study of Traffic Accidents in Bidirectional Traffic Models

    NASA Astrophysics Data System (ADS)

    Moussa, Najem

    Conditions for the occurrence of bidirectional collisions are developed based on the Simon-Gutowitz bidirectional traffic model. Three types of dangerous situations can occur in this model. We analyze those corresponding to head-on, rear-end, and lane-changing collisions. Using Monte Carlo simulations, we compute the probability of the occurrence of these collisions for different values of the oncoming cars' density. It is found that the risk of collisions is significant when the density of cars in one lane is small and that of the other lane is high enough. The influence of different proportions of heavy vehicles is also studied. We found that heavy vehicles cause an important reduction of traffic flow on the home lane and increase the risk of car accidents.
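
    The flavor of such Monte Carlo estimates can be reproduced with a single-lane Nagel-Schreckenberg ring, counting rear-end "dangerous situations" (gap smaller than the desired speed while the leader is still moving). This criterion and all parameters are simplified stand-ins for the Simon-Gutowitz two-lane collision conditions, not the paper's model:

```python
import random

def nasch_dangerous_fraction(n_cells=200, density=0.3, vmax=5, p_slow=0.3,
                             steps=500, seed=1):
    """Per-vehicle-step fraction of rear-end dangerous situations in a
    single-lane Nagel-Schreckenberg ring (illustrative danger criterion)."""
    rng = random.Random(seed)
    n_cars = max(1, int(n_cells * density))
    pos = sorted(rng.sample(range(n_cells), n_cars))
    vel = [0] * n_cars
    dangerous = total = 0
    for _ in range(steps):
        gaps = [(pos[(i + 1) % n_cars] - pos[i] - 1) % n_cells
                for i in range(n_cars)]
        # Danger: the gap cannot absorb the desired speed while the leader
        # is moving (and could therefore brake suddenly).
        for i in range(n_cars):
            desired = min(vel[i] + 1, vmax)
            total += 1
            if gaps[i] < desired and vel[(i + 1) % n_cars] > 0:
                dangerous += 1
        # Standard NaSch update: accelerate, brake to gap, randomize, move.
        new_vel = []
        for i in range(n_cars):
            v = min(vel[i] + 1, vmax, gaps[i])
            if v > 0 and rng.random() < p_slow:
                v -= 1
            new_vel.append(v)
        vel = new_vel
        pos = [(pos[i] + vel[i]) % n_cells for i in range(n_cars)]
    return dangerous / total
```

    Sweeping the density argument reproduces the qualitative finding that collision risk concentrates where a traffic stream is dense enough for frequent sudden braking.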

  10. Severe accident source term characteristics for selected Peach Bottom sequences predicted by the MELCOR Code

    SciTech Connect

    Carbajo, J.J.

    1993-09-01

    The purpose of this report is to compare in-containment source terms developed for NUREG-1159, which used the Source Term Code Package (STCP), with those generated by MELCOR to identify significant differences. For this comparison, two short-term depressurized station blackout sequences (with a dry cavity and with a flooded cavity) and a Loss-of-Coolant Accident (LOCA) concurrent with complete loss of the Emergency Core Cooling System (ECCS) were analyzed for the Peach Bottom Atomic Power Station (a BWR-4 with a Mark I containment). The results indicate that for the sequences analyzed, the two codes predict similar total in-containment release fractions for each of the element groups. However, the MELCOR/CORBH Package predicts significantly longer times for vessel failure and reduced energy of the released material for the station blackout sequences (when compared to the STCP results). MELCOR also calculated smaller releases into the environment than STCP for the station blackout sequences.

  11. Health effects model for nuclear power plant accident consequence analysis. Part I. Introduction, integration, and summary. Part II. Scientific basis for health effects models

    SciTech Connect

    Evans, J.S.; Moeller, D.W.; Cooper, D.W.

    1985-07-01

    Analysis of the radiological health effects of nuclear power plant accidents requires models for predicting early health effects, cancers and benign thyroid nodules, and genetic effects. Since the publication of the Reactor Safety Study, additional information on radiological health effects has become available. This report summarizes the efforts of a program designed to provide revised health effects models for nuclear power plant accident consequence modeling. The new models for early effects address four causes of mortality and nine categories of morbidity. The models for early effects are based upon two-parameter Weibull functions. They permit evaluation of the influence of dose protraction and address the issue of variation in radiosensitivity among the population. The piecewise-linear dose-response models used in the Reactor Safety Study to predict cancers and thyroid nodules have been replaced by linear and linear-quadratic models. The new models reflect the most recently reported results of the follow-up of the survivors of the bombings of Hiroshima and Nagasaki and permit analysis of both morbidity and mortality. The new models for genetic effects allow prediction of genetic risks in each of the first five generations after an accident and include information on the relative severity of various classes of genetic effects. The uncertainty in modeling radiological health risks is addressed by providing central, upper, and lower estimates of risks. An approach is outlined for summarizing the health consequences of nuclear power plant accidents. 298 refs., 9 figs., 49 tabs.

  12. Uncertainty and sensitivity analysis of early exposure results with the MACCS Reactor Accident Consequence Model

    SciTech Connect

    Helton, J.C.; Johnson, J.D.; McKay, M.D.; Shiver, A.W.; Sprung, J.L.

    1995-01-01

    Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis and stepwise regression analysis are used in an investigation with the MACCS model of the early health effects associated with a severe accident at a nuclear power station. The primary purpose of this study is to provide guidance on the variables to be considered in future review work to reduce the uncertainty in the important variables used in the calculation of reactor accident consequences. The effects of 34 imprecisely known input variables on the following reactor accident consequences are studied: number of early fatalities, number of cases of prodromal vomiting, population dose within 10 mi of the reactor, population dose within 1000 mi of the reactor, individual early fatality probability within 1 mi of the reactor, and maximum early fatality distance. When the predicted variables are considered collectively, the following input variables were found to be the dominant contributors to uncertainty: scaling factor for horizontal dispersion, dry deposition velocity, inhalation protection factor for nonevacuees, groundshine shielding factor for nonevacuees, early fatality hazard function alpha value for bone marrow exposure, and scaling factor for vertical dispersion.
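
The workflow this record describes — sample imprecisely known inputs with a Latin hypercube, run the consequence model, and rank inputs by regression-based sensitivity measures — can be sketched as follows. The four-input "consequence model" is a stand-in toy function (not MACCS), and standardized regression coefficients are used here in place of the paper's partial correlation and stepwise analyses.

```python
import numpy as np

def latin_hypercube(n_samples, n_vars, rng):
    """Stratified uniform(0,1) design: one point per equal-probability bin
    in each dimension, with bins paired at random across dimensions."""
    u = (rng.random((n_samples, n_vars)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_vars):
        u[:, j] = rng.permutation(u[:, j])
    return u

rng = np.random.default_rng(0)
n, k = 200, 4
X = latin_hypercube(n, k, rng)

# Toy stand-in for a consequence-model run: the output depends strongly on
# inputs 0 and 2, weakly on input 1, and not at all on input 3.
y = 5 * X[:, 0] + 0.5 * X[:, 1] + 3 * X[:, 2] ** 2 + rng.normal(0, 0.1, n)

# Rank inputs by standardized regression coefficients (|SRC|)
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
ranking = np.argsort(-np.abs(src))
print("inputs ranked by |SRC|:", ranking.tolist())
```

With only 200 model runs the two dominant inputs are recovered cleanly, which is the point of the LHS design: broad coverage of a high-dimensional input space at modest computational cost.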

  13. Model predicts global warming

    NASA Astrophysics Data System (ADS)

    Wainger, Lisa A.

    Global greenhouse warming will be clearly identifiable by the 1990s, according to eight scientists who have been studying climate changes using computer models. Researchers at NASA's Goddard Space Flight Center, Goddard Institute for Space Studies, New York, and the Massachusetts Institute of Technology, Cambridge, say that by the 2010s, most of the globe will be experiencing “substantial” warming. The level of warming will depend on amounts of trace gases, or greenhouse gases, in the atmosphere. Predictions for the next 70 years are based on computer simulations of Earth's climate. In three runs of the model, James Hansen and his colleagues looked at the effects of changing amounts of atmospheric gases with time.

  14. Uncertainty and sensitivity analysis of chronic exposure results with the MACCS reactor accident consequence model

    SciTech Connect

    Helton, J.C.; Johnson, J.D.; Rollstin, J.A.; Shiver, A.W.; Sprung, J.L.

    1995-01-01

    Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis and stepwise regression analysis are used in an investigation with the MACCS model of the chronic exposure pathways associated with a severe accident at a nuclear power station. The primary purpose of this study is to provide guidance on the variables to be considered in future review work to reduce the uncertainty in the important variables used in the calculation of reactor accident consequences. The effects of 75 imprecisely known input variables on the following reactor accident consequences are studied: crop growing season dose, crop long-term dose, water ingestion dose, milk growing season dose, long-term groundshine dose, long-term inhalation dose, total food pathways dose, total ingestion pathways dose, total long-term pathways dose, total latent cancer fatalities, area-dependent cost, crop disposal cost, milk disposal cost, population-dependent cost, total economic cost, condemnation area, condemnation population, crop disposal area and milk disposal area. When the predicted variables are considered collectively, the following input variables were found to be the dominant contributors to uncertainty: dry deposition velocity, transfer of cesium from animal feed to milk, transfer of cesium from animal feed to meat, ground concentration of Cs-134 at which the disposal of milk products will be initiated, transfer of Sr-90 from soil to legumes, maximum allowable ground concentration of Sr-90 for production of crops, fraction of cesium entering surface water that is consumed in drinking water, groundshine shielding factor, scale factor defining resuspension, dose reduction associated with decontamination, and ground concentration of I-131 at which disposal of crops will be initiated due to accidents that occur during the growing season.

  15. Mathematical model to predict drivers' reaction speeds.

    PubMed

    Long, Benjamin L; Gillespie, A Isabella; Tanaka, Martin L

    2012-02-01

    Mental distractions and physical impairments can increase the risk of accidents by affecting a driver's ability to control the vehicle. In this article, we developed a linear mathematical model that can be used to quantitatively predict drivers' performance over a variety of possible driving conditions. Predictions were not limited only to conditions tested, but also included linear combinations of these test conditions. Two groups of 12 participants were evaluated using a custom drivers' reaction speed testing device to evaluate the effect of cell phone talking, texting, and a fixed knee brace on the components of drivers' reaction speed. Cognitive reaction time was found to increase by 24% for cell phone talking and 74% for texting. The fixed knee brace increased musculoskeletal reaction time by 24%. These experimental data were used to develop a mathematical model to predict reaction speed for an untested condition, talking on a cell phone with a fixed knee brace. The model was verified by comparing the predicted reaction speed to measured experimental values from an independent test. The model predicted full braking time within 3% of the measured value. Although only a few influential conditions were evaluated, we present a general approach that can be expanded to include other types of distractions, impairments, and environmental conditions. PMID:22431214
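
The additive structure the authors describe — component reaction times inflated by per-condition penalties and then combined linearly to predict an untested combination — can be illustrated as follows. The percentage penalties come from the abstract; the baseline component times and the exact functional form are assumptions for illustration only.

```python
# Baseline cognitive and musculoskeletal reaction times in seconds
# (illustrative values, not the paper's measurements).
base_cognitive = 0.50
base_musculo = 0.30

# Penalties reported in the abstract: +24% cognitive for phone talking,
# +74% cognitive for texting, +24% musculoskeletal for a fixed knee brace.
PENALTY = {
    "talking": ("cognitive", 0.24),
    "texting": ("cognitive", 0.74),
    "knee_brace": ("musculoskeletal", 0.24),
}

def predict_reaction_time(conditions):
    """Linear-combination prediction for an untested mix of conditions:
    each impairment inflates its own component; the components then sum."""
    cog, mus = base_cognitive, base_musculo
    for c in conditions:
        component, pct = PENALTY[c]
        if component == "cognitive":
            cog += base_cognitive * pct
        else:
            mus += base_musculo * pct
    return cog + mus

print(f"baseline:             {predict_reaction_time([]):.3f} s")
print(f"talking + knee brace: {predict_reaction_time(['talking', 'knee_brace']):.3f} s")
```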

  17. VICTORIA: A mechanistic model of radionuclide behavior in the reactor coolant system under severe accident conditions

    SciTech Connect

    Heames, T.J. ); Williams, D.A.; Johns, N.A.; Chown, N.M. ); Bixler, N.E.; Grimley, A.J. ); Wheatley, C.J. )

    1990-10-01

    This document provides a description of a model of the radionuclide behavior in the reactor coolant system (RCS) of a light water reactor during a severe accident. This document serves as the user's manual for the computer code called VICTORIA, based upon the model. The VICTORIA code predicts fission product release from the fuel, chemical reactions between fission products and structural materials, vapor and aerosol behavior, and fission product decay heating. This document provides a detailed description of each part of the implementation of the model into VICTORIA, the numerical algorithms used, and the correlations and thermochemical data necessary for determining a solution. A description of the code structure, input and output, and a sample problem are provided. The VICTORIA code was developed on a CRAY X-MP at Sandia National Laboratories in the USA and a CRAY-2 and various SUN workstations at the Winfrith Technology Centre in England. 60 refs.

  18. Probabilistic microcell prediction model

    NASA Astrophysics Data System (ADS)

    Kim, Song-Kyoo

    2002-06-01

    A microcell is a cell with a radius of 1 km or less, suitable for heavily urbanized areas such as metropolitan cities. This paper deals with a microcell prediction model of propagation loss that uses probabilistic techniques. The RSL (Received Signal Level) is the factor by which the performance of a microcell can be evaluated, and the LOS (Line-Of-Sight) component and the blockage loss directly affect the RSL. We combine probabilistic methods to obtain these performance factors. The mathematical methods include the CLT (Central Limit Theorem) and SPC (Statistical Process Control) to obtain the parameters of the distribution. This probabilistic solution yields a better measure of the performance factors. In addition, it enables probabilistic optimization of strategies such as the number of cells, cell location, cell capacity, cell range, and so on. In particular, the probabilistic optimization techniques themselves can be applied to real-world problems such as computer networking, human resources, and manufacturing processes.
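
One way to read the CLT argument: if the total path loss is a sum of several independent contributions, the RSL distribution is approximately normal, and coverage probabilities follow from its mean and variance alone. The sketch below uses invented loss components purely to illustrate that reasoning; it is not the paper's propagation model.

```python
import math
import random

random.seed(42)
n_links = 10_000
tx_power_dbm = 30.0

# Each link's loss is a sum of independent components, so by the CLT the
# received signal level (RSL) is approximately normally distributed.
rsl = []
for _ in range(n_links):
    los_loss = random.uniform(80, 100)                      # LOS path loss, dB
    blockage = sum(random.uniform(0, 3) for _ in range(8))  # 8 obstruction segments, dB
    rsl.append(tx_power_dbm - los_loss - blockage)

mean = sum(rsl) / n_links
sd = math.sqrt(sum((x - mean) ** 2 for x in rsl) / (n_links - 1))

# Normal approximation to the coverage probability P(RSL >= -85 dBm)
z = (-85.0 - mean) / sd
coverage = 0.5 * math.erfc(z / math.sqrt(2))
empirical = sum(x >= -85.0 for x in rsl) / n_links
print(f"normal-approximation coverage: {coverage:.3f}, empirical: {empirical:.3f}")
```

The two estimates agree closely, which is what licenses replacing the full distribution with just its first two moments when optimizing cell count or placement.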

  19. VICTORIA: A mechanistic model of radionuclide behavior in the reactor coolant system under severe accident conditions. Revision 1

    SciTech Connect

    Heams, T J; Williams, D A; Johns, N A; Mason, A; Bixler, N E; Grimley, A J; Wheatley, C J; Dickson, L W; Osborn-Lee, I; Domagala, P; Zawadzki, S; Rest, J; Alexander, C A; Lee, R Y

    1992-12-01

    The VICTORIA model of radionuclide behavior in the reactor coolant system (RCS) of a light water reactor during a severe accident is described. It has been developed by the USNRC to define the radionuclide phenomena and processes that must be considered in systems-level models used for integrated analyses of severe accident source terms. The VICTORIA code, based upon this model, predicts fission product release from the fuel, chemical reactions involving fission products, vapor and aerosol behavior, and fission product decay heating. Also included is a detailed description of how the model is implemented in VICTORIA, the numerical algorithms used, and the correlations and thermochemical data necessary for determining a solution. A description of the code structure, input and output, and a sample problem are provided.

  20. Prediction of the Possibility a Right-Turn Driving Behavior at Intersection Leads to an Accident by Detecting Deviation of the Situation from Usual when the Behavior is Observed

    NASA Astrophysics Data System (ADS)

    Hayashi, Toshinori; Yamada, Keiichi

    Deviation of driving behavior from usual could be a sign of human error that increases the risk of traffic accidents. This paper proposes a novel method for predicting the possibility that a driving behavior will lead to an accident from information on the driving behavior and the situation. In a previous work, a method was proposed that predicts this possibility by detecting the deviation of the driving behavior from the usual behavior in that situation. In contrast, the method proposed in this paper predicts the possibility by detecting the deviation of the situation from the usual situation when the behavior is observed. An advantage of the proposed method is that the number of required models is independent of the variety of situations. The method was applied to the problem of predicting accidents caused by right-turn driving behavior at an intersection, and its performance was evaluated by experiments on a driving simulator.

  1. RADIS - a regional nuclear accident consequence analysis model for Hong Kong

    SciTech Connect

    Yeung, Mankit Ray; Ching, E.M.K. )

    1993-02-01

    An atmospheric dispersion and consequence model called RADIS has been developed by the University of Hong Kong for nuclear accident consequence analysis. The model uses a two-dimensional plume trajectory derived from wind data for Hong Kong. Dose, health effects, and demographic models are also developed and implemented in RADIS so that accident consequences in 15 major population centers of Greater Hong Kong can be determined individually. In addition, benchmark testing results are given, and comparisons with the analytical solution and CRAC2 results are consistent and satisfactory. Sample calculational results for severe accident consequences are also presented to demonstrate the applicability of RADIS for dry and wet weather conditions.

  2. Estimation Of 137Cs Using Atmospheric Dispersion Models After A Nuclear Reactor Accident

    NASA Astrophysics Data System (ADS)

    Simsek, V.; Kindap, T.; Unal, A.; Pozzoli, L.; Karaca, M.

    2012-04-01

    Nuclear energy will continue to have an important role in the production of electricity in the world as the need for energy grows. But the safety of power plants will always be a question mark for people because of the accidents that have happened in the past. The Chernobyl nuclear reactor accident, which happened on 26 April 1986, was the biggest nuclear accident ever. Because of the explosion and fire, large quantities of radioactive material were released to the atmosphere. The radioactive particles released by the accident affected not only the surrounding region but the entire Northern Hemisphere, although much of the radioactive material was spread over the western USSR and Europe. There are many studies of the distribution of radioactive particles and the deposition of radionuclides all over Europe, but this is not true for Turkey, especially for the deposition of radionuclides released after the Chernobyl accident and the radiation doses received by people. The aim of this study is to determine the radiation doses received by people living in Turkish territory after the Chernobyl nuclear reactor accident and to use this method in case of an emergency. For this purpose, the Weather Research and Forecasting (WRF) Model was used to simulate meteorological conditions after the accident. The WRF results for the 12 days after the accident were used as input data for the HYSPLIT model, the dispersion model of NOAA-ARL (National Oceanic and Atmospheric Administration Air Resources Laboratory), which was used to simulate the 137Cs distribution. The deposition values of 137Cs in our domain after the accident were between 1.2E-37 Bq/m2 and 3.5E+08 Bq/m2. The results showed that Turkey was affected by the accident, especially the Black Sea Region. Doses were then calculated using GENII-LIN, a multipurpose health physics code.

  3. Multilevel Model Prediction

    ERIC Educational Resources Information Center

    Frees, Edward W.; Kim, Jee-Seon

    2006-01-01

    Multilevel models are proven tools in social research for modeling complex, hierarchical systems. In multilevel modeling, statistical inference is based largely on quantification of random variables. This paper distinguishes among three types of random variables in multilevel modeling--model disturbances, random coefficients, and future response…

  4. Safety analysis report for the Galileo Mission. Volume 2, book 1: Accident model document

    NASA Astrophysics Data System (ADS)

    1988-12-01

    The Accident Model Document (AMD) is the second volume of the three volume Final Safety Analysis Report (FSAR) for the Galileo outer planetary space science mission. This mission employs Radioisotope Thermoelectric Generators (RTGs) as the prime electrical power sources for the spacecraft. Galileo will be launched into Earth orbit using the Space Shuttle and will use the Inertial Upper Stage (IUS) booster to place the spacecraft into an Earth escape trajectory. The RTGs employ silicon-germanium thermoelectric couples to produce electricity from the heat energy that results from the decay of the radioisotope fuel, Plutonium-238, used in the RTG heat source. The heat source configuration used in the RTGs is termed General Purpose Heat Source (GPHS), and the RTGs are designated GPHS-RTGs. The use of radioactive material in these missions necessitates evaluations of the radiological risks that may be encountered by launch complex personnel as well as by the Earth's general population resulting from postulated malfunctions or failures occurring in the mission operations. The FSAR presents the results of a rigorous safety assessment, including substantial analyses and testing, of the launch and deployment of the RTGs for the Galileo mission. This AMD is a summary of the potential accident and failure sequences which might result in fuel release, the analysis and testing methods employed, and the predicted source terms. Each source term consists of a quantity of fuel released, the location of release and the physical characteristics of the fuel released. Each source term has an associated probability of occurrence.

  5. A statistical model for predicting muscle performance

    NASA Astrophysics Data System (ADS)

    Byerly, Diane Leslie De Caix

    The objective of these studies was to develop a capability for predicting muscle performance and fatigue to be utilized for both space- and ground-based applications. To develop this predictive model, healthy test subjects performed a defined, repetitive dynamic exercise to failure using a Lordex spinal machine. Throughout the exercise, surface electromyography (SEMG) data were collected from the erector spinae using a Mega Electronics ME3000 muscle tester and surface electrodes placed on both sides of the back muscle. These data were analyzed using a 5th order Autoregressive (AR) model and statistical regression analysis. It was determined that an AR derived parameter, the mean average magnitude of AR poles, significantly correlated with the maximum number of repetitions (designated Rmax) that a test subject was able to perform. Using the mean average magnitude of AR poles, a test subject's performance to failure could be predicted as early as the sixth repetition of the exercise. This predictive model has the potential to provide a basis for improving post-space flight recovery, monitoring muscle atrophy in astronauts and assessing the effectiveness of countermeasures, monitoring astronaut performance and fatigue during Extravehicular Activity (EVA) operations, providing pre-flight assessment of the ability of an EVA crewmember to perform a given task, improving the design of training protocols and simulations for strenuous International Space Station assembly EVA, and enabling EVA work task sequences to be planned enhancing astronaut performance and safety. Potential ground-based, medical applications of the predictive model include monitoring muscle deterioration and performance resulting from illness, establishing safety guidelines in the industry for repetitive tasks, monitoring the stages of rehabilitation for muscle-related injuries sustained in sports and accidents, and enhancing athletic performance through improved training protocols while reducing
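
The AR-based feature the study relies on — the mean magnitude of the poles of a 5th-order autoregressive fit to the SEMG signal — can be computed as in the sketch below. The least-squares AR fit and the surrogate signal are simplifying assumptions; the study's acquisition hardware and preprocessing are not reproduced.

```python
import numpy as np

def ar_pole_magnitudes(x, order=5):
    """Fit an AR(order) model by least squares (Yule-Walker would also work)
    and return the magnitudes of its characteristic roots ('poles')."""
    x = np.asarray(x, float)
    x = x - x.mean()
    # design matrix of lagged values: x[t] ~ a1*x[t-1] + ... + ap*x[t-p]
    X = np.column_stack([x[order - k - 1 : len(x) - k - 1] for k in range(order)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    # roots of z^p - a1*z^(p-1) - ... - ap
    poles = np.roots(np.concatenate(([1.0], -a)))
    return np.abs(poles)

rng = np.random.default_rng(3)
# Surrogate SEMG-like signal: a damped oscillation plus noise (illustrative only)
t = np.arange(500)
sig = np.sin(0.3 * t) * np.exp(-t / 400) + 0.2 * rng.standard_normal(500)
mags = ar_pole_magnitudes(sig)
print("mean AR pole magnitude:", round(float(mags.mean()), 3))
```

Tracking this scalar repetition by repetition, as the study does, reduces each SEMG burst to one number whose trend can be regressed against repetitions-to-failure.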

  6. Inter-comparison of dynamic models for radionuclide transfer to marine biota in a Fukushima accident scenario.

    PubMed

    Vives I Batlle, J; Beresford, N A; Beaugelin-Seiller, K; Bezhenar, R; Brown, J; Cheng, J-J; Ćujić, M; Dragović, S; Duffa, C; Fiévet, B; Hosseini, A; Jung, K T; Kamboj, S; Keum, D-K; Kryshev, A; LePoire, D; Maderich, V; Min, B-I; Periáñez, R; Sazykina, T; Suh, K-S; Yu, C; Wang, C; Heling, R

    2016-03-01

    We report an inter-comparison of eight models designed to predict the radiological exposure of radionuclides in marine biota. The models were required to simulate dynamically the uptake and turnover of radionuclides by marine organisms. Model predictions of radionuclide uptake and turnover using kinetic calculations based on biological half-life (TB1/2) and/or more complex metabolic modelling approaches were used to predict activity concentrations and, consequently, dose rates of (90)Sr, (131)I and (137)Cs to fish, crustaceans, macroalgae and molluscs under circumstances where the water concentrations are changing with time. For comparison, the ERICA Tool, a model commonly used in environmental assessment, and which uses equilibrium concentration ratios, was also used. As input to the models we used hydrodynamic forecasts of water and sediment activity concentrations using a simulated scenario reflecting the Fukushima accident releases. Although model variability is important, the intercomparison gives logical results, in that the dynamic models predict consistently a pattern of delayed rise of activity concentration in biota and slow decline instead of the instantaneous equilibrium with the activity concentration in seawater predicted by the ERICA Tool. The differences between ERICA and the dynamic models increase the shorter the TB1/2 becomes; however, there is significant variability between models, underpinned by parameter and methodological differences between them. The need to validate the dynamic models used in this intercomparison has been highlighted, particularly with regard to optimisation of the model biokinetic parameters. PMID:26717350
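
The kinetic alternative to equilibrium concentration ratios that the abstract contrasts can be sketched as a one-compartment biokinetic model. The rate-constant form, the release scenario, and all parameter values below are illustrative assumptions, not those of any participating model.

```python
import math

def dynamic_concentration(times, water, t_half_days, cr_eq):
    """One-compartment biokinetic model: dC/dt = k1*Cw - k2*C, with
    k2 = ln(2)/TB1/2 and k1 = cr_eq*k2, so that a constant water
    concentration recovers the equilibrium ratio cr_eq. Assumes the
    'times' grid is uniformly spaced (explicit Euler integration)."""
    k2 = math.log(2) / t_half_days
    k1 = cr_eq * k2
    dt = times[1] - times[0]
    c, out = 0.0, []
    for cw in water:
        c += (k1 * cw - k2 * c) * dt
        out.append(c)
    return out

# Pulse-release scenario: seawater activity spikes, then decays away
days = [0.5 * i for i in range(400)]
water = [10.0 * math.exp(-t / 20.0) for t in days]  # Bq/L
fish = dynamic_concentration(days, water, t_half_days=50.0, cr_eq=100.0)

peak_day = days[fish.index(max(fish))]
print(f"water peaks at day 0; biota activity peaks near day {peak_day:.1f}")
```

The delayed peak and slow decline in the biota trace are exactly the qualitative behavior the dynamic models predict and an equilibrium concentration-ratio approach cannot, since the latter simply rescales the water curve.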

  8. Cellular automata model simulating traffic car accidents in the on-ramp system

    NASA Astrophysics Data System (ADS)

    Echab, H.; Lakouari, N.; Ez-Zahraouy, H.; Benyoussef, A.

    2015-01-01

    In this paper, using the Nagel-Schreckenberg model, we study the on-ramp system under an expanded open boundary condition. The phase diagram of the two-lane on-ramp system is computed. It is found that the expanded left-boundary insertion strategy enhances the flow in the on-ramp lane. Furthermore, we have studied the probability of the occurrence of car accidents. We distinguish two types of car accidents: the accident at the on-ramp site (Prc) and the rear-end accident in the main road (Pac). It is shown that car accidents at the on-ramp site are more likely to occur when traffic is free on road A. However, the rear-end accidents begin to occur above a critical injecting rate αc1. The influence of the on-ramp length (LB) and position (xC0) on the car accident probabilities is studied. We found that large LB or xC0 causes a significant decrease in the probability Prc, whereas only large xC0 increases the probability Pac. The effect of stochastic randomization is also computed.

  9. Applying the random effect negative binomial model to examine traffic accident occurrence at signalized intersections.

    PubMed

    Chin, Hoong Chor; Quddus, Mohammed Abdul

    2003-03-01

    Poisson and negative binomial (NB) models have been used to analyze traffic accident occurrence at intersections for several years. There are, however, limitations in the use of such models. The Poisson model requires the variance-to-mean ratio of the accident data to be about 1. Both the Poisson and the NB models require the accident data to be uncorrelated in time. Due to unobserved heterogeneity and serial correlation in the accident data, both models seem inappropriate. A more suitable alternative is the random effect negative binomial (RENB) model, which, by treating the data as a time-series cross-section panel, is able to deal with the spatial and temporal effects in the data. This paper describes the use of the RENB model to identify the elements that affect intersection safety. To establish the suitability of the model, several goodness-of-fit statistics are used. The model is then applied to investigate the relationship between accident occurrence and the geometric, traffic and control characteristics of signalized intersections in Singapore. The results showed that 11 variables significantly affected safety at the intersections. The total approach volume, the number of phases per cycle, the uncontrolled left-turn lane and the presence of a surveillance camera are among the most highly significant variables. PMID:12504146
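
The over-dispersion that motivates the NB family can be demonstrated directly: mixing a Poisson mean with gamma-distributed site-level heterogeneity yields a negative binomial marginal whose variance exceeds its mean. The simulation below is an illustrative sketch of that mechanism; the RENB's panel and serial-correlation structure is not modeled here.

```python
import numpy as np

rng = np.random.default_rng(7)

# Each intersection gets a gamma-distributed multiplier on its Poisson mean;
# marginally this Poisson-gamma mixture is exactly negative binomial.
n_sites = 5000
base_mean = 2.0
site_effect = rng.gamma(shape=2.0, scale=0.5, size=n_sites)  # mean 1, var 0.5
crashes = rng.poisson(base_mean * site_effect)

ratio = crashes.var() / crashes.mean()
# NB theory: Var = mu + mu^2/shape = 2 + 4/2 = 4, so the ratio should be ~2
print(f"variance-to-mean ratio: {ratio:.2f} (a pure Poisson would give ~1)")
```

A ratio well above 1 in observed crash counts is the standard diagnostic for preferring NB-type models over plain Poisson regression.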

  10. A Statistical Approach to Predict the Failure Enthalpy and Reliability of Irradiated PWR Fuel Rods During Reactivity-Initiated Accidents

    SciTech Connect

    Nam, Cheol; Jeong, Yong-Hwan; Jung, Youn-Ho

    2001-11-15

    During the last decade, the failure behavior of high-burnup fuel rods under a reactivity-initiated accident (RIA) condition has been a serious concern since fuel rod failures at low enthalpy have been observed. This has resulted in the reassessment of existing licensing criteria and in failure-mode studies. To address the issue, a statistics-based methodology is suggested to predict the failure probability of irradiated fuel rods under an RIA. Based on RIA simulation results in the literature, a failure enthalpy correlation for an irradiated fuel rod is constructed as a function of oxide thickness, fuel burnup, and pulse width. Using the failure enthalpy correlation, a new concept of ''equivalent enthalpy'' is introduced to reflect the effects of the three primary factors as well as peak fuel enthalpy in a single damage parameter. Moreover, the failure distribution function with equivalent enthalpy is derived, applying a two-parameter Weibull statistical model. Finally, a sensitivity analysis is carried out to estimate the effects of burnup, corrosion, peak fuel enthalpy, pulse width, and the cladding materials used.
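
The abstract's machinery — an equivalent-enthalpy damage parameter fed into a two-parameter Weibull failure distribution — can be sketched as below. Both the equivalent-enthalpy correlation and the Weibull parameters (eta, beta) are hypothetical placeholders, not the paper's fitted values.

```python
import math

def equivalent_enthalpy(peak_cal_g, oxide_um, burnup_gwd_t, pulse_ms):
    """Hypothetical equivalent-enthalpy correlation: peak fuel enthalpy is
    penalized for oxide thickness and burnup, and credited for wider pulses.
    Coefficients are illustrative only."""
    return peak_cal_g * (1 + 0.004 * oxide_um + 0.002 * burnup_gwd_t) / (1 + 0.01 * pulse_ms)

def failure_probability(h, eta, beta):
    """Two-parameter Weibull failure distribution: F(H) = 1 - exp(-(H/eta)^beta)."""
    return 1.0 - math.exp(-((h / eta) ** beta))

h_eq = equivalent_enthalpy(peak_cal_g=120, oxide_um=60, burnup_gwd_t=45, pulse_ms=30)
p_fail = failure_probability(h_eq, eta=150.0, beta=4.0)
print(f"equivalent enthalpy: {h_eq:.1f} cal/g, failure probability: {p_fail:.3f}")
```

Collapsing the four driving factors into one scalar is what lets a single Weibull curve serve as the failure criterion across different rod histories.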

  11. Melanoma Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing melanoma over a defined period of time will help clinicians identify individuals at higher risk, allowing for earlier or more frequent screening and counseling on behavioral changes to decrease risk.

  12. Planetary atmosphere modeling and predictions

    NASA Technical Reports Server (NTRS)

    Berman, A. L.

    1977-01-01

    The capability to generate spacecraft frequency predictions which include the refractive bending effects induced during signal passage through a planetary atmosphere is a pivotal element of the DSN Radio Science System. This article describes the current implementation effort to develop planetary atmosphere modeling and prediction capability.

  13. Proton Fluence Prediction Models

    NASA Technical Reports Server (NTRS)

    Feynman, Joan

    1996-01-01

    Many spacecraft anomalies are caused by positively charged high energy particles impinging on the vehicle and its component parts. Here we review the current knowledge of the interplanetary particle environment in the energy ranges that are most important for these effects, 10 to 100 MeV/amu. The emphasis is on the particle environment at 1 AU. State-of-the-art engineering models are briefly described along with comments on the future work required in this field.

  14. Predictive Models and Computational Embryology

    EPA Science Inventory

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  15. Development of comprehensive accident models for two-lane rural highways using exposure, geometry, consistency and context variables.

    PubMed

    Cafiso, Salvatore; Di Graziano, Alessandro; Di Silvestro, Giacomo; La Cava, Grazia; Persaud, Bhagwant

    2010-07-01

    In Europe, approximately 60% of road accident fatalities occur on two-lane rural roads. Thus, research to develop and enhance explanatory and predictive models for this road type continues to be of interest in mitigating these accidents. To this end, this paper describes a novel and extensive data collection and modeling effort to define accident models for two-lane road sections based on a unique combination of exposure, geometry, consistency and context variables directly related to safety performance. The first part of the paper documents how these variables were identified for the segmentation of highways into homogeneous sections. Next is a description of the extensive data collection effort, which utilized differential kinematic GPS surveys to define the horizontal alignment variables and road safety inspections (RSIs) to quantify the other road characteristics related to safety. The final part of the paper focuses on the calibration of models for estimating the expected number of accidents on homogeneous sections that can be characterized by constant values of the explanatory variables. Several candidate models were considered for calibration using the Generalized Linear Modeling (GLM) approach. After considering the statistical significance of the parameters related to exposure, geometry, consistency and context factors, along with goodness-of-fit statistics, 19 models were ranked and three were selected as the recommended models. The first of the three is a base model, with length and traffic as the only predictor variables; since these variables are the only ones likely to be available network-wide, this base model can be used in an empirical Bayesian calculation to conduct network screening for ranking "sites with promise" of safety improvement. The other two models represent the best statistical fits with different combinations of significant variables related to exposure, geometry, consistency and context factors. These multiple variable models can be used, with
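
    The network-screening step described above can be sketched as follows: a base safety performance function predicts expected crashes from length and traffic, and an empirical Bayes estimate blends that prediction with the observed count. The coefficients and the overdispersion parameter below are hypothetical, not the paper's calibrated values.

```python
import math

def spf_expected_crashes(length_km, aadt, a=-6.0, b1=1.0, b2=0.8):
    """Base safety performance function: mu = exp(a) * L^b1 * AADT^b2.
    Coefficients a, b1, b2 are hypothetical, not the paper's estimates."""
    return math.exp(a) * (length_km ** b1) * (aadt ** b2)

def empirical_bayes_estimate(mu, observed, overdispersion_k):
    """EB estimate blends the model prediction mu with the observed count.
    The weight w approaches 1 as overdispersion vanishes, so sites with a
    reliable model prediction lean on the model rather than the raw count."""
    w = 1.0 / (1.0 + mu / overdispersion_k)
    return w * mu + (1.0 - w) * observed

# Hypothetical 2.5 km section with AADT 4000 and 6 observed crashes:
mu = spf_expected_crashes(2.5, 4000)
eb = empirical_bayes_estimate(mu, observed=6, overdispersion_k=2.0)
```

    Ranking sections by the gap between the EB estimate and the base prediction is one common way to flag "sites with promise".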

  17. Effects of a type of quenched randomness on car accidents in a cellular automaton model

    NASA Astrophysics Data System (ADS)

    Yang, Xian-Qing; Zhang, Wei; Qiu, Kang; Zhao, Yue-Min

    2006-01-01

    In this paper we numerically study the probability Pac of the occurrence of car accidents in the Nagel-Schreckenberg (NS) model with a defect. In the deterministic NS model, numerical results show that there exists a critical car density below which no car accident happens. The critical density ρc1 depends not only on the maximum speed of cars but also on the braking probability at the defect. When its value is small, the braking probability at the defect enhances, rather than suppresses, the occurrence of car accidents; only when the braking probability at the defect is very large can car accidents be reduced by the bottleneck. In the nondeterministic NS model, the probability Pac exhibits the same behavior as in the deterministic model, except in the case of vmax=1, for which the probability Pac is only reduced by the defect. The defect also induces an inhomogeneous distribution of car accidents over the whole road. Theoretical analyses agree with the numerical results in the deterministic NS model and in the nondeterministic NS model with vmax=1 in the case of large defect braking probability.
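
    A minimal sketch of the dynamics studied above: the standard NS parallel update on a ring, with a single defect site where cars brake with a different probability, and a count of "dangerous situations" using the common criterion (gap at most vmax while the leading car, moving now, stops in the next step, following Boccara et al.). All parameter values are illustrative, not those of the paper.

```python
import random

def ns_step(pos, vel, n_sites, vmax, p, defect_site, p_defect):
    """One parallel update of the Nagel-Schreckenberg model on a ring.
    Cars at the defect site brake with probability p_defect instead of p."""
    n = len(pos)
    new_vel = []
    for i in range(n):
        gap = (pos[(i + 1) % n] - pos[i] - 1) % n_sites
        v = min(vel[i] + 1, vmax)        # acceleration
        v = min(v, gap)                  # slow down to avoid collision
        brake_p = p_defect if pos[i] == defect_site else p
        if v > 0 and random.random() < brake_p:
            v -= 1                       # random braking
        new_vel.append(v)
    new_pos = [(pos[i] + new_vel[i]) % n_sites for i in range(n)]
    return new_pos, new_vel

def dangerous_fraction(n_sites=100, n_cars=30, vmax=5, p=0.0,
                       defect_site=0, p_defect=0.5, steps=500, seed=1):
    """Fraction of (car, time) pairs in a dangerous situation: gap <= vmax
    and the leading car, moving at time t, stops at time t+1."""
    random.seed(seed)
    pos = sorted(random.sample(range(n_sites), n_cars))
    vel = [0] * n_cars
    dangerous, total = 0, 0
    for _ in range(steps):
        new_pos, new_vel = ns_step(pos, vel, n_sites, vmax, p,
                                   defect_site, p_defect)
        for i in range(n_cars):
            lead = (i + 1) % n_cars
            gap = (pos[lead] - pos[i] - 1) % n_sites
            if gap <= vmax and vel[lead] > 0 and new_vel[lead] == 0:
                dangerous += 1
            total += 1
        pos, vel = new_pos, new_vel
    return dangerous / total
```

    Because velocities never exceed the gap ahead, cars cannot pass or collide in the update itself; the accident probability is a statement about dangerous configurations, not actual crashes.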

  18. Wind-field modeling of nuclear accident consequences for Hong Kong

    SciTech Connect

    Yeung, M.R.; Lui, W.S.

    1997-06-01

    An efficient weighted interpolation technique is used to generate a time series of wind fields from the measurements of seven strategically located weather stations in Greater Hong Kong. This wind-field model, HKWIND, is integrated with the atmospheric dispersion/consequence model RADIS to form a complete off-site nuclear accident analysis package. A study comparing the calculated accident consequences with and without the wind-field model is also performed. Including the wind-field model has a drastic effect on the puff trajectory and consequently increases the predicted frequencies of early fatalities, early injuries, and latent cancers for Hong Kong.

  19. Predictive models of battle dynamics

    NASA Astrophysics Data System (ADS)

    Jelinek, Jan

    2001-09-01

    The application of control and game theories to improve battle planning and execution requires models, which allow military strategists and commanders to reliably predict the expected outcomes of various alternatives over a long horizon into the future. We have developed probabilistic battle dynamics models, whose building blocks in the form of Markov chains are derived from the first principles, and applied them successfully in the design of the Model Predictive Task Commander package. This paper introduces basic concepts of our modeling approach and explains the probability distributions needed to compute the transition probabilities of the Markov chains.

  20. A crash-prediction model for road tunnels.

    PubMed

    Caliendo, Ciro; De Guglielmo, Maria Luisa; Guida, Maurizio

    2013-06-01

    Considerable research has been carried out on open roads to establish relationships between crashes and traffic flow, geometry of infrastructure and environmental factors, whereas crash-prediction models for road tunnels have rarely been investigated. In addition, different results have sometimes been obtained regarding the effects of traffic and geometry on crashes in road tunnels. However, most research has focused on tunnels where traffic and geometric conditions, as well as driving behaviour, differ from those in Italy. Thus, in this paper crash-prediction models not previously proposed for Italian road tunnels have been developed. For this purpose, a 4-year monitoring period extending from 2006 to 2009 was considered. The tunnels investigated are single-tube ones with unidirectional traffic. The Bivariate Negative Binomial regression model, jointly applied to non-severe crashes (accidents involving material damage only) and severe crashes (fatal and injury accidents only), was used to model the frequency of accident occurrence. The year effect on severe crashes was also analyzed by the Random Effects Binomial regression model and the Negative Multinomial regression model. Regression parameters were estimated by the Maximum Likelihood Method. The Cumulative Residual Method was used to test the adequacy of the regression model through the range of annual average daily traffic per lane. The candidate set of variables was: tunnel length (L), annual average daily traffic per lane (AADTL), percentage of trucks (%Tr), number of lanes (NL), and the presence of a sidewalk. Both for non-severe and severe crashes, the prediction models showed that the significant variables are L, AADTL, %Tr, and NL. A significant year effect consisting in a systematic reduction of severe crashes over time was also detected. 
The analysis developed in this paper appears to be useful for many applications such as the estimation of accident reductions due to improvement in existing

  1. Object-Oriented Bayesian Networks (OOBN) for Aviation Accident Modeling and Technology Portfolio Impact Assessment

    NASA Technical Reports Server (NTRS)

    Shih, Ann T.; Ancel, Ersin; Jones, Sharon M.

    2012-01-01

    The concern for reducing aviation safety risk is rising as the National Airspace System in the United States transforms to the Next Generation Air Transportation System (NextGen). The NASA Aviation Safety Program is committed to developing an effective aviation safety technology portfolio to meet the challenges of this transformation and to mitigate relevant safety risks. The paper focuses on the reasoning behind selecting Object-Oriented Bayesian Networks (OOBN) as the technique and commercial software for the accident modeling and portfolio assessment. To illustrate the benefits of OOBN in a large and complex aviation accident model, the in-flight Loss-of-Control Accident Framework (LOCAF), constructed as an influence diagram, is presented. An OOBN approach not only simplifies the construction and maintenance of complex causal networks for the modelers, but also offers a well-organized hierarchical network that makes it easier for decision makers to use the model to examine the effectiveness of risk mitigation strategies through technology insertions.

  2. Model aids cuttings transport prediction

    SciTech Connect

    Gavignet, A.A. ); Sobey, I.J. )

    1989-09-01

    Drilling of highly deviated wells can be complicated by the formation of a thick bed of cuttings at low flow rates. The model proposed in this paper shows what mechanisms control the thickness of such a bed, and the model predictions are compared with experimental results.

  3. Final safety analysis report for the Galileo Mission: Volume 2: Book 1, Accident model document

    SciTech Connect

    Not Available

    1988-12-15

    The Accident Model Document (AMD) is the second volume of the three volume Final Safety Analysis Report (FSAR) for the Galileo outer planetary space science mission. This mission employs Radioisotope Thermoelectric Generators (RTGs) as the prime electrical power sources for the spacecraft. Galileo will be launched into Earth orbit using the Space Shuttle and will use the Inertial Upper Stage (IUS) booster to place the spacecraft into an Earth escape trajectory. The RTGs employ silicon-germanium thermoelectric couples to produce electricity from the heat energy that results from the decay of the radioisotope fuel, Plutonium-238, used in the RTG heat source. The heat source configuration used in the RTGs is termed General Purpose Heat Source (GPHS), and the RTGs are designated GPHS-RTGs. The use of radioactive material in these missions necessitates evaluations of the radiological risks that may be encountered by launch complex personnel as well as by the Earth's general population resulting from postulated malfunctions or failures occurring in the mission operations. The FSAR presents the results of a rigorous safety assessment, including substantial analyses and testing, of the launch and deployment of the RTGs for the Galileo mission. This AMD is a summary of the potential accident and failure sequences which might result in fuel release, the analysis and testing methods employed, and the predicted source terms. Each source term consists of a quantity of fuel released, the location of release and the physical characteristics of the fuel released. Each source term has an associated probability of occurrence. 27 figs., 11 tabs.
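
    Because each source term pairs a release quantity with an occurrence probability, risk summaries reduce to probability-weighted sums. The sketch below uses entirely hypothetical sequences and values, not the FSAR's actual source terms.

```python
# Hypothetical accident sequences: (probability, fuel release in curies).
# Values are illustrative only, not the Galileo FSAR's source terms.
source_terms = [
    (1.0e-3, 0.0),     # abort with no fuel release
    (5.0e-5, 120.0),   # booster failure, partial release
    (1.0e-6, 2400.0),  # reentry breakup, larger release
]

# Expected (probability-weighted) release across all sequences:
expected_release = sum(p * q for p, q in source_terms)

# Total probability that any fuel release occurs at all:
p_any_release = sum(p for p, q in source_terms if q > 0)
```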

  4. Development of fission-products transport model in severe-accident scenarios for Scdap/Relap5

    NASA Astrophysics Data System (ADS)

    Honaiser, Eduardo Henrique Rangel

    The understanding and estimation of the release of fission products during a severe accident became one of the priorities of the nuclear community after the Three Mile Island Unit 2 (TMI-2) accident in 1979 and the Chernobyl accident in 1986. Since then, theoretical developments and experiments have shown that the primary circuit systems of light water reactors (LWRs) can attenuate the release of fission products, a fact that had previously been neglected. An advanced tool, compatible with nuclear thermal-hydraulics integral codes, is developed to predict the retention and physical evolution of the fission products in the primary circuit of LWRs, without considering chemistry effects. The tool embodies state-of-the-art models for the phenomena involved and develops new models. The capabilities acquired after implementing this tool in the Scdap/Relap5 code can be used to increase the accuracy of level 2 probabilistic safety assessment (PSA), enhance reactor accident management procedures, and support the design of new emergency safety features.

  5. Modeling the early-phase redistribution of radiocesium fallouts in an evergreen coniferous forest after Chernobyl and Fukushima accidents.

    PubMed

    Calmon, P; Gonze, M-A; Mourlon, Ch

    2015-10-01

    Following the Chernobyl accident, the scientific community gained numerous data on the transfer of radiocesium in European forest ecosystems, including information regarding the short-term redistribution of atmospheric fallout onto forest canopies. In the course of international programs, the French Institute for Radiological Protection and Nuclear Safety (IRSN) developed a forest model, named TREE4 (Transfer of Radionuclides and External Exposure in FORest systems), 15 years ago. Recently published papers on a Japanese evergreen coniferous forest contaminated by Fukushima radiocesium fallout provide interesting quantitative data on radioactive mass fluxes measured within the forest in the months following the accident. The present study determined whether the approach adopted in the TREE4 model provides satisfactory results for Japanese forests or whether it requires adjustments. This study focused on the interception of airborne radiocesium by the forest canopy and the subsequent transfer to the forest floor through processes such as litterfall, throughfall, and stemflow in the months following the accident. We demonstrated that TREE4 quite satisfactorily predicted the interception fraction (20%) and the canopy-to-soil transfer (70% of the total deposit in 5 months) in the Tochigi forest. These dynamics were similar to those observed in the Höglwald spruce forest. However, the unexpectedly high contribution of litterfall (31% in 5 months) in the Tochigi forest could not be reproduced in our simulations (2.5%). Possible reasons for this discrepancy are discussed, and the sensitivity of the results to uncertainty in the deposition conditions was analyzed.

  6. Final safety analysis report for the Galileo Mission: Volume 2, Book 2: Accident model document: Appendices

    SciTech Connect

    Not Available

    1988-12-15

    This section of the Accident Model Document (AMD) presents the appendices which describe the various analyses that have been conducted for use in the Galileo Final Safety Analysis Report II, Volume II. Included in these appendices are the approaches, techniques, conditions and assumptions used in the development of the analytical models plus the detailed results of the analyses. Also included in these appendices are summaries of the accidents and their associated probabilities and environment models taken from the Shuttle Data Book (NSTS-08116), plus summaries of the several segments of the recent GPHS safety test program. The information presented in these appendices is used in Section 3.0 of the AMD to develop the Failure/Abort Sequence Trees (FASTs) and to determine the fuel releases (source terms) resulting from the potential Space Shuttle/IUS accidents throughout the missions.

  7. Hydrometeorological model for streamflow prediction

    USGS Publications Warehouse

    Tangborn, Wendell V.

    1979-01-01

    The hydrometeorological model described in this manual was developed to predict seasonal streamflow from water in storage in a basin using streamflow and precipitation data. The model, as described, applies specifically to the Skokomish, Nisqually, and Cowlitz Rivers, in Washington State, and more generally to streams in other regions that derive seasonal runoff from melting snow. Thus the techniques demonstrated for these three drainage basins can be used as a guide for applying this method to other streams. Input to the computer program consists of daily averages of gaged runoff of these streams, and daily values of precipitation collected at Longmire, Kid Valley, and Cushman Dam. Predictions are based on estimates of the absolute storage of water, predominately as snow: storage is approximately equal to basin precipitation less observed runoff. A pre-forecast test season is used to revise the storage estimate and improve the prediction accuracy. To obtain maximum prediction accuracy for operational applications with this model , a systematic evaluation of several hydrologic and meteorologic variables is first necessary. Six input options to the computer program that control prediction accuracy are developed and demonstrated. Predictions of streamflow can be made at any time and for any length of season, although accuracy is usually poor for early-season predictions (before December 1) or for short seasons (less than 15 days). The coefficient of prediction (CP), the chief measure of accuracy used in this manual, approaches zero during the late autumn and early winter seasons and reaches a maximum of about 0.85 during the spring snowmelt season. (Kosco-USGS)
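
    The model's central quantity, basin water storage, is simply cumulative precipitation minus cumulative observed runoff. A minimal sketch with hypothetical daily series (both expressed in mm of water over the basin):

```python
def basin_storage(daily_precip_mm, daily_runoff_mm):
    """Running storage estimate: cumulative precipitation minus
    cumulative observed runoff, day by day."""
    storage, series = 0.0, []
    for p, r in zip(daily_precip_mm, daily_runoff_mm):
        storage += p - r
        series.append(storage)
    return series

# Hypothetical winter accumulation: precipitation stored mostly as snow,
# with modest baseflow runoff draining the basin.
precip = [12.0, 0.0, 8.0, 20.0, 0.0]
runoff = [ 2.0, 1.5, 1.0,  3.0, 2.5]
print(basin_storage(precip, runoff)[-1])   # net storage after 5 days
```

    A seasonal streamflow forecast then regresses observed seasonal runoff against this storage estimate over historical years, with the pre-forecast test season used to correct the storage estimate.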

  8. Illustration interface of accident progression in PWR by quick inference based on multilevel flow models

    SciTech Connect

    Yoshikawa, H.; Ouyang, J.; Niwa, Y.

    2006-07-01

    In this paper, a new accident inference method is proposed using a goal- and function-oriented modeling method called the Multilevel Flow Model, focusing on explaining the causal-consequence relations and the purpose of automatic actions during a nuclear power plant accident. Users can easily grasp how the various plant parameters will behave and how the various safety facilities will be activated sequentially to cope with the accident until the plant settles into a safe (i.e., shutdown) state. The applicability of the developed method was validated by conducting an internet-based 'view' experiment with voluntary respondents; in the future, further elaboration of the interface design and additional instructional content will be developed to make it a usable CAI system. (authors)

  9. Radiological assessment by compartment model POSEIDON-R of radioactivity released in the ocean following Fukushima Daiichi accident

    NASA Astrophysics Data System (ADS)

    Bezhenar, Roman; Maderich, Vladimir; Heling, Rudie; Jung, Kyung Tae; Myoung, Jung-Goo

    2013-04-01

    The modified compartment model POSEIDON-R (Lepicard et al., 2004) was applied to the North-Western Pacific and adjacent seas. This is the first time a compartment model has been used in this region, where 25 nuclear power plants (NPPs) are operated. The aim of this study is to perform a radiological assessment of the releases of radioactivity due to the Fukushima Daiichi accident. The model predicts the dispersion of radioactivity in the water column and in the sediments, the transfer of radionuclides throughout the marine food web, and the subsequent doses to the population due to the consumption of fishery products. A generic predictive dynamical food-chain model is used instead of the concentration factor (CF) approach. The radionuclide uptake model for fish has as its central feature the accumulation of radionuclides in the target tissue. The three-layer structure of the water column makes it possible to describe deep-water transport adequately. In total, 175 boxes cover the Northwestern Pacific, the East China Sea, the Yellow Sea, and the East/Japan Sea. Water fluxes between boxes were calculated by averaging three-dimensional currents obtained with the hydrodynamic model ROMS over a 10-year period. Tidal mixing between boxes was parameterized. The model was validated against observations of Cs-137 in water for the period 1945-2004. The source terms from nuclear weapon tests comprise a regional term from the bomb tests at Enewetak and Bikini Atolls and global deposition from weapons tests. The correlation coefficient between predicted and observed concentrations of Cs-137 in the surface water is 0.925 and RMSE=1.43 Bq/m3. A local-scale coastal box was used, according to POSEIDON's methodology, to describe local processes of activity transport, deposition, and the food web around the Fukushima Daiichi NPP. 
The source term to the ocean from the Fukushima accident includes a 10-days release of Cs-134 (5 PBq) and Cs-137 (4 PBq) directly into the ocean and 6 and 5 PBq of Cs-134 and
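
    The core of a compartment (box) model is an exchange of activity between boxes driven by water fluxes, plus radioactive decay. A minimal two-box Euler sketch with hypothetical volumes and flux (not POSEIDON-R's actual box geometry):

```python
import math

def simulate_boxes(c1, c2, v1, v2, flux, lam, dt, steps):
    """Two-box exchange: activity moved per step is flux*(C1 - C2)*dt,
    followed by radioactive decay with constant lam. Concentrations in
    Bq/m^3, volumes in m^3, flux in m^3/s, dt in s."""
    for _ in range(steps):
        transfer = flux * (c1 - c2) * dt       # activity moved 1 -> 2 (Bq)
        a1 = c1 * v1 - transfer
        a2 = c2 * v2 + transfer
        decay = math.exp(-lam * dt)
        c1, c2 = a1 * decay / v1, a2 * decay / v2
    return c1, c2

# Cs-137 decay constant from its ~30.1-year half-life:
lam = math.log(2) / (30.1 * 365.25 * 86400)

# Contaminated coastal box equilibrating with a larger offshore box
# over one year of hourly steps (all values illustrative):
c1, c2 = simulate_boxes(10.0, 0.0, 1e9, 2e9, 1e4, lam, 3600.0, 24 * 365)
```

    The two concentrations relax toward a common equilibrium while total activity declines only through decay; a full model chains many such boxes and adds sediment and food-web transfers.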

  10. What do saliency models predict?

    PubMed Central

    Koehler, Kathryn; Guo, Fei; Zhang, Sheng; Eckstein, Miguel P.

    2014-01-01

    Saliency models have been frequently used to predict eye movements made during image viewing without a specified task (free viewing). Use of a single image set to systematically compare free viewing to other tasks has never been performed. We investigated the effect of task differences on the ability of three models of saliency to predict the performance of humans viewing a novel database of 800 natural images. We introduced a novel task where 100 observers made explicit perceptual judgments about the most salient image region. Other groups of observers performed a free viewing task, saliency search task, or cued object search task. Behavior on the popular free viewing task was not best predicted by standard saliency models. Instead, the models most accurately predicted the explicit saliency selections and eye movements made while performing saliency judgments. Observers' fixations varied similarly across images for the saliency and free viewing tasks, suggesting that these two tasks are related. The variability of observers' eye movements was modulated by the task (lowest for the object search task and greatest for the free viewing and saliency search tasks) as well as the clutter content of the images. Eye movement variability in saliency search and free viewing might be also limited by inherent variation of what observers consider salient. Our results contribute to understanding the tasks and behavioral measures for which saliency models are best suited as predictors of human behavior, the relationship across various perceptual tasks, and the factors contributing to observer variability in fixational eye movements. PMID:24618107

  11. PREDICTIVE MODELS. Enhanced Oil Recovery Model

    SciTech Connect

    Ray, R.M.

    1992-02-26

    PREDICTIVE MODELS is a collection of five models - CFPM, CO2PM, ICPM, PFPM, and SFPM - used in the 1982-1984 National Petroleum Council study of enhanced oil recovery (EOR) potential. Each pertains to a specific EOR process designed to squeeze additional oil from aging or spent oil fields. The processes are: 1 chemical flooding, where soap-like surfactants are injected into the reservoir to wash out the oil; 2 carbon dioxide miscible flooding, where carbon dioxide mixes with the lighter hydrocarbons making the oil easier to displace; 3 in-situ combustion, which uses the heat from burning some of the underground oil to thin the product; 4 polymer flooding, where thick, cohesive material is pumped into a reservoir to push the oil through the underground rock; and 5 steamflood, where pressurized steam is injected underground to thin the oil. CFPM, the Chemical Flood Predictive Model, models micellar (surfactant)-polymer floods in reservoirs, which have been previously waterflooded to residual oil saturation. Thus, only true tertiary floods are considered. An option allows a rough estimate of oil recovery by caustic or caustic-polymer processes. CO2PM, the Carbon Dioxide miscible flooding Predictive Model, is applicable to both secondary (mobile oil) and tertiary (residual oil) floods, and to either continuous CO2 injection or water-alternating gas processes. ICPM, the In-situ Combustion Predictive Model, computes the recovery and profitability of an in-situ combustion project from generalized performance predictive algorithms. PFPM, the Polymer Flood Predictive Model, is switch-selectable for either polymer or waterflooding, and an option allows the calculation of the incremental oil recovery and economics of polymer relative to waterflooding. SFPM, the Steamflood Predictive Model, is applicable to the steam drive process, but not to cyclic steam injection (steam soak) processes.

  12. Predictive Models of Liver Cancer

    EPA Science Inventory

    Predictive models of chemical-induced liver cancer face the challenge of bridging causative molecular mechanisms to adverse clinical outcomes. The latent sequence of intervening events from chemical insult to toxicity are poorly understood because they span multiple levels of bio...

  13. Using meteorological ensembles for atmospheric dispersion modelling of the Fukushima nuclear accident

    NASA Astrophysics Data System (ADS)

    Périllat, Raphaël; Korsakissok, Irène; Mallet, Vivien; Mathieu, Anne; Sekiyama, Thomas; Didier, Damien; Kajino, Mizuo; Igarashi, Yasuhito; Adachi, Kouji

    2016-04-01

    Dispersion models are used in response to an accidental release of radionuclides to the atmosphere to inform mitigation actions, and they complement field measurements for the assessment of short- and long-term environmental and sanitary impacts. However, the predictions of these models are subject to significant uncertainties, especially in input data such as meteorological fields or the source term. This is still the case more than four years after the Fukushima disaster (Korsakissok et al., 2012; Girard et al., 2014). In the framework of the SAKURA project, an MRI-IRSN collaboration, a meteorological ensemble of 20 members designed by MRI (Sekiyama et al., 2013) was used with IRSN's atmospheric dispersion models. Another ensemble, retrieved from ECMWF and comprising 50 members, was also used for comparison. The MRI ensemble is 3-hour assimilated, with a 3-kilometer resolution, designed to reduce the meteorological uncertainty in the Fukushima case. The ECMWF ensemble is a 24-hour forecast on a coarser grid, representative of the uncertainty of the data available in a crisis context. First, it was necessary to assess the quality of the ensembles for our purpose, to ensure that their spread was representative of the uncertainty in the meteorological fields. Meteorological observations made it possible to characterize the ensembles' spread with tools such as Talagrand diagrams. Then, the uncertainty was propagated through atmospheric dispersion models. The underlying question is whether the output spread is larger than the input spread; that is, whether small uncertainties in meteorological fields can produce large differences in atmospheric dispersion results. Here again, the use of field observations was crucial in order to characterize the spread of the ensemble of atmospheric dispersion simulations. In the case of the Fukushima accident, gamma dose rates, air activities and deposition data were available. Based on these data, selection criteria for the ensemble members were
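
    The Talagrand (rank) diagram mentioned above is simple to compute: for each observation, count how many ensemble members fall below it, then histogram those ranks; a flat histogram suggests the ensemble spread matches reality. A sketch with hypothetical 3-member forecasts:

```python
def talagrand_ranks(ensemble_forecasts, observations):
    """Rank of each observation within its sorted ensemble members.
    Returns counts over the n_members + 1 possible ranks; a flat
    histogram indicates a well-calibrated ensemble spread."""
    counts = [0] * (len(ensemble_forecasts[0]) + 1)
    for members, obs in zip(ensemble_forecasts, observations):
        rank = sum(1 for m in members if m < obs)
        counts[rank] += 1
    return counts

# Hypothetical 3-member wind-speed forecasts (m/s) and observations:
forecasts = [[2.0, 3.5, 5.0], [1.0, 1.5, 2.0], [4.0, 6.0, 8.0]]
obs = [4.2, 0.5, 9.0]
print(talagrand_ranks(forecasts, obs))  # -> [1, 0, 1, 1]
```

    A U-shaped histogram (observations piling up in the extreme ranks, as in this tiny example) indicates an under-dispersive ensemble whose spread is too small.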

  14. Severe Nuclear Accident Program (SNAP) - a real time model for accidental releases

    SciTech Connect

    Saltbones, J.; Foss, A.; Bartnicki, J.

    1996-12-31

    The Severe Nuclear Accident Program (SNAP) has been developed at the Norwegian Meteorological Institute (DNMI) in Oslo to provide decision makers and government officials with a real-time tool for simulating large accidental releases of radioactivity from nuclear power plants or other sources. SNAP is developed in the Lagrangian framework, in which atmospheric transport of radioactive pollutants is simulated by emitting a large number of particles from the source. The main advantage of the Lagrangian approach is the possibility of precisely parameterizing advection processes, especially close to the source. SNAP can be used to predict the transport and deposition of a radioactive cloud into the future (up to 48 hours in the present version) or to analyze the behavior of the cloud in the past. It is also possible to run the model in a mixed mode (partly analysis and partly forecast). In the routine run, unit (1 g s{sup -1}) emission is assumed in each of three classes. This assumption is very convenient for the main user of the model output in case of emergency, the Norwegian Radiation Protection Agency. Owing to the linearity of the model equations, the user can test different emission scenarios as a post-processing task by assigning different weights to the concentration and deposition fields corresponding to each of the three emission classes. SNAP is fully operational and can be run by the meteorologist on duty at any time. The output from SNAP has two forms: first, on maps of Europe, or selected parts of Europe, individual particles are shown during the simulation period; second, immediately after the simulation, concentration/deposition fields can be shown for every three hours of the simulation period as isoline maps for each emission class. In addition, concentration and deposition maps, as well as some meteorological data, are stored on a publicly accessible disk for further processing by the model users.
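
    The Lagrangian idea behind such models can be sketched in a few lines: each particle is advected by the wind and spread by a random walk standing in for turbulent diffusion. This is a generic illustration under simplified assumptions (constant wind, Gaussian diffusion), not SNAP's actual parameterization.

```python
import math
import random

def advect_particles(n_particles, wind_u, wind_v, sigma, dt, steps, seed=0):
    """Lagrangian transport sketch: each particle is advected by a constant
    wind (wind_u, wind_v in m/s) and spread by a Gaussian random walk with
    intensity sigma (m/sqrt(s)) representing turbulent diffusion."""
    rng = random.Random(seed)
    particles = [(0.0, 0.0)] * n_particles   # all released at the source
    for _ in range(steps):
        particles = [
            (x + wind_u * dt + rng.gauss(0.0, sigma) * math.sqrt(dt),
             y + wind_v * dt + rng.gauss(0.0, sigma) * math.sqrt(dt))
            for x, y in particles
        ]
    return particles

# 1000 particles, a 5 m/s westerly wind, one hour in 60-second steps:
cloud = advect_particles(1000, 5.0, 0.0, 2.0, 60.0, 60)
mean_x = sum(x for x, _ in cloud) / len(cloud)   # drifts ~ 5 m/s * 3600 s
```

    Gridding the particle positions (weighted by the emitted mass each particle represents) yields the concentration and deposition fields that the model outputs.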

  15. A dynamic model for evaluating radionuclide distribution in forests from nuclear accidents.

    PubMed

    Schell, W R; Linkov, I; Myttenaere, C; Morel, B

    1996-03-01

    The Chernobyl Nuclear Power Plant accident in 1986 caused radionuclide contamination in most countries of Eastern and Western Europe. A prime example is Belarus, where 23% of the total land area received chronic contamination levels; about 1.5 x 10(6) ha of forested lands were contaminated with 40-190 kBq m-2, and 2.5 x 10(4) ha received greater than 1,480 kBq m-2 of 137Cs and other long-lived radionuclides such as 90Sr and 239,240Pu. Since the radiological dose to the forest ecosystem will tend to accumulate over long time periods (decades to centuries), we need to determine what countermeasures can be taken to limit this dose so that the affected regions can, once again, safely provide habitat and natural forest products. To address some of these problems, our initial objective is to formulate a generic model, FORESTPATH, which describes the major kinetic processes and pathways of radionuclide movement in forests and natural ecosystems and which can be used to predict future radionuclide concentrations. The model calculates the time-dependent radionuclide concentrations in different compartments of the forest ecosystem based on the information available on residence half-times in two forest types: coniferous and deciduous. The results show that the model reproduces well the radionuclide cycling patterns found in the literature for deciduous and coniferous forests. Variability analysis was used to assess the relative importance of specific parameter values in the generic model's performance. The FORESTPATH model can be easily adjusted for site-specific applications.
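
    A residence-half-time compartment model of the FORESTPATH type can be sketched as a linear chain of first-order transfers. The compartments and half-times below are invented for the demo, not the paper's calibrated values.

```python
import math

def simulate_chain(inventory, half_times, dt=0.05, t_end=50.0):
    """Forward-Euler integration of a linear compartment chain in which the
    transfer rate out of compartment i is k_i = ln(2) / T_half_i; material
    leaving the last compartment goes to an implicit sink (e.g. deep soil).
    Radioactive decay is omitted to keep the sketch short."""
    ks = [math.log(2.0) / th for th in half_times]
    inv = list(inventory)
    for _ in range(int(t_end / dt)):
        flux = [k * c * dt for k, c in zip(ks, inv)]
        for i in range(len(inv)):
            inv[i] -= flux[i]
            if i + 1 < len(inv):
                inv[i + 1] += flux[i]
    return inv

# Illustrative three-compartment chain (canopy -> litter -> organic soil)
# for a 100 kBq m-2 deposit; half-times in years are made up for the demo.
final = simulate_chain([100.0, 0.0, 0.0], half_times=[2.0, 5.0, 30.0])
```

    The same structure scales to any number of compartments; site-specific application amounts to swapping in measured residence half-times.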

  16. A dynamic model to estimate the activity concentration and whole body dose rate of marine biota as consequences of a nuclear accident.

    PubMed

    Keum, Dong-Kwon; Jun, In; Kim, Byeong-Ho; Lim, Kwang-Muk; Choi, Yong-Ho

    2015-02-01

    This paper describes a dynamic compartment model (K-BIOTA-DYN-M) to assess the activity concentration and whole body dose rate of marine biota as a result of a nuclear accident. The model considers the transport of radioactivity between the marine biota through the food chain, and applies a first-order kinetic model for the sedimentation of radionuclides from seawater onto sediment. A set of ordinary differential equations representing the model is solved simultaneously to calculate the activity concentrations of the biota and the sediment, and subsequently the dose rates, given the seawater activity concentration. The model was applied to investigate the long-term effect of the Fukushima nuclear accident on the marine biota using (131)I, (134)Cs, and (137)Cs activity concentrations of seawater measured for up to about 2.5 years after the accident at two locations in the port of the Fukushima Daiichi Nuclear Power Station (FDNPS), which was the most highly contaminated area. The predicted results showed that the accumulated dose for 3 months after the accident was about 4-4.5 Gy, indicating the possibility of an acute radiation effect in the early phase after the Fukushima accident; however, the total dose rate for most organisms studied was usually below the UNSCEAR (United Nations Scientific Committee on the Effects of Atomic Radiation) benchmark level for chronic exposure except during the initial phase of the accident, suggesting a very limited radiological effect on the marine biota at the population level. The predicted Cs sediment activity from the first-order kinetic model for sedimentation was in good agreement with the measured activity concentration. By varying the ecological parameter values, the present model was able to predict the widely scattered (137)Cs activity concentrations of fishes measured in the port of FDNPS. Conclusively, the present dynamic model can be usefully applied to estimate the activity concentration and whole body dose rate of marine biota after a nuclear accident.

  17. Traffic accidents in a cellular automaton model with a speed limit zone

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Yang, Xian-qing; Sun, Da-peng; Qiu, Kang; Xia, Hui

    2006-07-01

    In this paper, we numerically study the probability Pac of the occurrence of car accidents in the Nagel-Schreckenberg (NS) model with a speed limit zone. Numerical results show that in the deterministic NS model the probability Pac is determined by the maximum speed v'max of the speed limit zone but is independent of the length Lv of the zone. In the nondeterministic NS model, however, Pac is determined not only by the maximum speed v'max but also by the length Lv. In the low-density region, Pac increases with the maximum speed of the speed limit zone and decreases with the length of the zone. In the case of v'max = 1, however, Pac increases with the length in the low-density region but decreases in the interval between the low-density and high-density regions. The speed limit zone also causes an inhomogeneous distribution of car accidents over the whole road. Theoretical analyses agree with the numerical results in the nondeterministic NS model with v'max = 1 and vmax = 5.
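
    The NS dynamics with a local speed limit can be sketched in a few lines. This is a minimal NaSch implementation with an invented "dangerous situation" proxy (a moving car whose gap exactly equals its speed), not the paper's exact accident criterion Pac.

```python
import random

def nasch_run(L=100, n_cars=20, zone=(40, 60), v_zone=2, v_free=5,
              p_slow=0.3, n_steps=200, seed=1):
    """Nagel-Schreckenberg model on a ring road of L cells with a reduced
    speed limit inside `zone`.  Returns a count of 'dangerous situations',
    here a simplified tailgating proxy rather than Pac itself."""
    rng = random.Random(seed)
    pos = sorted(rng.sample(range(L), n_cars))
    vel = [0] * n_cars
    local_vmax = lambda cell: v_zone if zone[0] <= cell < zone[1] else v_free
    danger = 0
    for _ in range(n_steps):
        gaps = [(pos[(i + 1) % n_cars] - pos[i] - 1) % L
                for i in range(n_cars)]
        for i in range(n_cars):
            v = min(vel[i] + 1, local_vmax(pos[i]))   # accelerate to limit
            v = min(v, gaps[i])                       # never hit the leader
            if v > 0 and rng.random() < p_slow:       # random braking
                v -= 1
            vel[i] = v
        for i in range(n_cars):
            if vel[i] > 0 and gaps[i] == vel[i]:      # full tailgating
                danger += 1
            pos[i] = (pos[i] + vel[i]) % L
    return danger

danger = nasch_run()
```

    Sweeping v_zone, the zone bounds, and the car density in this sketch reproduces the kind of parameter study the abstract reports.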

  18. Input-output model for MACCS nuclear accident impacts estimation

    SciTech Connect

    Outkin, Alexander V.; Bixler, Nathan E.; Vargas, Vanessa N

    2015-01-27

    Since the original economic model for MACCS was developed, better-quality economic data (as well as the tools to gather and process it) and better computational capabilities have become available. The update of the economic impacts component of the MACCS legacy model will provide improved estimates of business disruptions through the use of Input-Output based economic impact estimation. This paper presents an updated MACCS model, based on Input-Output methodology, in which economic impacts are calculated using the Regional Economic Accounting analysis tool (REAcct) created at Sandia National Laboratories. This new GDP-based model allows quick and consistent estimation of gross domestic product (GDP) losses due to nuclear power plant accidents. This paper outlines the steps taken to combine the REAcct Input-Output-based model with the MACCS code, describes the GDP loss calculation, and discusses the parameters and modeling assumptions necessary for the estimation of the long-term effects of nuclear power plant accidents.
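
    At the core of any Input-Output impact estimate is the Leontief relation x = (I - A)^-1 d, which converts a direct final-demand loss into a total (direct plus indirect) output loss. A minimal sketch with an invented two-sector technology matrix (not REAcct's data) follows.

```python
def leontief_total_output(A, demand):
    """Solve x = A x + d, i.e. x = (I - A)^-1 d, by Gaussian elimination.
    A[i][j] is the input from sector i needed per unit output of sector j."""
    n = len(A)
    # Augment (I - A) with the demand vector d.
    M = [[(1.0 if i == j else 0.0) - A[i][j] for j in range(n)] + [demand[i]]
         for i in range(n)]
    for col in range(n):                       # forward elimination
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):             # back substitution
        s = M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = s / M[r][r]
    return x

# Hypothetical 2-sector region: a shutdown removes $10M of final demand
# from sector 0; the Leontief inverse spreads it through the supply chain.
A = [[0.2, 0.3],
     [0.1, 0.4]]
loss = leontief_total_output(A, [10.0, 0.0])
```

    The total loss exceeds the direct $10M shock because each sector's lost output also removes demand for its suppliers' output.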

  19. Statistical aspects of carbon fiber risk assessment modeling. [fire accidents involving aircraft

    NASA Technical Reports Server (NTRS)

    Gross, D.; Miller, D. R.; Soland, R. M.

    1980-01-01

    The probabilistic and statistical aspects of the carbon fiber risk assessment modeling of fire accidents involving commercial aircraft are examined. Three major sources of uncertainty in the modeling effort are identified: (1) imprecise knowledge in establishing the model; (2) parameter estimation; and (3) Monte Carlo sampling error. All three sources of uncertainty are treated, and statistical procedures are utilized and/or developed to control them wherever possible.

  20. MODELING OF 2LIBH4 PLUS MGH2 HYDROGEN STORAGE SYSTEM ACCIDENT SCENARIOS USING EMPIRICAL AND THEORETICAL THERMODYNAMICS

    SciTech Connect

    James, C.; Tamburello, D.; Gray, J.; Brinkman, K.; Hardy, B.; Anton, D.

    2009-04-01

    It is important to understand and quantify the potential risk resulting from accidental environmental exposure of condensed-phase hydrogen storage materials under differing exposure scenarios. This paper describes a modeling and experimental study aimed at predicting the consequences of the accidental release of 2LiBH{sub 4}+MgH{sub 2} from hydrogen storage systems. The methodology and results developed in this work are directly applicable to any solid hydride material and/or accident scenario using appropriate boundary conditions and empirical data. The ability to predict hydride behavior for hypothesized accident scenarios facilitates an assessment of the risk associated with the utilization of a particular hydride. To this end, an idealized finite volume model was developed to represent the behavior of dispersed hydride from a breached system. Semiempirical thermodynamic calculations and substantiating calorimetric experiments were performed in order to quantify the energy released and the energy release rates, and to quantify the reaction products resulting from water and air exposure of a lithium borohydride and magnesium hydride combination. The hydrides, LiBH{sub 4} and MgH{sub 2}, were studied individually in the as-received form and in the 2:1 'destabilized' mixture. Liquid water hydrolysis reactions were performed in a Calvet calorimeter equipped with a mixing cell, using neutral water. Water vapor and oxygen gas-phase reactivity measurements were performed at varying relative humidities and temperatures by modifying the calorimeter and utilizing a gas-circulating flow cell apparatus. The results of these calorimetric measurements were compared with standardized United Nations (UN) test results for air and water reactivity and used to develop quantitative kinetic expressions for hydrolysis and air oxidation in these systems.
Thermodynamic parameters obtained from these tests were then inputted into a computational fluid dynamics model to

  1. Real-time EEG-based detection of fatigue driving danger for accident prediction.

    PubMed

    Wang, Hong; Zhang, Chi; Shi, Tianwei; Wang, Fuwang; Ma, Shujun

    2015-03-01

    This paper proposes a real-time electroencephalogram (EEG)-based method for detecting potential danger during fatigue driving. To determine driver fatigue in real time, wavelet entropy with a sliding window and a pulse-coupled neural network (PCNN) were used to process the EEG signals in the visual area (the main information input route). To detect the fatigue danger, the neural mechanism of driver fatigue was analyzed. Functional brain networks were employed to track the impact of fatigue on the processing capacity of the brain. The results show that the overall functional connectivity of the subjects is weakened after prolonged driving tasks; this regularity is summarized as the fatigue convergence phenomenon. Based on the fatigue convergence phenomenon, we combined the input and global synchronizations of the brain to calculate the residual information processing capacity of the brain and obtain the dangerous points in real time. Finally, the danger detection system for driver fatigue based on the neural mechanism was validated using accident EEG data. The time distributions of the danger points output by the system agree well with those of the real accident points.
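
    A sliding-window entropy track, the first processing stage described above, can be sketched as follows. For brevity this uses plain Shannon entropy of binned amplitudes rather than wavelet entropy plus a PCNN; the window sizes are arbitrary.

```python
import math

def shannon_entropy(window, n_bins=8):
    """Shannon entropy (bits) of the binned amplitude distribution."""
    lo, hi = min(window), max(window)
    if hi == lo:
        return 0.0
    counts = [0] * n_bins
    for x in window:
        b = min(int((x - lo) / (hi - lo) * n_bins), n_bins - 1)
        counts[b] += 1
    n = len(window)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def sliding_entropy(signal, win=64, step=32):
    """Entropy track over a signal; in this toy version a sustained drop
    would serve as a crude fatigue indicator."""
    return [shannon_entropy(signal[i:i + win])
            for i in range(0, len(signal) - win + 1, step)]

track = sliding_entropy([math.sin(i / 5.0) for i in range(256)])
```

    A real pipeline would compute the entropy on wavelet coefficients of each EEG channel and feed the track into the downstream danger-point logic.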

  2. Accident Sequence Precursor Program Large Early Release Frequency Model Development

    SciTech Connect

    Brown, T.D.; Brownson, D.A.; Duran, F.A.; Gregory, J.J.; Rodrick, E.G.

    1999-01-04

    The objective of the ASP large early release frequency (LERF) model development work is to build a Level 2 containment response model that captures all of the events necessary to define LERF as outlined in Regulatory Guide 1.174, can be directly interfaced with the existing Level 1 models, is technically correct, can be readily modified to incorporate new information or to represent another plant, and can be executed in SAPHIRE. The ASP LERF models being developed will meet these objectives while providing the NRC with the capability to independently assess the risk impact of plant-specific changes proposed by utilities that alter a nuclear power plant's licensing basis. Together with the ASP Level 1 models, the ASP LERF models give the NRC the capability to perform equipment and event assessments and determine their impact on a plant's LERF for internal events during power operation. In addition, the ASP LERF models can be updated to reflect changes in information regarding system operations and phenomenological events, and to assess the potential for early fatalities for each LERF sequence. As the ASP Level 1 models evolve to include more analysis capabilities, the LERF models will be refined to reflect the level of detail needed to support the new capabilities. An approach was formulated for the development of detailed LERF models using the NUREG-1150 APET models as a guide. Modifications to the SAPHIRE computer code have allowed the development of these detailed models and the ability to analyze them in a reasonable time. Ten reference LERF plant models, including six PWR models and four BWR models covering a wide variety of containment and nuclear steam supply system designs, will be complete in 1999. These reference models will be used as the starting point for developing the LERF models for the remaining nuclear power plants.

  3. A nuclear plant accident diagnosis method to support prediction of errors of commission

    SciTech Connect

    Chang, Y. H. J.; Coyne, K.; Mosleh, A.

    2006-07-01

    The identification and mitigation of operator errors of commission (EOCs) continue to be a major focus of nuclear plant human reliability research. Current Human Reliability Analysis (HRA) methods for predicting EOCs generally rely on the availability of operating procedures or extensive use of expert judgment. Consequently, an analysis for EOCs cannot easily be performed for actions that may be taken outside the scope of the operating procedures. Additionally, current HRA techniques rarely capture an operator's 'creative' problem-solving behavior. However, a nuclear plant operator knowledge base developed for use with the IDAC (Information, Decision, and Action in Crew context) cognitive model shows potential for addressing these limitations. This operator knowledge base currently includes an event-symptom diagnosis matrix for a pressurized water reactor (PWR) nuclear plant. The diagnosis matrix defines a probabilistic relationship between observed symptoms and plant events that models the operator's heuristic process for classifying a plant state. Observed symptoms are obtained from a dynamic thermal-hydraulic plant model and can be modified to account for the limitations of human perception and cognition. A fuzzy-logic inference technique is used to calculate the operator's confidence, or degree of belief, that a given plant event has occurred based on the observed symptoms. An event diagnosis can be categorized as either (a) a generalized flow imbalance of basic thermal-hydraulic properties (e.g., a mass or energy flow imbalance in the reactor coolant system) or (b) a specific event type, such as a steam generator tube rupture or a reactor trip. When an operator is presented with incomplete or contradictory information, this diagnosis approach provides a means to identify situations where an operator might be misled into performing unsafe actions based on an incorrect diagnosis.
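
    The degree-of-belief calculation from a symptom-event matrix can be sketched with a max-min fuzzy composition, one common fuzzy-inference form; the matrix entries and symptom names below are invented, not taken from the IDAC knowledge base.

```python
def diagnose(symptoms, matrix):
    """Degree of belief in each plant event given observed symptom strengths
    in [0, 1], via max-min composition over a symptom-event matrix."""
    belief = {}
    for event, weights in matrix.items():
        belief[event] = max(min(symptoms.get(s, 0.0), w)
                            for s, w in weights.items())
    return belief

# Illustrative (hypothetical) matrix: how strongly each symptom
# supports each event, as judged by the analyst.
MATRIX = {
    "sg_tube_rupture": {"secondary_radiation": 0.9, "pzr_level_drop": 0.6},
    "loca":            {"containment_pressure": 0.9, "pzr_level_drop": 0.8},
}
belief = diagnose({"pzr_level_drop": 0.7, "secondary_radiation": 0.2}, MATRIX)
```

    With these inputs both events gain partial support from the shared pressurizer-level symptom, which is exactly the ambiguous situation in which an incorrect diagnosis, and hence an EOC, becomes credible.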
This knowledge base model could also support identification of potential EOCs when

  4. Generation IV benchmarking of TRISO fuel performance models under accident conditions. Modeling input data

    SciTech Connect

    Blaise Collin

    2014-09-01

    This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accident transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and to the subsequent comparison with post-irradiation examination (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: the modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release; the modeling of the AGR-1 and HFR-EU1bis safety testing experiments; and the comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, hereafter named the Numerical Calculation Case (NCC), is derived from "Case 5" of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so that participants can evaluate their codes at low fission product release. "Case 5" of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to "effects of the numerical calculation method rather than the physical model" [IAEA 2012]. The NCC is therefore intended to check whether these numerical effects persist. The first two steps imply the involvement of the benchmark participants in a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all input data necessary to model the benchmark cases, and to give methodology guidelines and recommendations in order to make all results suitable for comparison with each other.
The participants should read this document

  5. A flammability and combustion model for integrated accident analysis. [Advanced light water reactors

    SciTech Connect

    Plys, M.G.; Astleford, R.D.; Epstein, M.

    1988-01-01

    A model for the flammability characteristics and combustion of hydrogen and carbon monoxide mixtures is presented for application to severe accident analysis of Advanced Light Water Reactors (ALWRs). Flammability of general mixtures under thermodynamic conditions anticipated during a severe accident is quantified with a new correlation technique applied to data for several fuel and inertant mixtures, using accepted methods for combining these data. Combustion behavior is quantified by a mechanistic model consisting of a continuity and momentum balance for the burned gases, with an uncertainty parameter to match the idealized process to experiment. Benchmarks against experiment demonstrate the validity of this approach for a single recommended value of the flame flux multiplier parameter. The models presented here are equally applicable to the analysis of current LWRs. 21 refs., 16 figs., 6 tabs.
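
    One accepted method for combining flammability data of fuel mixtures, and a plausible ingredient of the correlation described above, is Le Chatelier's mixing rule. A sketch with textbook lower flammability limit (LFL) values (assumed here, and valid only for fuel-only mixtures without inertants):

```python
def le_chatelier_lfl(mole_fractions, lfl):
    """Lower flammability limit of a fuel mixture by Le Chatelier's rule:
    LFL_mix = 1 / sum(y_i / LFL_i), with y_i the fuel-basis mole fractions.
    Inertant effects (handled by the paper's own correlation) are ignored."""
    total = sum(mole_fractions)
    return 1.0 / sum((y / total) / l for y, l in zip(mole_fractions, lfl))

# 60% H2 (LFL ~4 vol% in air) + 40% CO (LFL ~12.5 vol% in air):
lfl_mix = le_chatelier_lfl([0.6, 0.4], [4.0, 12.5])
```

    The mixture limit necessarily falls between the pure-component limits, weighted toward the more flammable component.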

  6. Time series count data models: an empirical application to traffic accidents.

    PubMed

    Quddus, Mohammed A

    2008-09-01

    Count data are primarily categorised as cross-sectional, time series, and panel. Over the past decade, Poisson and Negative Binomial (NB) models have been widely used to analyse cross-sectional and time series count data, and random-effect and fixed-effect Poisson and NB models have been used to analyse panel count data. However, recent literature suggests that although the underlying distributional assumptions of these models are appropriate for cross-sectional count data, they are not capable of taking into account the serial correlation often found in pure time series count data. Real-valued time series models, such as the autoregressive integrated moving average (ARIMA) model introduced by Box and Jenkins, have been used in many applications over the last few decades. However, when modelling non-negative integer-valued data such as traffic accidents at a junction over time, Box-Jenkins models may be inappropriate, mainly because of the normality assumption of errors in the ARIMA model. Over the last few years, a new class of time series models known as integer-valued autoregressive (INAR) Poisson models has been studied by many authors. This class of models is particularly applicable to the analysis of time series count data, as these models retain the properties of Poisson regression while being able to deal with serial correlation, and therefore offer an alternative to the real-valued time series models. The primary objective of this paper is to introduce the class of INAR models for the time series analysis of traffic accidents in Great Britain. Different types of time series count data are considered: aggregated time series data, where both the spatial and temporal units of observation are relatively large (e.g., Great Britain and years), and disaggregated time series data, where both the spatial and temporal units are relatively small (e.g., the congestion charging zone and months).
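
    An INAR(1) process can be simulated directly from its definition, X_t = alpha o X_{t-1} + eps_t, where "o" is binomial thinning and eps_t is a Poisson innovation. The parameter values below are illustrative, not estimates from the British accident data.

```python
import math
import random

def simulate_inar1(alpha, lam, n, seed=0):
    """Simulate an INAR(1) count series: each of the X_{t-1} counts
    'survives' to time t with probability alpha (binomial thinning),
    plus fresh Poisson(lam) arrivals."""
    rng = random.Random(seed)

    def poisson(mean):
        # Knuth's multiplication method (fine for small means).
        limit, k, p = math.exp(-mean), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    series, x = [], poisson(lam / (1.0 - alpha))   # start near stationarity
    for _ in range(n):
        survivors = sum(1 for _ in range(x) if rng.random() < alpha)
        x = survivors + poisson(lam)
        series.append(x)
    return series

# Stationary mean of this model is lam / (1 - alpha) = 4 accidents per period.
counts = simulate_inar1(alpha=0.5, lam=2.0, n=2000)
```

    Unlike a Gaussian ARIMA, every simulated value is a non-negative integer, and the thinning term supplies the serial correlation the abstract emphasises.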
The performance of the INAR models is compared with the class of Box and

  7. A Time Series Model for Assessing the Trend and Forecasting the Road Traffic Accident Mortality

    PubMed Central

    Yousefzadeh-Chabok, Shahrokh; Ranjbar-Taklimie, Fatemeh; Malekpouri, Reza; Razzaghi, Alireza

    2016-01-01

    Background Road traffic accidents (RTAs) are one of the main causes of trauma and a growing public health concern worldwide, especially in developing countries. Assessing the trend of fatalities in past years and forecasting it enables appropriate planning for prevention and control. Objectives This study aimed to assess the trend of RTA mortality and forecast it for the coming years using time series modeling. Materials and Methods In this historical analytical study, RTA mortalities in Zanjan Province, Iran, were evaluated during 2007 - 2013. Time series analyses including Box-Jenkins models were used to assess the trend of accident fatalities in previous years and forecast it for the next 4 years. Results The mean age of the victims was 37.22 years (SD = 20.01). Of a total of 2571 deaths, 77.5% (n = 1992) were males and 22.5% (n = 579) were females. The study models showed a descending trend of fatalities over the study years. The SARIMA(1,1,3)(0,1,0)12 model was recognized as the best-fit model for forecasting the trend of fatalities. The forecasting model also showed a descending trend of traffic accident mortalities over the next 4 years. Conclusions There was a decreasing trend in the study years and the future years. It seems that the interventions implemented in the recent decade have had a positive effect on the decline of RTA fatalities. Nevertheless, more attention is still needed to prevent the occurrence of, and the mortalities related to, traffic accidents. PMID:27800467

  8. Initial VHTR accident scenario classification: models and data.

    SciTech Connect

    Vilim, R. B.; Feldman, E. E.; Pointer, W. D.; Wei, T. Y. C.; Nuclear Engineering Division

    2005-09-30

    Nuclear systems codes are being prepared for use as computational tools for conducting performance/safety analyses of the Very High Temperature Reactor. The thermal-hydraulic codes are RELAP5/ATHENA for one-dimensional systems modeling and FLUENT and/or Star-CD for three-dimensional modeling. We describe a formal qualification framework, the development of Phenomena Identification and Ranking Tables (PIRTs), the initial filtering of the experiment databases, and a preliminary screening of these codes for use in the performance/safety analyses. In the second year of this project we focused on the development of PIRTs. Two events that result in maximum fuel and vessel temperatures, the Pressurized Conduction Cooldown (PCC) event and the Depressurized Conduction Cooldown (DCC) event, were selected for PIRT generation. A third event that may result in significant thermal stresses, the Load Change event, was also selected for PIRT generation. Gas reactor design experience and engineering judgment were used to identify the important phenomena in the primary system for these events. Sensitivity calculations performed with the RELAP5 code were used as an aid to rank the phenomena in order of importance with respect to the approach of the plant response to safety limits. The overall code qualification methodology was illustrated by focusing on the Reactor Cavity Cooling System (RCCS). The mixed convection mode of heat transfer and pressure drop was identified as an important phenomenon for RCCS operation. Scaling studies showed that the mixed convection mode is likely to occur in the RCCS air duct during normal operation and during conduction cooldown events. The RELAP5/ATHENA code was found not to treat the mixed convection regime adequately. Readying the code will require adding models for the turbulent mixed convection regime, while possibly performing new experiments for the laminar mixed convection regime.
Candidate correlations for the turbulent

  9. Chernobyl and Fukushima nuclear accidents: what has changed in the use of atmospheric dispersion modeling?

    PubMed

    Benamrane, Y; Wybo, J-L; Armand, P

    2013-12-01

    The threat of a major accidental or deliberate event that would lead to the emission of hazardous materials into the atmosphere is a great cause of concern to societies. This is due to the potentially large scale of casualties and damage that could result from the release of explosive, flammable or toxic gases from industrial plants or transport accidents, radioactive material from nuclear power plants (NPPs), and chemical, biological, radiological or nuclear (CBRN) terrorist attacks. In order to respond efficiently to such events, emergency services and authorities resort to appropriate planning and organizational patterns. This paper focuses on the use of atmospheric dispersion modeling (ADM) as a support tool for emergency planning and response, to assess the propagation of the hazardous cloud and thereby take adequate countermeasures. This paper intends to illustrate the noticeable evolution in the operational use of ADM tools over 25 y, especially in emergency situations. The study is based on data available in scientific publications and is exemplified using the two most severe nuclear accidents: Chernobyl (1986) and Fukushima (2011). It appears that during the Chernobyl accident, ADM was used a few days after the beginning of the accident, mainly in a diagnostic approach trying to reconstruct what had happened, whereas 25 y later, ADM was also used during the first days and weeks of the Fukushima accident to anticipate the potentially threatened areas. We argue that recent developments in ADM tools play an increasing role in emergency and crisis management, by supporting stakeholders in anticipating, monitoring and assessing post-event damage. However, despite technological evolutions, its prognostic and diagnostic use in emergency situations still raises many issues.

  10. Simulation Modeling Requirements for Loss-of-Control Accident Prevention of Turboprop Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Crider, Dennis; Foster, John V.

    2012-01-01

    In-flight loss of control remains the leading contributor to aviation accident fatalities, with stall upsets being the leading causal factor. The February 12, 2009, Colgan Air, Inc., Continental Express flight 3407 accident outside Buffalo, New York, brought this issue to the forefront of public consciousness and resulted in recommendations from the National Transportation Safety Board to conduct training that incorporates fully developed stalls and to develop simulator standards to support such training. In 2010, Congress responded to this accident with Public Law 111-216 (Section 208), which mandates full stall training for Part 121 flight operations. Efforts are currently in progress to develop recommendations on the implementation of stall training for airline pilots. The International Committee on Aviation Training in Extended Envelopes (ICATEE) is currently defining the simulator fidelity standards that will be necessary for effective stall training. These recommendations will apply to all civil transport aircraft, including straight-wing turboprop aircraft. Government-funded research over the previous decade provides a strong foundation for stall/post-stall simulation of swept-wing, conventional-tail jets to respond to this mandate, but turboprops present additional and unique modeling challenges. First among these is the effect of power, which can provide enhanced flow attachment behind the propellers. Furthermore, turboprops tend to operate for longer periods in an environment more susceptible to ice. As a result, there have been a significant number of turboprop accidents resulting from early (lower angle of attack) stalls in icing. The vulnerability of turboprop configurations to icing has led to studies on ice accumulation and the resulting effects on flight behavior. Piloted simulations of these effects have highlighted the important training needs for recognition and mitigation of icing effects, including the reduction of stall margins.

  11. Development of a Gravid Uterus Model for the Study of Road Accidents Involving Pregnant Women.

    PubMed

    Auriault, F; Thollon, L; Behr, M

    2016-01-01

    Car accident simulations involving pregnant women are well documented in the literature and suggest that intra-uterine pressure could be responsible for the phenomenon of placental abruption, underlining the need for a realistic amniotic fluid model including fluid-structure interactions (FSI). This study reports the development and validation of an amniotic fluid model using an Arbitrary Lagrangian-Eulerian formulation in the LS-DYNA environment. Dedicated to the study of the mechanisms responsible for fetal injuries resulting from road accidents, the fluid model was validated using dynamic loading tests. Drop tests were performed on a deformable water-filled container at acceleration levels that would be experienced by a gravid uterus during a frontal car collision at 25 kph. During the test device braking phase, container deformation induced by inertial effects and FSI was recorded by kinematic analysis. These tests were then simulated in the LS-DYNA environment to validate the fluid model under dynamic loading, based on the container deformations. Finally, the coupling between the amniotic fluid model and an existing finite-element full-body pregnant woman model was validated in terms of pressure. To do so, experimental results from tests performed on four postmortem human surrogates (PMHS), in which a physical gravid uterus model was inserted, were used. The experimental intra-uterine pressure from these tests was compared to the intra-uterine pressure from a numerical simulation performed under the same loading conditions. The numerical and experimental free-fall responses are strongly correlated. The coupling of the amniotic fluid model with the pregnant woman model provides intra-uterine pressure values correlated with the experimental test responses. The use of an Arbitrary Lagrangian-Eulerian formulation allows the analysis of FSI between the amniotic fluid and the gravid uterus during a road accident involving pregnant women. PMID:26592419

  13. Predictive Modeling of Cardiac Ischemia

    NASA Technical Reports Server (NTRS)

    Anderson, Gary T.

    1996-01-01

    The goal of the Contextual Alarms Management System (CALMS) project is to develop sophisticated models to predict the onset of clinical cardiac ischemia before it occurs. The system will continuously monitor cardiac patients and set off an alarm when they appear about to suffer an ischemic episode. The models take as inputs information from patient history and combine it with continuously updated information extracted from blood pressure, oxygen saturation and ECG lines. Expert system, statistical, neural network and rough set methodologies are then used to forecast the onset of clinical ischemia before it transpires, thus allowing early intervention aimed at preventing morbid complications from occurring. The models will differ from previous attempts by including combinations of continuous and discrete inputs. A commercial medical instrumentation and software company has invested funds in the project with a goal of commercialization of the technology. The end product will be a system that analyzes physiologic parameters and produces an alarm when myocardial ischemia is present. If proven feasible, a CALMS-based system will be added to existing heart monitoring hardware.

  14. Numerical weather prediction model tuning via ensemble prediction system

    NASA Astrophysics Data System (ADS)

    Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.

    2011-12-01

    This paper discusses a novel approach to tuning the predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. Currently, numerical values of these parameters are specified manually. In a recent dual manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of the NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to the existing operational ensemble prediction infrastructure and is very cost-effective because practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to improved forecast skill. Second, results with an ensemble prediction system based on an atmospheric general circulation model show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, preliminary results of a global top-end NWP model tuning exercise are presented.
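
    The feedback loop described above can be sketched as a toy importance-weighting scheme: ensemble members draw parameter values from a Gaussian proposal, a likelihood against observations weights them, and the proposal is updated from the weighted sample. The quadratic "model", true parameter, and noise level below are invented for illustration; this is not the operational EPPES algorithm.

```python
import numpy as np

# Toy sketch of the EPPES feedback loop (not the operational algorithm):
# draw each ensemble member's parameter from a Gaussian proposal, weight
# members by a likelihood against observations, update the proposal.
rng = np.random.default_rng(0)

def model(theta, x):
    return theta * x ** 2          # stand-in for a parameterization scheme

x = np.linspace(0.0, 1.0, 50)
theta_true = 2.5
obs = model(theta_true, x) + rng.normal(0.0, 0.1, x.size)

mu, sigma = 1.0, 1.0               # initial proposal distribution
for cycle in range(20):            # one cycle per forecast verification
    thetas = rng.normal(mu, sigma, 30)              # ensemble of parameter values
    resid = obs - model(thetas[:, None], x)         # per-member residuals
    loglik = -0.5 * (resid ** 2).sum(axis=1) / 0.1 ** 2
    w = np.exp(loglik - loglik.max())
    w /= w.sum()
    mu = float((w * thetas).sum())                  # importance-weighted update
    sigma = max(float(np.sqrt((w * (thetas - mu) ** 2).sum())), 0.05)

print(f"estimated theta: {mu:.2f} (true value {theta_true})")
```

    The proposal mean drifts toward the parameter value that best fits the verifying observations, mirroring how EPPES feeds verification results back into the parameter distribution.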

  15. Modelling road accident blackspots data with the discrete generalized Pareto distribution.

    PubMed

    Prieto, Faustino; Gómez-Déniz, Emilio; Sarabia, José María

    2014-10-01

    This study shows how road traffic network events, in particular road accidents at blackspots, can be modelled with simple probabilistic distributions. We considered the number of crashes and the number of fatalities at Spanish blackspots in the period 2003-2007, from the Spanish General Directorate of Traffic (DGT). We modelled those datasets, respectively, with the discrete generalized Pareto distribution (a discrete parametric model with three parameters) and with the discrete Lomax distribution (a discrete parametric model with two parameters, and a particular case of the previous model). To that end, we analyzed the basic properties of both parametric models: cumulative distribution, survival, probability mass, quantile and hazard functions, genesis and rth-order moments; applied two estimation methods for their parameters: the μ and (μ+1) frequency method and the maximum likelihood method; used two goodness-of-fit tests: the Chi-square test and the discrete Kolmogorov-Smirnov test based on bootstrap resampling; and compared them with the classical negative binomial distribution in terms of absolute probabilities and in models including covariates. We found that these probabilistic models can be useful for describing the road accident blackspot datasets analyzed.
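
    As a hedged illustration of the fitting problem, the discrete Lomax distribution (the two-parameter special case mentioned above, obtained as the floor of a continuous Lomax variable) can be fitted by maximum likelihood to synthetic count data. The parameter values and the simple grid search are our own choices, not the paper's estimation procedure.

```python
import numpy as np

# Sketch: fit a discrete Lomax distribution to synthetic "crashes per
# blackspot" counts by maximum likelihood over a parameter grid.
rng = np.random.default_rng(1)

def pmf(k, alpha, sigma):
    # discrete Lomax: X = floor(Y), with Y continuous Lomax(alpha, sigma)
    return (1 + k / sigma) ** -alpha - (1 + (k + 1) / sigma) ** -alpha

# synthetic counts: floor of inverse-transform Lomax draws
alpha_true, sigma_true = 3.0, 4.0
u = rng.uniform(size=2000)
counts = np.floor(sigma_true * (u ** (-1 / alpha_true) - 1)).astype(int)

def negloglik(alpha, sigma):
    return -np.log(pmf(counts, alpha, sigma)).sum()

# simple grid search for the maximum-likelihood estimate
alphas = np.linspace(1.5, 5.0, 71)
sigmas = np.linspace(1.0, 8.0, 71)
nll = np.array([[negloglik(a, s) for s in sigmas] for a in alphas])
i, j = np.unravel_index(nll.argmin(), nll.shape)
print(f"alpha_hat = {alphas[i]:.2f}, sigma_hat = {sigmas[j]:.2f}")
```

    The pmf telescopes, so probabilities sum to one by construction; in practice a gradient-based optimizer would replace the grid.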

  16. Validation of advanced NSSS simulator model for loss-of-coolant accidents

    SciTech Connect

    Kao, S.P.; Chang, S.K.; Huang, H.C.

    1995-09-01

    The replacement of the NSSS (Nuclear Steam Supply System) model on the Millstone 2 full-scope simulator has significantly increased its fidelity in simulating adverse conditions in the RCS. The new simulator NSSS model is a real-time derivative of the Nuclear Plant Analyzer by ABB. The thermal-hydraulic model is a five-equation, non-homogeneous model for water, steam, and non-condensible gases. The neutronic model is a three-dimensional nodal diffusion model. In order to certify the new NSSS model for operator training, an extensive validation effort has been performed by benchmarking the model performance against RELAP5/MOD2. This paper presents the validation results for the cases of small- and large-break loss-of-coolant accidents (LOCA). Detailed comparisons of the phenomena of reflux condensation, phase separation, and two-phase natural circulation are discussed.

  17. Enhanced probabilistic microcell prediction model

    NASA Astrophysics Data System (ADS)

    Kim, Song-Kyoo

    2005-06-01

    A microcell is a cell with a radius of 1 km or less, suitable not only for heavily urbanized areas such as metropolitan cities but also for in-building areas such as offices and shopping malls. This paper deals with a microcell prediction model of propagation loss, focused on in-building solutions and analyzed by probabilistic techniques. The RSL (Received Signal Level) is the factor by which the performance of a microcell can be evaluated, and the LOS (Line-Of-Sight) component and the blockage loss directly affect the RSL. A combination of probabilistic methods is applied to obtain these performance factors. The mathematical methods include the CLT (Central Limit Theorem) and SSQC (Six-Sigma Quality Control) to obtain the parameters of the distribution. This probabilistic solution gives a compact measure of the performance factors. In addition, it enables probabilistic optimization of strategies such as the number of cells, cell location, cell capacity, cell range, and so on. Optimal antenna allocation strategies for a building can also be obtained using this model.
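
    The CLT-based reasoning can be illustrated with a Monte Carlo sketch: the RSL is transmit power minus LOS path loss minus a sum of random blockage losses, and by the CLT the summed loss is approximately Gaussian. All numeric values (frequency, distance, per-obstacle losses) are assumptions, not figures from the paper.

```python
import numpy as np

# Monte Carlo sketch of a probabilistic RSL evaluation: RSL equals
# transmit power minus free-space (LOS) path loss minus a sum of random
# blockage losses; the CLT gives the Gaussian approximation of the sum.
rng = np.random.default_rng(2)

f_mhz, d_km = 1800.0, 0.3
fspl = 32.45 + 20 * np.log10(f_mhz) + 20 * np.log10(d_km)   # free-space loss, dB
tx_dbm = 30.0

n_obstacles = 8                                              # assumed blockers in the path
losses = rng.uniform(1.0, 5.0, size=(100_000, n_obstacles))  # dB per obstacle
rsl = tx_dbm - fspl - losses.sum(axis=1)

# CLT approximation of the total blockage loss
mu_loss = n_obstacles * 3.0                        # mean of U(1,5) is 3 dB
sd_loss = np.sqrt(n_obstacles * (5 - 1) ** 2 / 12)
print(f"MC mean RSL {rsl.mean():.1f} dBm vs CLT {tx_dbm - fspl - mu_loss:.1f} dBm")
```

    The Gaussian summary replaces the full Monte Carlo with two parameters, which is the "compact measure" the abstract refers to.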

  18. Radionuclides from the Fukushima accident in the air over Lithuania: measurement and modelling approaches.

    PubMed

    Lujanienė, G; Byčenkienė, S; Povinec, P P; Gera, M

    2012-12-01

    Analyses of (131)I, (137)Cs and (134)Cs in airborne aerosols were carried out in daily samples in Vilnius, Lithuania after the Fukushima accident during the period of March-April 2011. The activity concentrations of (131)I and (137)Cs ranged from 12 μBq/m(3) and 1.4 μBq/m(3) to 3700 μBq/m(3) and 1040 μBq/m(3), respectively. The activity concentration of (239,240)Pu in one aerosol sample collected from 23 March to 15 April 2011 was found to be 44.5 nBq/m(3). The two maxima found in radionuclide concentrations were related to complicated long-range air mass transport from Japan across the Pacific, North America and the Atlantic Ocean to Central Europe, as indicated by modelling. HYSPLIT backward trajectories and meteorological data were applied to interpret the activity variations of the measured radionuclides observed at the site of investigation. (7)Be and (212)Pb activity concentrations and their ratios were used as tracers of vertical transport of air masses. Fukushima data were compared with the data obtained during the Chernobyl accident and in the post-Chernobyl period. The activity concentrations of (131)I and (137)Cs were found to be four orders of magnitude lower than those measured after the Chernobyl accident. The activity ratio of (134)Cs/(137)Cs was around 1, with only small variations. The activity ratio of (238)Pu/(239,240)Pu in the aerosol sample was 1.2, indicating the presence of spent fuel of a different origin than that of the Chernobyl accident.

  19. Light-Weight Radioisotope Heater Unit final safety analysis report (LWRHU-FSAR): Volume 2: Accident Model Document (AMD)

    SciTech Connect

    Johnson, E.W.

    1988-10-01

    The purposes of this volume of the LWRHU SAR, the Accident Model Document (AMD), are to: identify all malfunctions, both singular and multiple, which can occur during the complete mission profile and could lead to release of the radioisotopic material outside the clad; provide estimates of the occurrence probabilities associated with these various accidents; evaluate the response of the LWRHU (or its components) to the resultant accident environments; and associate the potential event history with test data or analysis to determine the potential interaction of the released radionuclides with the biosphere.

  20. COMPARING SAFE VS. AT-RISK BEHAVIORAL DATA TO PREDICT ACCIDENTS

    SciTech Connect

    Jeffrey C. Joe

    2001-11-01

    The Safety Observations Achieve Results (SOAR) program at the Idaho National Laboratory (INL) encourages employees to perform in-field observations of each other’s behaviors. One purpose for performing these observations is that it gives the observers the opportunity to correct, if needed, their co-worker’s at-risk work practices and habits (i.e., behaviors). The underlying premise of doing this is that major injuries (e.g., OSHA-recordable events) are prevented from occurring because the lower level at-risk behaviors are identified and corrected before they can propagate into culturally accepted unsafe behaviors that result in injuries or fatalities. However, unlike other observation programs, SOAR also emphasizes positive reinforcement for safe behaviors observed. The underlying premise of doing this is that positive reinforcement of safe behaviors helps establish a strong positive safety culture. Since the SOAR program collects both safe and at-risk leading indicator data, this provides a unique opportunity to assess and compare the two kinds of data in terms of their ability to predict future adverse safety events. This paper describes the results of analyses performed on SOAR data to assess their relative predictive ability. Implications are discussed.

  1. Analysis of 320 coal mine accidents using structural equation modeling with unsafe conditions of the rules and regulations as exogenous variables.

    PubMed

    Zhang, Yingyu; Shao, Wei; Zhang, Mengjia; Li, Hejun; Yin, Shijiu; Xu, Yingjun

    2016-07-01

    Mining has historically been considered an inherently high-risk industry worldwide. In China, deaths caused by coal mine accidents exceed those from all other types of accidents combined. Statistics of 320 coal mine accidents in Shandong province show that all accidents contain indicators of "unsafe conditions of the rules and regulations," with a frequency of 1590, accounting for 74.3% of the total frequency of 2140. "Unsafe behaviors of the operator" is another important contributory factor, which mainly includes "operator error" and "venturing into dangerous places." A systems analysis approach was applied by using structural equation modeling (SEM) to examine the interactions between the contributory factors of coal mine accidents. The analysis of results leads to three conclusions. (i) "Unsafe conditions of the rules and regulations" affect the "unsafe behaviors of the operator," "unsafe conditions of the equipment," and "unsafe conditions of the environment." (ii) The three influencing factors of coal mine accidents (with the frequency of effect relations in descending order) are "lack of safety education and training," "rules and regulations of safety production responsibility," and "rules and regulations of supervision and inspection." (iii) The three influenced factors of coal mine accidents (with the frequency in descending order) are "venturing into dangerous places," "poor workplace environment," and "operator error." PMID:27085591

  3. Interactive Rapid Dose Assessment Model (IRDAM): reactor-accident assessment methods. Vol. 2

    SciTech Connect

    Poeton, R.W.; Moeller, M.P.; Laughlin, G.J.; Desrosiers, A.E.

    1983-05-01

    As part of the continuing emphasis on emergency preparedness, the US Nuclear Regulatory Commission (NRC) sponsored the development of a rapid dose assessment system by Pacific Northwest Laboratory (PNL). This system, the Interactive Rapid Dose Assessment Model (IRDAM), is a microcomputer-based program for rapidly assessing the radiological impact of accidents at nuclear power plants. This document describes the technical bases for IRDAM, including the methods, models and assumptions used in calculations. IRDAM calculates whole body (5-cm depth) and infant thyroid doses at six fixed downwind distances between 500 and 20,000 meters. The radionuclides considered primarily consist of noble gases and radioiodines. In order to provide a rapid assessment capability consistent with the capacity of the Osborne-1 computer, certain simplifying approximations and assumptions are made. These are described, along with default values (assumptions used in the absence of specific input), in the text of this document. Two companion volumes provide additional information on IRDAM. The User's Guide (NUREG/CR-3012, Volume 1) describes the setup and operation of the equipment necessary to run IRDAM. Scenarios for Comparing Dose Assessment Models (NUREG/CR-3012, Volume 3) provides the results of calculations made by IRDAM and other models for specific accident scenarios.

  4. A simplified model for calculating early offsite consequences from nuclear reactor accidents

    SciTech Connect

    Madni, I.K.; Cazzoli, E.G.; Khatib-Rahbar, M.

    1988-07-01

    A personal computer-based model, SMART, has been developed that uses an integral approach for calculating early offsite consequences from nuclear reactor accidents. The solution procedure uses simplified meteorology and involves direct analytic integration of air concentration equations over time and position. This is different from the discretization approach currently used in the CRAC2 and MACCS codes. The SMART code is fast-running, thereby providing a valuable tool for sensitivity and uncertainty studies. The code was benchmarked against both MACCS version 1.4 and CRAC2. Results of benchmarking and detailed sensitivity/uncertainty analyses using SMART are presented. 34 refs., 21 figs., 24 tabs.
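
    The contrast between direct analytic time integration (the SMART approach) and discretization can be illustrated on a single Gaussian puff drifting past a receptor, for which the time-integrated concentration has a closed form. The release and dispersion parameters below are illustrative assumptions, not values from the code.

```python
import numpy as np

# Analytic vs. discretized time integration of an air-concentration
# equation: for a Gaussian puff of fixed width sigma drifting at wind
# speed u past a centreline receptor, the time-integrated concentration
# has the closed form Q / (2*pi*sigma**2*u).
Q = 1.0e10        # released activity, Bq (assumed)
u = 5.0           # wind speed, m/s (assumed)
sigma = 200.0     # dispersion length, m, held fixed for the closed form
x_rec = 3000.0    # receptor distance downwind, m

def conc(t):
    # instantaneous puff concentration at the receptor (no ground reflection)
    return Q / ((2 * np.pi) ** 1.5 * sigma ** 3) * np.exp(-(x_rec - u * t) ** 2 / (2 * sigma ** 2))

t = np.linspace(0.0, 1200.0, 200_001)    # puff passes the receptor at t = 600 s
c = conc(t)
numeric = ((c[1:] + c[:-1]) / 2).sum() * (t[1] - t[0])   # trapezoid rule
analytic = Q / (2 * np.pi * sigma ** 2 * u)
print(f"numeric {numeric:.4e}  analytic {analytic:.4e}  Bq*s/m^3")
```

    The closed form requires no time stepping at all, which is why an analytic-integration code can be fast-running compared with a discretized one.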

  5. Low-power and shutdown models for the accident sequence precursor (ASP) program

    SciTech Connect

    Sattison, M.B.; Thatcher, T.A.; Knudsen, J.K.

    1997-02-01

    The US Nuclear Regulatory Commission (NRC) has been using full-power, Level 1, limited-scope risk models for the Accident Sequence Precursor (ASP) program for over fifteen years. These models have evolved and matured over the years, as have probabilistic risk assessment (PRA) and computer technologies. Significant upgrading activities have been undertaken over the past three years, with involvement from the Offices of Nuclear Reactor Regulation (NRR), Analysis and Evaluation of Operational Data (AEOD), and Nuclear Regulatory Research (RES), and several national laboratories. Part of these activities was an RES-sponsored feasibility study investigating the ability to extend the ASP models to include contributors to core damage from events initiated with the reactor at low power or shutdown (LP/SD), both internal and external events. This paper presents only the LP/SD internal event modeling efforts.

  6. Predictive models of radiative neutrino masses

    NASA Astrophysics Data System (ADS)

    Julio, J.

    2016-06-01

    We discuss two models of radiative neutrino mass generation. The first model features one-loop Zee model with Z4 symmetry. The second model is the two-loop neutrino mass model with singly- and doubly-charged scalars. These two models fit neutrino oscillation data well and predict some interesting rates for lepton flavor violation processes.

  7. How to Establish Clinical Prediction Models

    PubMed Central

    Bang, Heejung

    2016-01-01

    A clinical prediction model can be applied to several challenging clinical scenarios: screening high-risk individuals for asymptomatic disease, predicting future events such as disease or death, and assisting medical decision-making and health education. Despite the impact of clinical prediction models on practice, prediction modeling is a complex process requiring careful statistical analyses and sound clinical judgement. Although there is no definite consensus on the best methodology for model development and validation, a few recommendations and checklists have been proposed. In this review, we summarize five steps for developing and validating a clinical prediction model: preparation for establishing clinical prediction models; dataset selection; handling variables; model generation; and model evaluation and validation. We also review several studies that detail methods for developing clinical prediction models, with comparable examples from real practice. After model development and rigorous validation in relevant settings, possibly with evaluation of utility/usability and fine-tuning, good models can be ready for use in practice. We anticipate that this framework will revitalize the use of predictive or prognostic research in endocrinology, leading to active applications in real clinical practice. PMID:26996421
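
    The development and validation steps summarized above can be sketched end to end on synthetic data: split a dataset, fit a logistic prediction model on the development part, and assess discrimination (AUC) on the held-out part. The predictors and effect sizes below are invented for illustration.

```python
import numpy as np

# Minimal sketch of the review's workflow: develop a prediction model on
# one part of the data, then evaluate discrimination on a held-out part.
rng = np.random.default_rng(3)

n = 2000
X = rng.normal(size=(n, 3))                   # three standardized predictors (invented)
beta_true = np.array([1.0, -0.5, 0.8])
p = 1 / (1 + np.exp(-(X @ beta_true - 0.5)))
y = rng.uniform(size=n) < p                   # binary outcome

# step: dataset selection -- split development vs. validation sets
X_dev, y_dev = X[:1500], y[:1500]
X_val, y_val = X[1500:], y[1500:]

# step: model generation -- logistic regression fitted by gradient descent
w, b = np.zeros(3), 0.0
for _ in range(2000):
    z = 1 / (1 + np.exp(-(X_dev @ w + b)))
    w -= 0.5 * (X_dev.T @ (z - y_dev)) / len(y_dev)
    b -= 0.5 * (z - y_dev).mean()

# step: evaluation -- AUC from the rank statistic on the validation set
score = X_val @ w + b
pos, neg = score[y_val], score[~y_val]
auc = (pos[:, None] > neg[None, :]).mean()
print(f"validation AUC: {auc:.3f}")
```

    In practice the evaluation step would also cover calibration and external validation, as the review emphasizes.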

  8. Risk assessment and remedial policy evaluation using predictive modeling

    SciTech Connect

    Linkov, L.; Schell, W.R.

    1996-06-01

    As a result of nuclear industry operation and accidents, large areas of natural ecosystems have been contaminated by radionuclides and toxic metals. Extensive societal pressure has been exerted to decrease the radiation dose to the population and to the environment. Thus, in making abatement and remediation policy decisions, not only economic costs but also human and environmental risk assessments are desired. This paper introduces a general framework for risk assessment and remedial policy evaluation using predictive modeling. Ecological risk assessment requires evaluation of the radionuclide distribution in ecosystems. The FORESTPATH model is used for predicting the radionuclide fate in forest compartments after deposition, as well as for evaluating the efficiency of remedial policies. The time of intervention and the radionuclide deposition profile were predicted to be crucial for remediation efficiency. A risk assessment conducted for a critical group of forest users in Belarus shows that consumption of forest products (berries and mushrooms) carries an annual fatal cancer risk of about 0.004%. Cost-benefit analysis of forest cleanup suggests that complete removal of the organic layer is too expensive for application in Belarus and that a better methodology is required. In conclusion, the FORESTPATH modeling framework could have wide applications in the environmental remediation of radionuclides and toxic metals, as well as in dose reconstruction and risk assessment.

  9. Future missions studies: Combining Schatten's solar activity prediction model with a chaotic prediction model

    NASA Technical Reports Server (NTRS)

    Ashrafi, S.

    1991-01-01

    K. Schatten (1991) recently developed a method for combining his prediction model with our chaotic model. The philosophy behind this combined model and his method of combination are explained. Because the Schatten solar prediction model (KS) uses a dynamo to mimic solar dynamics, accurate prediction is limited to long-term solar behavior (10 to 20 years). The chaotic prediction model (SA) uses the recently developed techniques of nonlinear dynamics to predict solar activity. It can be used to predict activity only up to the predictability horizon. In theory, chaotic prediction should be several orders of magnitude better than statistical prediction up to that horizon; beyond the horizon, chaotic predictions would theoretically be just as good as statistical predictions. Therefore, chaos theory puts a fundamental limit on predictability.
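
    The nonlinear-dynamics approach can be illustrated with the method of analogues on a toy chaotic series: find the nearest historical state and use its successor as the forecast. The logistic map below is our stand-in, not the solar-activity model; the error growth with horizon mirrors the predictability limit described above.

```python
import numpy as np

# Method-of-analogues sketch of "chaotic" prediction: use the nearest
# historical state as an analogue and take its successor as the forecast.
# The logistic map is a toy stand-in for solar activity.
x = np.empty(3000)
x[0] = 0.2
for i in range(2999):
    x[i + 1] = 3.9 * x[i] * (1 - x[i])      # chaotic logistic map

train, test = x[:2500], x[2500:]

def forecast(history, state, horizon):
    # nearest analogue in the history, then jump `horizon` steps ahead
    idx = np.abs(history[:-horizon] - state).argmin()
    return history[idx + horizon]

errs_by_h = {}
for h in (1, 5, 20):
    errs = [abs(forecast(train, test[i], h) - test[i + h]) for i in range(400)]
    errs_by_h[h] = float(np.mean(errs))
    print(f"horizon {h}: mean abs error {errs_by_h[h]:.3f}")
```

    Short-horizon analogue forecasts are far better than climatology, while beyond the horizon the error saturates at the climatological spread, which is the fundamental limit the abstract refers to.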

  10. Piloted Simulation of a Model-Predictive Automated Recovery System

    NASA Technical Reports Server (NTRS)

    Liu, James (Yuan); Litt, Jonathan; Sowers, T. Shane; Owens, A. Karl; Guo, Ten-Huei

    2014-01-01

    This presentation describes a model-predictive automatic recovery system for aircraft on the verge of a loss-of-control situation. The system determines when it must intervene to prevent an imminent accident, resulting from a poor approach. It estimates the altitude loss that would result from a go-around maneuver at the current flight condition. If the loss is projected to violate a minimum altitude threshold, the maneuver is automatically triggered. The system deactivates to allow landing once several criteria are met. Piloted flight simulator evaluation showed the system to provide effective envelope protection during extremely unsafe landing attempts. The results demonstrate how flight and propulsion control can be integrated to recover control of the vehicle automatically and prevent a potential catastrophe.

  11. Childhood asthma prediction models: a systematic review.

    PubMed

    Smit, Henriette A; Pinart, Mariona; Antó, Josep M; Keil, Thomas; Bousquet, Jean; Carlsen, Kai H; Moons, Karel G M; Hooft, Lotty; Carlsen, Karin C Lødrup

    2015-12-01

    Early identification of children at risk of developing asthma at school age is crucial, but the usefulness of childhood asthma prediction models in clinical practice is still unclear. We systematically reviewed all existing prediction models to identify preschool children with asthma-like symptoms at risk of developing asthma at school age. Studies were included if they developed a new prediction model or updated an existing model in children aged 4 years or younger with asthma-like symptoms, with assessment of asthma done between 6 and 12 years of age. 12 prediction models were identified in four types of cohorts of preschool children: those with health-care visits, those with parent-reported symptoms, those at high risk of asthma, or children in the general population. Four basic models included non-invasive, easy-to-obtain predictors only, notably family history, allergic disease comorbidities or precursors of asthma, and severity of early symptoms. Eight extended models included additional clinical tests, mostly specific IgE determination. Some models could better predict asthma development and other models could better rule out asthma development, but the predictive performance of no single model stood out in both aspects simultaneously. This finding suggests that there is a large proportion of preschool children with wheeze for which prediction of asthma development is difficult.

  12. Influence of the meteorological input on the atmospheric transport modelling with FLEXPART of radionuclides from the Fukushima Daiichi nuclear accident.

    PubMed

    Arnold, D; Maurer, C; Wotawa, G; Draxler, R; Saito, K; Seibert, P

    2015-01-01

    In the present paper the role of precipitation as FLEXPART model input is investigated for one possible release scenario of the Fukushima Daiichi accident. Precipitation data from the European Centre for Medium-Range Weather Forecasts (ECMWF), NOAA's National Centers for Environmental Prediction (NCEP), the Japan Meteorological Agency's (JMA) mesoscale analysis and a JMA radar-rain gauge precipitation analysis product were utilized. The Fukushima accident in March 2011 and the subsequent observations enable us to assess the impact of these precipitation products, at least for this single case. As expected, the differences in the statistical scores are visible but not large. Increasing the ECMWF resolution of all fields from 0.5° to 0.2° raises the correlation from 0.71 to 0.80 and the overall rank from 3.38 to 3.44. Substituting ECMWF precipitation with the JMA mesoscale precipitation analysis or the JMA radar-rain gauge precipitation data, while the rest of the variables remain unmodified, yields the best results on a regional scale, especially when a new and more robust wet deposition scheme is introduced. The best results are obtained with a combination of ECMWF 0.2° data with precipitation from JMA mesoscale analyses and the modified wet deposition, with a correlation of 0.83 and an overall rank of 3.58. NCEP-based results with the same source term are generally poorer, giving correlations around 0.66, comparatively large negative biases, and an overall rank of 3.05 that worsens when regional precipitation data are introduced. PMID:24679678
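
    Verification scores of the kind reported above (correlation, bias) are straightforward to compute. A minimal sketch on invented model/observation concentration pairs; log-scale correlation is a common choice for quantities spanning several orders of magnitude.

```python
import numpy as np

# Two simple verification scores for modelled vs. observed air
# concentrations; the data pairs below are invented for illustration.
obs   = np.array([15.0, 2.0, 3500.0, 900.0, 260.0, 75.0, 18.0, 640.0])
model = np.array([12.0, 3.0, 2800.0, 1200.0, 200.0, 60.0, 25.0, 500.0])

def correlation(m, o):
    # correlate on a log scale, suited to concentrations spanning decades
    return np.corrcoef(np.log10(m), np.log10(o))[0, 1]

def fractional_bias(m, o):
    return 2 * (m.mean() - o.mean()) / (m.mean() + o.mean())

print(f"log-correlation {correlation(model, obs):.2f}, "
      f"fractional bias {fractional_bias(model, obs):+.2f}")
```

    An "overall rank" score as used in the paper would combine several such metrics into a single figure of merit.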

  14. Evaluating the Predictive Value of Growth Prediction Models

    ERIC Educational Resources Information Center

    Murphy, Daniel L.; Gaertner, Matthew N.

    2014-01-01

    This study evaluates four growth prediction models--projection, student growth percentile, trajectory, and transition table--commonly used to forecast (and give schools credit for) middle school students' future proficiency. Analyses focused on vertically scaled summative mathematics assessments, and two performance standards conditions (high…

  15. Hybrid approaches to physiologic modeling and prediction

    NASA Astrophysics Data System (ADS)

    Olengü, Nicholas O.; Reifman, Jaques

    2005-05-01

    This paper explores how the accuracy of a first-principles physiological model can be enhanced by integrating data-driven, "black-box" models with the original model to form a "hybrid" model system. Both linear (autoregressive) and nonlinear (neural network) data-driven techniques are separately combined with a first-principles model to predict human body core temperature. Rectal core temperature data from nine volunteers, subject to four 30/10-minute cycles of moderate exercise/rest regimen in both CONTROL and HUMID environmental conditions, are used to develop and test the approach. The results show significant improvements in prediction accuracy, with average improvements of up to 30% for prediction horizons of 20 minutes. The models developed from one subject's data are also used in the prediction of another subject's core temperature. Initial results for this approach for a 20-minute horizon show no significant improvement over the first-principles model by itself.
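
    The hybrid idea can be sketched minimally: a first-principles model with a systematic error is corrected by an AR(1) model fitted to its recent residuals, then both are propagated 20 steps ahead. The "core temperature" curves, noise level and horizon below are synthetic choices, not the paper's models or data.

```python
import numpy as np

# Hybrid sketch: an imperfect first-principles model is corrected by an
# autoregressive model of its residuals (the linear data-driven variant).
rng = np.random.default_rng(4)

t = np.arange(240)                                # minutes
truth = 37.0 + 0.8 * (1 - np.exp(-t / 60.0))      # true core temperature, deg C
physics = 37.0 + 0.6 * (1 - np.exp(-t / 50.0))    # imperfect first-principles model
obs = truth + rng.normal(0.0, 0.02, t.size)       # noisy measurements

resid = obs - physics                             # observed model error
# fit AR(1): resid[k+1] ~ phi*resid[k] + c, least squares on the first 180 min
A = np.vstack([resid[:179], np.ones(179)]).T
phi, c = np.linalg.lstsq(A, resid[1:180], rcond=None)[0]

k, h = 180, 20                                    # forecast 20 min ahead from minute 180
r = resid[k]
for _ in range(h):
    r = phi * r + c                               # propagate the residual model
hybrid = physics[k + h] + r

print(f"truth {truth[k+h]:.3f}  physics-only {physics[k+h]:.3f}  hybrid {hybrid:.3f}")
```

    The data-driven correction absorbs the systematic bias the physics model cannot represent, which is the mechanism behind the reported 20-minute-horizon improvements.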

  16. Incorporating uncertainty in predictive species distribution modelling

    PubMed Central

    Beale, Colin M.; Lennon, Jack J.

    2012-01-01

    Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which is often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates. PMID:22144387

  17. Using Numerical Models in the Development of Software Tools for Risk Management of Accidents with Oil and Inert Spills

    NASA Astrophysics Data System (ADS)

    Fernandes, R.; Leitão, P. C.; Braunschweig, F.; Lourenço, F.; Galvão, P.; Neves, R.

    2012-04-01

    The increasing ship traffic and maritime transport of dangerous substances make it more difficult to significantly reduce the environmental, economic and social risks posed by potential spills, even though security rules are becoming more restrictive (double-hull ships, etc.) and surveillance systems more developed (VTS, AIS). The problems associated with spills are, and will remain, a central topic: spill events happen continuously, most of them unknown to the general public because of their small-scale impact, while a much smaller number become genuine media phenomena in this information era, owing to their large dimensions, their environmental and socio-economic impacts on ecosystems and local communities, and the spectacular or shocking pictures they generate. The adverse consequences of this type of accident increase the motivation to avoid them in the future, or to minimize their impacts, using not only surveillance and monitoring tools but also an increased capacity to predict the fate and behaviour of bodies, objects, or substances in the hours following an accident; numerical models can now play a leading role in operational oceanography applied to safety and pollution response in the ocean because of their predictive potential. Search and rescue operations and risk analysis for oil, inert (ship debris, or floating containers) and HNS (hazardous and noxious substances) spills are the main areas where models can be used. Model applications have been widely used in emergency or planning issues associated with pollution risks, and in contingency and mitigation measures. Before a spill, in the planning stage, modelling simulations are used in environmental impact studies or risk maps, using historical data, reference situations, and typical scenarios. After a spill, fast and simple modelling applications make it possible to understand the fate and behaviour of the spilt

  18. Regional long-term model of radioactivity dispersion and fate in the Northwestern Pacific and adjacent seas: application to the Fukushima Dai-ichi accident.

    PubMed

    Maderich, V; Bezhenar, R; Heling, R; de With, G; Jung, K T; Myoung, J G; Cho, Y-K; Qiao, F; Robertson, L

    2014-05-01

    The compartment model POSEIDON-R was modified and applied to the Northwestern Pacific and adjacent seas to simulate the transport and fate of radioactivity in the period 1945-2010, and to perform a radiological assessment of the releases of radioactivity due to the Fukushima Dai-ichi accident for the period 2011-2040. The model predicts the dispersion of radioactivity in the water column and in sediments, the transfer of radionuclides throughout the marine food web, and subsequent doses to humans due to the consumption of marine products. A generic predictive dynamic food-chain model is used instead of the biological concentration factor (BCF) approach. The radionuclide uptake model for fish has as a central feature the accumulation of radionuclides in the target tissue. The three-layer structure of the water column makes it possible to describe the vertical structure of radioactivity in deep waters. In total, 175 compartments cover the Northwestern Pacific, the East China and Yellow Seas and the East/Japan Sea. The model was validated against (137)Cs data for the period 1945-2010. Calculated concentrations of (137)Cs in water, bottom sediments and marine organisms in the coastal compartment, before and after the accident, are in close agreement with measurements from the Japanese agencies. The agreement for water is achieved when an additional continuous flux of 3.6 TBq y(-1) is used for underground leakage of contaminated water from the Fukushima Dai-ichi NPP during the three years following the accident. The dynamic food web model predicts that, due to the delay of the transfer throughout the food web, the concentration of (137)Cs for piscivorous fishes returns to background level only in 2016. For the year 2011, the calculated individual dose rate for Fukushima Prefecture due to consumption of fishery products is 3.6 μSv y(-1). Following the Fukushima Dai-ichi accident the collective dose due to ingestion of marine products for Japan increased in 2011 by a
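    A dynamic food-chain model of the general kind described, i.e. first-order uptake and elimination rather than a fixed concentration factor, can be sketched in a few lines. This is a generic illustration (not the POSEIDON-R target-tissue formulation), and all rate constants are placeholder values:

    ```python
    def fish_activity(c_water, k_uptake=0.05, k_elim=0.01, dt=1.0):
        """Forward-Euler integration of dC_fish/dt = k_uptake*C_water - k_elim*C_fish.
        c_water: sequence of water activity concentrations, one value per time step.
        Returns the fish activity concentration at each step."""
        c_fish = 0.0
        out = []
        for cw in c_water:
            c_fish += dt * (k_uptake * cw - k_elim * c_fish)
            out.append(c_fish)
        return out
    ```

    With constant water activity the fish concentration approaches (k_uptake/k_elim) × C_water only gradually, which illustrates the delayed equilibration through the food web noted in the abstract.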

  19. Modeling and Predicting Pesticide Exposures

    EPA Science Inventory

    Models provide a means for representing a real system in an understandable way. They take many forms, beginning with conceptual models that explain the way a system works, such as delineation of all the factors and parameters of how a pesticide particle moves in the air after a s...

  20. Posterior Predictive Bayesian Phylogenetic Model Selection

    PubMed Central

    Lewis, Paul O.; Xie, Wangang; Chen, Ming-Hui; Fan, Yu; Kuo, Lynn

    2014-01-01

    We present two distinctly different posterior predictive approaches to Bayesian phylogenetic model selection and illustrate these methods using examples from green algal protein-coding cpDNA sequences and flowering plant rDNA sequences. The Gelfand–Ghosh (GG) approach allows dissection of an overall measure of model fit into components due to posterior predictive variance (GGp) and goodness-of-fit (GGg), which distinguishes this method from the posterior predictive P-value approach. The conditional predictive ordinate (CPO) method provides a site-specific measure of model fit useful for exploratory analyses and can be combined over sites yielding the log pseudomarginal likelihood (LPML) which is useful as an overall measure of model fit. CPO provides a useful cross-validation approach that is computationally efficient, requiring only a sample from the posterior distribution (no additional simulation is required). Both GG and CPO add new perspectives to Bayesian phylogenetic model selection based on the predictive abilities of models and complement the perspective provided by the marginal likelihood (including Bayes Factor comparisons) based solely on the fit of competing models to observed data. [Bayesian; conditional predictive ordinate; CPO; L-measure; LPML; model selection; phylogenetics; posterior predictive.] PMID:24193892

  1. Model calculation of radiocaesium transfer into food products in semi-natural forest ecosystems in the Czech Republic after a nuclear reactor accident and an estimate of the population dose burden.

    PubMed

    Svadlenková, M; Konecný, J; Smutný, V

    1996-01-01

    Radioactivity of food products from semi-natural forest ecosystems can contribute appreciably to the radiological burden of the human population following a nuclear accident, as found after the Chernobyl disaster in 1986. In the Czech Republic, radiocaesium radioactivity has been measured since 1986 in various components of forest ecosystems, such as soil, mushrooms, bilberries, deer and boar. In this work, the data are employed to predict how a model accident of the Temelín nuclear power plant in southern Bohemia (which is under construction) would affect selected forest ecosystems in its surroundings. The dose commitment to the critical population group is also estimated.

  2. Simulations of the transport and deposition of 137Cs over Europe after the Chernobyl NPP accident: influence of varying emission-altitude and model horizontal and vertical resolution

    NASA Astrophysics Data System (ADS)

    Evangeliou, N.; Balkanski, Y.; Cozic, A.; Møller, A. P.

    2013-03-01

    The coupled model LMDzORINCA has been used to simulate the transport and the wet and dry deposition of the radioactive tracer 137Cs after accidental releases. Two horizontal resolutions were deployed in the model: a regular grid of 2.5°×1.25°, and the same grid stretched over Europe to reach a resolution of 0.45°×0.51°. The vertical dimension is represented with two different resolutions, 19 and 39 levels, extending up to the mesopause. Four simulations are presented in this work: the first uses the regular grid over 19 vertical levels, assuming that the emissions took place at the surface (RG19L(S)); the second also uses the regular grid over 19 vertical levels but with realistic source injection heights (RG19L); the third uses the regular grid with 39 vertical levels (RG39L); and the last uses the stretched grid with 19 vertical levels (Z19L). The Chernobyl accident, which occurred in Ukraine (ex-USSR) on 26 April 1986, was chosen for the model validation. This accident has been widely studied since 1986, and a large database has been created containing measurements of atmospheric activity concentration and total cumulative deposition of 137Cs from most European countries. According to the results, the model predicted the transport and deposition of the radioactive tracer efficiently and accurately, with low biases in activity concentrations and deposition inventories, despite the large uncertainties in the intensity of the source released. However, the best agreement with observations was obtained using the highest horizontal resolution of the model (the Z19L run). The model reproduced the radioactive contamination in most European regions (similar to the Atlas), as well as the arrival times of the radioactive fallout. Regarding the vertical resolution, the largest biases were obtained for the 39-layer run due to the increase of

  3. Recursive modeling of loss of control in human and organizational processes: a systemic model for accident analysis.

    PubMed

    Kontogiannis, Tom; Malakis, Stathis

    2012-09-01

    A recursive model of accident investigation is proposed by exploiting earlier work in systems thinking. Safety analysts can understand better the underlying causes of decision or action flaws by probing into the patterns of breakdown in the organization of safety. For this deeper analysis, a cybernetic model of organizational factors and a control model of human processes have been integrated in this article (i.e., the viable system model and the extended control model). The joint VSM-ECOM framework has been applied to a case study to help safety practitioners with the analysis of patterns of breakdown with regard to how operators and organizations manage goal conflicts, monitor work progress, recognize weak signals, align goals across teams, and adapt plans on the fly. The recursive accident representation brings together several organizational issues (e.g., the dilemma of autonomy versus compliance, or the interaction between structure and strategy) and addresses how operators adapt to challenges in their environment by adjusting their modes of functioning and recovery. Finally, it facilitates the transfer of knowledge from diverse incidents and near misses within similar domains of practice.

  4. Modeling operator actions during a small break loss-of-coolant accident in a Babcock and Wilcox nuclear power plant

    SciTech Connect

    Ghan, L.S.; Ortiz, M.G.

    1991-12-31

    A small break loss-of-coolant accident (SBLOCA) in a typical Babcock and Wilcox (B&W) nuclear power plant was modeled using RELAP5/MOD3. This work was performed as part of the United States Nuclear Regulatory Commission's (USNRC) Code, Scaling, Applicability and Uncertainty (CSAU) study. The break was initiated by severing one high pressure injection (HPI) line at the cold leg; thus, the small break was further aggravated by reduced HPI flow. Comparisons between scoping runs with minimal operator action and with full operator action clearly showed that the operator plays a key role in recovering the plant. Operator actions were modeled based on the emergency operating procedures (EOPs) and the Technical Bases Document for the EOPs. The sequence of operator actions modeled here is only one of several possibilities. Different sequences of operator actions are possible for a given accident because of the subjective decisions the operator must make when determining the status of the plant and, hence, which branch of the EOP to follow. To assess the credibility of the modeled operator actions, these actions and the results of the simulated accident scenario were presented to operator examiners who are familiar with B&W nuclear power plants. They agreed that, in general, the modeled operator actions conform to the requirements set forth in the EOPs and are therefore plausible. This paper presents the method for modeling the operator actions and discusses the simulated accident scenario from the viewpoint of operator actions.

  5. Inverse estimation of source parameters of oceanic radioactivity dispersion models associated with the Fukushima accident

    NASA Astrophysics Data System (ADS)

    Miyazawa, Y.; Masumoto, Y.; Varlamov, S. M.; Miyama, T.; Takigawa, M.; Honda, M.; Saino, T.

    2012-10-01

    With combined use of ocean-atmosphere simulation models and field observation data, we evaluate the parameters associated with the total caesium-137 amounts of the direct release into the ocean and the atmospheric deposition over the Western North Pacific caused by the accident at the Fukushima Daiichi nuclear power plant (FNPP) in March 2011. The Green's function approach is adopted for the estimation of two parameters determining the total emission amounts for the period from 12 March to 6 May 2011. It is confirmed that the validity of the estimation depends on the simulation skill near the FNPP. The total amount of the direct release is estimated as 5.5-5.9 × 10^15 Bq, while that of the atmospheric deposition is estimated as 5.5-9.7 × 10^15 Bq, a broader range than that of the direct release owing to the uncertainty of the dispersion spread widely over the Western North Pacific.
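    The Green's function step described here amounts to a linear least-squares fit: each candidate source is simulated once with unit amplitude, and the observed concentrations are regressed on the simulated unit responses. A minimal sketch (the matrix values in the usage example are invented for illustration, not Fukushima data):

    ```python
    import numpy as np

    def estimate_amplitudes(G, y):
        """Least-squares source-amplitude estimate for the linear model y ≈ G @ a,
        where column j of G is the simulated response at the observation points
        to a unit release from source j (e.g. direct ocean release, atmospheric
        deposition). Returns the amplitude vector a."""
        a, *_ = np.linalg.lstsq(G, y, rcond=None)
        return a
    ```

    In practice the estimate is only as good as the simulated responses, which is why the abstract stresses that validity depends on the simulation skill near the FNPP.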

  7. Progresses in tritium accident modelling in the frame of IAEA EMRAS II

    SciTech Connect

    Galeriu, D.; Melintescu, A.

    2015-03-15

    The assessment of the environmental impact of tritium releases from nuclear facilities is a topic of interest in many countries. In the IAEA's Environmental Modelling for Radiation Safety (EMRAS I) programme, progress was made on routine releases, and in the EMRAS II programme a dedicated working group (WG 7 - Tritium Accidents) focused on potential accidental releases (liquid and atmospheric pathways). The progress achieved in WG 7 was included in a comprehensive report - an IAEA technical document covering the consequences of both liquid and atmospheric accidental releases. A brief description of the progress achieved in the frame of EMRAS II WG 7 is presented. Important results have been obtained concerning the washout rate, the deposition of HTO and HT on the soil, the HTO uptake by leaves and the subsequent conversion to OBT (organically bound tritium) during daylight. Further needs in process understanding and experimental effort are emphasised.

  8. Predictive Modeling in Adult Education

    ERIC Educational Resources Information Center

    Lindner, Charles L.

    2011-01-01

    The current economic crisis, a growing workforce, the increasing lifespan of workers, and demanding, complex jobs have made organizations highly selective in employee recruitment and retention. It is therefore important, to the adult educator, to develop models of learning that better prepare adult learners for the workplace. The purpose of…

  9. Pancreatic Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing pancreatic cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  10. Colorectal Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  11. Bladder Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing bladder cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  12. Testicular Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing testicular cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  13. Lung Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing lung cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  14. Ovarian Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing ovarian cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  15. Liver Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing liver cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  16. Prostate Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing prostate cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  17. Esophageal Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing esophageal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  18. Cervical Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing cervical cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  19. Breast Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing breast cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  20. Health effects models for nuclear power plant accident consequence analysis: Low LET radiation

    SciTech Connect

    Evans, J.S. . School of Public Health)

    1990-01-01

    This report describes dose-response models intended to be used in estimating the radiological health effects of nuclear power plant accidents. Models of early and continuing effects, cancers and thyroid nodules, and genetic effects are provided. Weibull dose-response functions are recommended for evaluating the risks of early and continuing health effects. Three potentially lethal early effects -- the hematopoietic, pulmonary, and gastrointestinal syndromes -- are considered. In addition, models are included for assessing the risks of several nonlethal early and continuing effects -- including prodromal vomiting and diarrhea, hypothyroidism and radiation thyroiditis, skin burns, reproductive effects, and pregnancy losses. Linear and linear-quadratic models are recommended for estimating cancer risks. Parameters are given for analyzing the risks of seven types of cancer in adults -- leukemia, bone, lung, breast, gastrointestinal, thyroid, and "other." The category of "other" cancers is intended to reflect the combined risks of multiple myeloma, lymphoma, and cancers of the bladder, kidney, brain, ovary, uterus and cervix. Models of childhood cancers due to in utero exposure are also developed. For most cancers, both incidence and mortality are addressed. The models of cancer risk are derived largely from information summarized in BEIR III -- with some adjustment to reflect more recent studies. 64 refs., 18 figs., 46 tabs.
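    A Weibull ("hazard function") dose-response of the kind recommended for early effects can be sketched directly. The common parameterisation below uses a median-effect dose D50 and a shape parameter; any specific D50 and shape values supplied are placeholders, not the report's fitted parameters:

    ```python
    import math

    def weibull_risk(dose, d50, shape):
        """Weibull dose-response: risk = 1 - exp(-ln2 * (dose/d50)**shape).
        By construction the risk at dose == d50 is exactly 0.5, and the shape
        parameter controls how steeply risk rises around d50."""
        if dose <= 0.0:
            return 0.0
        hazard = math.log(2.0) * (dose / d50) ** shape
        return 1.0 - math.exp(-hazard)
    ```

    This functional form gives the threshold-like, sigmoidal behaviour expected of deterministic early effects, in contrast to the linear and linear-quadratic forms recommended for cancer risks.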

  1. Applying STAMP in Accident Analysis

    NASA Technical Reports Server (NTRS)

    Leveson, Nancy; Daouk, Mirna; Dulac, Nicolas; Marais, Karen

    2003-01-01

    Accident models play a critical role in accident investigation and analysis. Most traditional models are based on an underlying chain of events. These models, however, have serious limitations when used for complex, socio-technical systems. Previously, Leveson proposed a new accident model (STAMP) based on system theory. In STAMP, the basic concept is not an event but a constraint. This paper shows how STAMP can be applied to accident analysis using three different views or models of the accident process and proposes a notation for describing this process.

  2. Predicting and Modeling RNA Architecture

    PubMed Central

    Westhof, Eric; Masquida, Benoît; Jossinet, Fabrice

    2011-01-01

    A general approach for modeling the architecture of large and structured RNA molecules is described. The method exploits the modularity and the hierarchical folding of RNA architecture that is viewed as the assembly of preformed double-stranded helices defined by Watson-Crick base pairs and RNA modules maintained by non-Watson-Crick base pairs. Despite the extensive molecular neutrality observed in RNA structures, specificity in RNA folding is achieved through global constraints like lengths of helices, coaxiality of helical stacks, and structures adopted at the junctions of helices. The Assemble integrated suite of computer tools allows for sequence and structure analysis as well as interactive modeling by homology or ab initio assembly with possibilities for fitting within electronic density maps. The local key role of non-Watson-Crick pairs guides RNA architecture formation and offers metrics for assessing the accuracy of three-dimensional models in a more useful way than usual root mean square deviation (RMSD) values. PMID:20504963

  3. A Course in... Model Predictive Control.

    ERIC Educational Resources Information Center

    Arkun, Yaman; And Others

    1988-01-01

    Describes a graduate engineering course which specializes in model predictive control. Lists course outline and scope. Discusses some specific topics and teaching methods. Suggests final projects for the students. (MVL)

  4. Predictive Models and Computational Toxicology (II IBAMTOX)

    EPA Science Inventory

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  5. TRACE/PARCS Core Modeling of a BWR/5 for Accident Analysis of ATWS Events

    SciTech Connect

    Cuadra A.; Baek J.; Cheng, L.; Aronson, A.; Diamond, D.; Yarsky, P.

    2013-11-10

    The TRACE/PARCS computational package [1, 2] is designed to be applicable to the analysis of light water reactor operational transients and accidents where the coupling between the neutron kinetics (PARCS) and the thermal-hydraulics and thermal-mechanics (TRACE) is important. TRACE/PARCS has been assessed for its applicability to anticipated transients without scram (ATWS) [3]. The challenge, addressed in this study, is to develop a sufficiently rigorous input model that would be acceptable for use in ATWS analysis. Two types of ATWS events were of interest: a turbine trip and a closure of the main steam isolation valves (MSIVs). In the first type, initiated by turbine trip, the concern is that the core will become unstable and large power oscillations will occur. In the second type, initiated by MSIV closure, the concern is the amount of energy being placed into containment and the resulting emergency depressurization. Two separate TRACE/PARCS models of a BWR/5 were developed to analyze these ATWS events at MELLLA+ (maximum extended load line limit plus) operating conditions. One model [4] was used for analysis of ATWS events leading to instability (ATWS-I); the other [5] for ATWS events leading to emergency depressurization (ATWS-ED). Both models included a large portion of the nuclear steam supply system and controls, and a detailed core model, presented henceforth.

  6. An interactive framework for developing simulation models of hospital accident and emergency services.

    PubMed

    Codrington-Virtue, Anthony; Whittlestone, Paul; Kelly, John; Chaussalet, Thierry

    2005-01-01

    Discrete-event simulation can be a valuable tool in modelling health care systems. This paper describes an interactive framework to model and simulate a hospital accident and emergency department. An interactive spreadsheet (Excel) facilitated the user-friendly input of data such as patient pathways, arrival times, service times and resources into the discrete-event simulation package (SIMUL8). The framework was enhanced further by configuring SIMUL8 to show patient flow and activity visually on a schematic plan of an A&E department. The patient flow and activity information included patient icons flowing along A&E corridors and pathways, processes undertaken in A&E work areas, and queue activity. One major benefit of showing patient flow and activity visually was that modellers and decision makers could gain a dynamic insight into the performance of the overall system and see changes over the model run cycle. Another key benefit of the interactive framework was the ability to quickly and easily change model parameters to trial, test and compare different scenarios.
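    The queueing logic such a simulation encodes can be sketched in a few lines of Python. This is a deliberately minimal single-cubicle stand-in with assumed exponential arrival and service times and FIFO discipline, not the paper's actual SIMUL8 configuration:

    ```python
    import random

    def simulate_ae(n_patients=2000, mean_interarrival=10.0, mean_service=8.0, seed=1):
        """Single-server A&E queue: patients arrive with exponential interarrival
        times and receive exponentially distributed treatment times in arrival
        order. Returns the mean wait (minutes) before treatment starts."""
        rng = random.Random(seed)
        t_arrive = 0.0      # current patient's arrival time
        server_free = 0.0   # time at which the cubicle next becomes free
        waits = []
        for _ in range(n_patients):
            t_arrive += rng.expovariate(1.0 / mean_interarrival)
            start = max(t_arrive, server_free)
            waits.append(start - t_arrive)
            server_free = start + rng.expovariate(1.0 / mean_service)
        return sum(waits) / len(waits)
    ```

    With the default rates (utilisation ρ = 0.8), the steady-state M/M/1 mean wait ρ/(μ − λ) = 32 minutes provides a rough sanity check on the simulated value; a full A&E model adds multiple resources, pathways and priorities on top of this kernel.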

  7. Sensitivity study of the wet deposition schemes in the modelling of the Fukushima accident.

    NASA Astrophysics Data System (ADS)

    Quérel, Arnaud; Quélo, Denis; Roustan, Yelva; Mathieu, Anne; Kajino, Mizuo; Sekiyama, Thomas; Adachi, Kouji; Didier, Damien; Igarashi, Yasuhito

    2016-04-01

    The Fukushima-Daiichi release of radioactivity is a relevant event for studying the atmospheric dispersion modelling of radionuclides. The atmospheric deposition onto the ground may be studied through the map of measured Cs-137 established after the accident. The limits of detection were low enough to make measurements possible as far as 250 km from the nuclear power plant. This large-scale deposition has been modelled with the Eulerian model ldX. However, several weeks of emissions in multiple weather conditions make it a real challenge. Moreover, these measurements are the accumulated deposition of Cs-137 over the whole period and do not indicate which deposition mechanisms were involved: in-cloud, below-cloud, or dry deposition. A comprehensive sensitivity analysis was performed in order to understand the wet deposition mechanisms. It was shown in a previous study (Quérel et al., 2016) that the choice of the wet deposition scheme has a strong impact on the assessment of the deposition patterns. Nevertheless, a "best" scheme could not be identified, as the ranking differs according to the statistical indicator considered (correlation, figure of merit in space, and factor of 2). One possible explanation for the difficulty in discriminating between schemes is uncertainty in the modelling, resulting for instance from the meteorological data: if the movement of the plume is not properly modelled, the deposition processes are applied with an inaccurate activity in the air. In the framework of the SAKURA project, an MRI-IRSN collaboration, new meteorological fields at higher resolution (Sekiyama et al., 2013) were provided, allowing the previous study to be reconsidered. An updated study including these new meteorological data is presented. In addition, a focus was placed on several releases causing deposition in localized areas during known periods. This helps to better understand the mechanisms of deposition involved following the

  8. Bus accident analysis of routes with/without bus priority.

    PubMed

    Goh, Kelvin Chun Keong; Currie, Graham; Sarvi, Majid; Logan, David

    2014-04-01

    This paper summarises findings on road safety performance and bus-involved accidents in Melbourne along roads where bus priority measures had been applied. Results from an empirical analysis of the accident types revealed significant reduction in the proportion of accidents involving buses hitting stationary objects and vehicles, which suggests the effect of bus priority in addressing manoeuvrability issues for buses. A mixed-effects negative binomial (MENB) regression and back-propagation neural network (BPNN) modelling of bus accidents considering wider influences on accident rates at a route section level also revealed significant safety benefits when bus priority is provided. Sensitivity analyses done on the BPNN model showed general agreement in the predicted accident frequency between both models. The slightly better performance recorded by the MENB model results suggests merits in adopting a mixed effects modelling approach for accident count prediction in practice given its capability to account for unobserved location and time-specific factors. A major implication of this research is that bus priority in Melbourne's context acts to improve road safety and should be a major consideration for road management agencies when implementing bus priority and road schemes.
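    The over-dispersion that motivates the negative binomial (NB2) specification enters through the variance function Var(Y) = μ + αμ². The mixed-effects and neural-network layers of the paper are beyond a short sketch, but the NB2 mass function itself is compact; the μ and α values in any example are arbitrary, not the paper's estimates:

    ```python
    import math

    def nb2_pmf(y, mu, alpha):
        """NB2 probability mass for count y, with mean mu and variance
        mu + alpha*mu**2; alpha > 0 captures over-dispersion and the
        Poisson model is recovered as alpha -> 0."""
        r = 1.0 / alpha           # NB "size" parameter
        p = r / (r + mu)          # success probability in the NB(r, p) form
        return math.exp(math.lgamma(y + r) - math.lgamma(r) - math.lgamma(y + 1)
                        + r * math.log(p) + y * math.log(1.0 - p))
    ```

    In an NB regression each site's μ is tied to covariates via a log link, and α is estimated jointly, which is how the model absorbs extra-Poisson variation in accident counts.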

  9. Oil Spill Detection and Modelling: Preliminary Results for the Cercal Accident

    NASA Astrophysics Data System (ADS)

    da Costa, R. T.; Azevedo, A.; da Silva, J. C. B.; Oliveira, A.

    2013-03-01

    Oil spill research has increased significantly, mainly as a result of the severe consequences of industry accidents. Oil spill models are currently able to simulate the processes that determine the fate of oil slicks, playing an important role in disaster prevention, control and mitigation, and generating valuable information for decision makers and the population in general. On the other hand, satellite Synthetic Aperture Radar (SAR) imagery has demonstrated significant potential in accidental oil spill detection, provided oil slicks are accurately differentiated from look-alikes. The combination of both tools can lead to breakthroughs, particularly in the development of Early Warning Systems (EWS). This paper presents a hindcast simulation of the oil slick resulting from the Motor Tanker (MT) Cercal oil spill, listed by the Portuguese Navy as one of the major oil spills on the Portuguese Atlantic coast. The accident took place near Leixões Harbour, north of the Douro River, Porto (Portugal), on the 2nd of October 1994. The oil slick was segmented from available European Remote Sensing (ERS) satellite SAR images, using an algorithm based on a simplified version of the K-means clustering formulation. The image-acquired information, added to the initial conditions and forcings, provided the necessary inputs for the oil spill model. Simulations were made considering the three-dimensional hydrodynamics in a cross-scale domain, from the interior of the Douro River Estuary to the open ocean on the Iberian Atlantic shelf. Atmospheric forcings (from ECMWF - the European Centre for Medium-Range Weather Forecasts and NOAA - the National Oceanic and Atmospheric Administration), river forcings (from SNIRH - the Portuguese National Information System of the Hydric Resources) and tidal forcings (from LNEC - the National Laboratory for Civil Engineering), including baroclinic gradients (NOAA), were considered. The lack of data for validation purposes only allowed the use of the
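    The K-means-based segmentation can be illustrated in one dimension: oil slicks appear as dark (low-backscatter) pixels in SAR imagery, so clustering intensities into two groups yields a detection threshold. A simplified sketch with synthetic pixel values (not the actual ERS data or the authors' algorithm):

```python
def kmeans_1d(values, k=2, iters=20):
    """Minimal 1-D K-means on pixel intensities.
    Deterministic init: centers spread evenly over the value range."""
    vmin, vmax = min(values), max(values)
    centers = [vmin + (vmax - vmin) * j / (k - 1) for j in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda j: abs(v - centers[j]))
            groups[nearest].append(v)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return sorted(centers)

# Synthetic SAR backscatter: dark slick pixels (~0.1) vs brighter sea (~0.6)
pixels = [0.08, 0.12, 0.10, 0.09, 0.58, 0.62, 0.55, 0.61, 0.60, 0.11]
low_center, high_center = kmeans_1d(pixels)
threshold = (low_center + high_center) / 2  # below: candidate slick pixel
```

    Real SAR segmentation also has to reject look-alikes (low-wind areas, biogenic films), which this intensity-only sketch ignores.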

  10. Thermal barrier coating life prediction model

    NASA Technical Reports Server (NTRS)

    Pilsner, B. H.; Hillery, R. V.; Mcknight, R. L.; Cook, T. S.; Kim, K. S.; Duderstadt, E. C.

    1986-01-01

    The objectives of this program are to determine the predominant modes of degradation of a plasma sprayed thermal barrier coating system, and then to develop and verify life prediction models accounting for these degradation modes. The program is divided into two phases, each consisting of several tasks. The work in Phase 1 is aimed at identifying the relative importance of the various failure modes, and developing and verifying life prediction model(s) for the predominant failure mode(s) of a thermal barrier coating system. Two possible predominant failure mechanisms being evaluated are bond coat oxidation and bond coat creep. The work in Phase 2 will develop design-capable, causal life prediction models for thermomechanical and thermochemical failure modes, and for the exceptional conditions of foreign object damage and erosion.

  11. Posterior predictive Bayesian phylogenetic model selection.

    PubMed

    Lewis, Paul O; Xie, Wangang; Chen, Ming-Hui; Fan, Yu; Kuo, Lynn

    2014-05-01

    We present two distinctly different posterior predictive approaches to Bayesian phylogenetic model selection and illustrate these methods using examples from green algal protein-coding cpDNA sequences and flowering plant rDNA sequences. The Gelfand-Ghosh (GG) approach allows dissection of an overall measure of model fit into components due to posterior predictive variance (GGp) and goodness-of-fit (GGg), which distinguishes this method from the posterior predictive P-value approach. The conditional predictive ordinate (CPO) method provides a site-specific measure of model fit useful for exploratory analyses and can be combined over sites yielding the log pseudomarginal likelihood (LPML) which is useful as an overall measure of model fit. CPO provides a useful cross-validation approach that is computationally efficient, requiring only a sample from the posterior distribution (no additional simulation is required). Both GG and CPO add new perspectives to Bayesian phylogenetic model selection based on the predictive abilities of models and complement the perspective provided by the marginal likelihood (including Bayes Factor comparisons) based solely on the fit of competing models to observed data. PMID:24193892
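    The CPO and LPML quantities described here have a standard Monte Carlo estimator: CPO for site i is the harmonic mean of the likelihood of that site's data over posterior draws, and LPML is the sum of the log CPOs. A minimal sketch:

```python
import math

def cpo(site_likelihoods):
    """Conditional predictive ordinate for one site: harmonic mean of
    the per-draw likelihoods f(y_i | theta_s) over posterior draws s."""
    n = len(site_likelihoods)
    return n / sum(1.0 / lik for lik in site_likelihoods)

def lpml(likelihood_matrix):
    """Log pseudomarginal likelihood: sum of log CPO over sites.
    likelihood_matrix[i][s] = likelihood of site i under posterior draw s."""
    return sum(math.log(cpo(row)) for row in likelihood_matrix)
```

    As the abstract notes, this needs only a sample from the posterior: no additional simulation is required, since the per-draw site likelihoods are already available from the MCMC output.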

  12. Predictive Validation of an Influenza Spread Model

    PubMed Central

    Hyder, Ayaz; Buckeridge, David L.; Leung, Brian

    2013-01-01

    Background Modeling plays a critical role in mitigating the impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. Methods and Findings We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998–1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks in advance with reasonable reliability, depending on the method of forecasting (static or dynamic). Conclusions Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve their predictive

  13. Accidents and Decision Making under Uncertainty: A Comparison of Four Models.

    PubMed

    Barkan; Zohar; Erev

    1998-05-01

    Heinrich's (1931) classical study implies that most industrial accidents can be characterized as a probabilistic result of human error. The present research quantifies Heinrich's observation and compares four descriptive models of decision making in the abstracted setting. The suggested quantification utilizes signal detection theory (Green & Swets, 1966). It shows that Heinrich's observation can be described as a probabilistic signal detection task. In a controlled experiment, 90 decision makers participated in 600 trials of six safety games. Each safety game was a numerical example of the probabilistic SDT abstraction of Heinrich's proposition. Three games were designed under a frame of gain to represent perception of safe choice as costless, while the other three were designed under a frame of loss to represent perception of safe choice as costly. Probabilistic penalty for Miss was given at three different levels (1, .5, .1). The results showed that decisions tended initially to be risky and that experience led to safer behavior. As the probability of being penalized was lowered decisions became riskier and the learning process was impaired. The results support the cutoff reinforcement learning model suggested by Erev et al. (1995). The hill-climbing learning model (Busemeyer & Myung, 1992) was partially supported. Theoretical and practical implications are discussed. Copyright 1998 Academic Press.
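    The finding that lower penalty probabilities led to riskier choices is consistent with a simple expected-value account: a sketch with hypothetical payoff numbers (not the experimental parameters):

```python
def expected_payoff_unsafe(benefit, penalty, p_accident, p_penalty):
    """Expected payoff of the unsafe choice: a fixed benefit minus a
    penalty incurred only if an accident occurs AND is penalised."""
    return benefit - p_accident * p_penalty * penalty

# As the penalty probability drops across the three levels used in the
# study (1, .5, .1), the unsafe choice looks better on average,
# predicting riskier behaviour.
payoffs = [expected_payoff_unsafe(1.0, 10.0, 0.2, p) for p in (1.0, 0.5, 0.1)]
```

    The learning models compared in the study go beyond this static calculation by describing how experience with (rarely penalised) misses shifts the decision cutoff over trials.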

  14. Solar Weather Event Modelling and Prediction

    NASA Astrophysics Data System (ADS)

    Messerotti, Mauro; Zuccarello, Francesca; Guglielmino, Salvatore L.; Bothmer, Volker; Lilensten, Jean; Noci, Giancarlo; Storini, Marisa; Lundstedt, Henrik

    2009-11-01

    Key drivers of solar weather and mid-term solar weather are reviewed by considering a selection of relevant physics- and statistics-based scientific models, as well as a selection of related prediction models, in order to provide an updated operational scenario for space weather applications. The characteristics and outcomes of the considered scientific and prediction models indicate that they only partially cope with the complex nature of solar activity, owing to the lack of detailed knowledge of the underlying physics. This is indicated by the fact that, on the one hand, scientific models based on chaos theory and non-linear dynamics better reproduce the observed features and, on the other hand, prediction models based on statistics and artificial neural networks perform better. To date, the success of solar weather prediction at most temporal and spatial scales is far from satisfactory, but forthcoming ground- and space-based high-resolution observations can add fundamental pieces to the modelling and prediction frameworks, as can the application of advanced mathematical approaches to the analysis of diachronic solar observations, which are essential to provide comprehensive and homogeneous data sets.

  15. Prediction of groundwater contamination with 137Cs and 131I from the Fukushima nuclear accident in the Kanto district.

    PubMed

    Ohta, Tomoko; Mahara, Yasunori; Kubota, Takumi; Fukutani, Satoshi; Fujiwara, Keiko; Takamiya, Koichi; Yoshinaga, Hisao; Mizuochi, Hiroyuki; Igarashi, Toshifumi

    2012-09-01

    We measured the concentrations of (131)I, (134)Cs, and (137)Cs released from the Fukushima nuclear accident in soil and rainwater samples collected March 30-31, 2011, in Ibaraki Prefecture, Kanto district, bordering Fukushima Prefecture to the south. Column experiments revealed that all (131)I in rainwater samples was adsorbed onto an anion-exchange resin. However, 30% of (131)I was not retained by the resin after it passed through a soil layer, suggesting that a portion of (131)I became bound to organic matter from the soil. The (137)Cs migration rate was estimated to be approximately 0.6 mm/y in the Kanto area, which indicates that contamination of groundwater by (137)Cs is not likely to occur in rainwater infiltrating into the surface soil after the Fukushima accident.
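    The reported migration rate implies very long travel times to groundwater. A back-of-the-envelope check, assuming a hypothetical 1 m depth to the water table (the depth is an illustrative assumption, not from the study):

```python
# Order-of-magnitude check on the reported 137Cs migration rate.
migration_rate_mm_per_yr = 0.6   # value reported in the abstract
water_table_depth_m = 1.0        # hypothetical shallow water table
cs137_half_life_yr = 30.17       # physical half-life of 137Cs

travel_time_yr = water_table_depth_m * 1000.0 / migration_rate_mm_per_yr
elapsed_half_lives = travel_time_yr / cs137_half_life_yr
decay_factor = 0.5 ** elapsed_half_lives  # activity remaining on arrival
```

    Even for a shallow water table, the travel time spans dozens of 137Cs half-lives, so essentially no activity survives the journey, which supports the authors' conclusion that groundwater contamination is unlikely.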

  16. Mathematical model for predicting human vertebral fracture

    NASA Technical Reports Server (NTRS)

    Benedict, J. V.

    1973-01-01

    Mathematical model has been constructed to predict dynamic response of tapered, curved beam columns inasmuch as the human spine closely resembles this form. Model takes into consideration effects of impact force, mass distribution, and material properties. Solutions were verified by dynamic tests on curved, tapered, elastic polyethylene beam.

  17. A Predictive Model of Inquiry to Enrollment

    ERIC Educational Resources Information Center

    Goenner, Cullen F.; Pauls, Kenton

    2006-01-01

    The purpose of this paper is to build a predictive model of enrollment that provides data-driven analysis to improve undergraduate recruitment efforts. We utilize an inquiry model, which examines the enrollment decisions of students that have made contact with our institution, a medium-sized, public, Doctoral I university. A student, who makes an…

  18. Prediction of the relative activity levels of the actinides in a fallout from a nuclear reactor accident.

    PubMed

    Friberg, I

    1999-02-01

    The relative activities of the actinides that can be expected in a fresh fallout from a nuclear reactor (BWR, PWR, RBMK) accident have been estimated from fuel composition calculations. The results can be used to (1) adapt analytical methods to better suit emergency situations, (2) estimate the activity levels of radionuclides not measured and (3) estimate the relative activities of nuclides in unresolved alpha-peaks. The latter two can be applied to investigations concerning the Chernobyl fallout, in addition to emergency situations.

  19. Assessing calibration of multinomial risk prediction models.

    PubMed

    Van Hoorde, Kirsten; Vergouwe, Yvonne; Timmerman, Dirk; Van Huffel, Sabine; Steyerberg, Ewout W; Van Calster, Ben

    2014-07-10

    Calibration, that is, whether observed outcomes agree with predicted risks, is important when evaluating risk prediction models. For dichotomous outcomes, several tools exist to assess different aspects of model calibration, such as calibration-in-the-large, logistic recalibration, and (non-)parametric calibration plots. We aim to extend these tools to prediction models for polytomous outcomes. We focus on models developed using multinomial logistic regression (MLR): outcome Y with k categories is predicted using k - 1 equations comparing each category i (i = 2, … ,k) with reference category 1 using a set of predictors, resulting in k - 1 linear predictors. We propose a multinomial logistic recalibration framework that involves an MLR fit where Y is predicted using the k - 1 linear predictors from the prediction model. A non-parametric alternative may use vector splines for the effects of the linear predictors. The parametric and non-parametric frameworks can be used to generate multinomial calibration plots. Further, the parametric framework can be used for the estimation and statistical testing of calibration intercepts and slopes. Two illustrative case studies are presented, one on the diagnosis of malignancy of ovarian tumors and one on residual mass diagnosis in testicular cancer patients treated with cisplatin-based chemotherapy. The risk prediction models were developed on data from 2037 and 544 patients and externally validated on 1107 and 550 patients, respectively. We conclude that calibration tools can be extended to polytomous outcomes. The polytomous calibration plots are particularly informative through the visual summary of the calibration performance.
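    The k - 1 linear predictors map to k risks through the inverse multinomial logit, and the recalibration framework refits intercepts and slopes on those linear predictors (0 and 1, respectively, for a perfectly calibrated model). A minimal sketch of that mapping:

```python
import math

def multinomial_risks(linear_predictors):
    """Map k-1 linear predictors (each category vs the reference) to k
    risks via the inverse multinomial logit (softmax with reference)."""
    exps = [1.0] + [math.exp(lp) for lp in linear_predictors]
    total = sum(exps)
    return [e / total for e in exps]

def recalibrated_risks(linear_predictors, intercepts, slopes):
    """Apply calibration intercepts/slopes to the linear predictors;
    intercepts = 0 and slopes = 1 leave the model unchanged."""
    adjusted = [a + b * lp
                for a, b, lp in zip(intercepts, slopes, linear_predictors)]
    return multinomial_risks(adjusted)
```

    In the paper's framework the intercepts and slopes are estimated by fitting an MLR of the observed outcome on the linear predictors in the validation data; here they are simply supplied as arguments.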

  20. Thermal barrier coating life prediction model development

    NASA Technical Reports Server (NTRS)

    Hillery, R. V.; Pilsner, B. H.; Mcknight, R. L.; Cook, T. S.; Hartle, M. S.

    1988-01-01

    This report describes work performed to determine the predominant modes of degradation of a plasma sprayed thermal barrier coating system and to develop and verify life prediction models accounting for these degradation modes. The primary TBC system consisted of a low pressure plasma sprayed NiCrAlY bond coat, an air plasma sprayed ZrO2-Y2O3 top coat, and a Rene' 80 substrate. The work was divided into three technical tasks. The primary failure mode to be addressed was loss of the zirconia layer through spalling. Experiments showed that oxidation of the bond coat is a significant contributor to coating failure. It was evident from the test results that the species of oxide scale initially formed on the bond coat plays a role in coating degradation and failure. It was also shown that elevated temperature creep of the bond coat plays a role in coating failure. An empirical model was developed for predicting the test life of specimens with selected coating, specimen, and test condition variations. In the second task, a coating life prediction model was developed based on the data from Task 1 experiments, results from thermomechanical experiments performed as part of Task 2, and finite element analyses of the TBC system during thermal cycles. The third and final task attempted to verify the validity of the model developed in Task 2. This was done by using the model to predict the test lives of several coating variations and specimen geometries, then comparing these predicted lives to experimentally determined test lives. It was found that the model correctly predicts trends, but that additional refinement is needed to accurately predict coating life.

  1. Are animal models predictive for humans?

    PubMed Central

    2009-01-01

    It is one of the central aims of the philosophy of science to elucidate the meanings of scientific terms and also to think critically about their application. The focus of this essay is the scientific term predict and whether there is credible evidence that animal models, especially in toxicology and pathophysiology, can be used to predict human outcomes. Whether animals can be used to predict human response to drugs and other chemicals is apparently a contentious issue. However, when one empirically analyzes animal models using scientific tools they fall far short of being able to predict human responses. This is not surprising considering what we have learned from fields such as evolutionary and developmental biology, gene regulation and expression, epigenetics, complexity theory, and comparative genomics. PMID:19146696

  2. Predictive models of implicit and explicit attitudes.

    PubMed

    Perugini, Marco

    2005-03-01

    Explicit attitudes have long been assumed to be central factors influencing behaviour. A recent stream of studies has shown that implicit attitudes, typically measured with the Implicit Association Test (IAT), can also predict a significant range of behaviours. This contribution is focused on testing different predictive models of implicit and explicit attitudes. In particular, three main models can be derived from the literature: (a) additive (the two types of attitudes explain different portion of variance in the criterion), (b) double dissociation (implicit attitudes predict spontaneous whereas explicit attitudes predict deliberative behaviour), and (c) multiplicative (implicit and explicit attitudes interact in influencing behaviour). This paper reports two studies testing these models. The first study (N = 48) is about smoking behaviour, whereas the second study (N = 109) is about preferences for snacks versus fruit. In the first study, the multiplicative model is supported, whereas the double dissociation model is supported in the second study. The results are discussed in light of the importance of focusing on different patterns of prediction when investigating the directive influence of implicit and explicit attitudes on behaviours. PMID:15901390

  3. World Meteorological Organization's model simulations of the radionuclide dispersion and deposition from the Fukushima Daiichi nuclear power plant accident.

    PubMed

    Draxler, Roland; Arnold, Dèlia; Chino, Masamichi; Galmarini, Stefano; Hort, Matthew; Jones, Andrew; Leadbetter, Susan; Malo, Alain; Maurer, Christian; Rolph, Glenn; Saito, Kazuo; Servranckx, René; Shimbori, Toshiki; Solazzo, Efisio; Wotawa, Gerhard

    2015-01-01

    Five different atmospheric transport and dispersion model's (ATDM) deposition and air concentration results for atmospheric releases from the Fukushima Daiichi nuclear power plant accident were evaluated over Japan using regional (137)Cs deposition measurements and (137)Cs and (131)I air concentration time series at one location about 110 km from the plant. Some of the ATDMs used the same and others different meteorological data consistent with their normal operating practices. There were four global meteorological analyses data sets available and two regional high-resolution analyses. Not all of the ATDMs were able to use all of the meteorological data combinations. The ATDMs were configured identically as much as possible with respect to the release duration, release height, concentration grid size, and averaging time. However, each ATDM retained its unique treatment of the vertical velocity field and the wet and dry deposition, one of the largest uncertainties in these calculations. There were 18 ATDM-meteorology combinations available for evaluation. The deposition results showed that even when using the same meteorological analysis, each ATDM can produce quite different deposition patterns. The better calculations in terms of both deposition and air concentration were associated with the smoother ATDM deposition patterns. The best model with respect to the deposition was not always the best model with respect to air concentrations. The use of high-resolution mesoscale analyses improved ATDM performance; however, high-resolution precipitation analyses did not improve ATDM predictions. Although some ATDMs could be identified as better performers for either deposition or air concentration calculations, overall, the ensemble mean of a subset of better performing members provided more consistent results for both types of calculations. PMID:24182910

  5. Statistical regional calibration of subsidence prediction models

    SciTech Connect

    Cleaver, D.N.; Reddish, D.J.; Dunham, R.K.; Shadbolt, C.H.

    1995-11-01

    Like other influence function methods, the SWIFT subsidence prediction program, developed within the Mineral Resources Engineering Department at the University of Nottingham, requires calibration to regional data in order to produce accurate predictions of ground movements. Previously, this software had been calibrated solely to give results consistent with the Subsidence Engineer's Handbook (NCB, 1975). This approach was satisfactory for the majority of cases based in the United Kingdom, upon which the calibration was based. However, in certain circumstances within the UK and, almost always, in overseas case studies, the predictions do not correspond to observed patterns of ground movement. Therefore, in order that SWIFT, and other subsidence prediction packages, can be considered more universal, an improved and adaptable method of regional calibration must be incorporated. This paper describes the analysis of a large database of case histories from the UK industry and international publications. Observed maximum subsidence, mining geometry, and Geological Index for several hundred cases have been statistically analyzed with a view to developing prediction models. The models developed not only predict maximum subsidence more accurately than previously used systems but are also capable of indicating the likely range of prediction error to a given degree of probability. Finally, the paper illustrates how this statistical approach can be incorporated as a calibration system for the influence function program, SWIFT.

  6. Predictive capability of chlorination disinfection byproducts models.

    PubMed

    Ged, Evan C; Chadik, Paul A; Boyer, Treavor H

    2015-02-01

    Over 100 models have been developed for predicting trihalomethanes (THMs), haloacetic acids (HAAs), bromate, and unregulated disinfection byproducts (DBPs). Until now, no publication has evaluated the variability of previous THM and HAA models using a common data set. In this article, the standard error (SE), Marquardt's percent standard deviation (MPSD), and the linear coefficient of determination (R(2)) were used to analyze the variability of 87 models from 23 different publications. The most robust models were capable of predicting THM4 with an SE of 48 μg L(-1) and HAA6 with an SE of 15 μg L(-1), both achieving R(2) > 0.90. The majority of models were formulated for THM4. There is a lack of published models evaluating total HAAs, individual THM and HAA species, bromate, and unregulated DBPs.
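    The SE and R(2) statistics used in this evaluation can be computed as below. MPSD is omitted, and the SE denominator (here n minus the number of fitted parameters) is an assumption, as conventions vary between publications:

```python
import math

def standard_error(obs, pred, n_params=2):
    """Standard error of the estimate (residual SD, n - p denominator)."""
    resid_ss = sum((o - p) ** 2 for o, p in zip(obs, pred))
    return math.sqrt(resid_ss / (len(obs) - n_params))

def r_squared(obs, pred):
    """Linear coefficient of determination between observed and predicted."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot
```

    Applying a common data set through each published model and scoring it this way is what allows the models to be ranked on an equal footing.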

  7. A High Precision Prediction Model Using Hybrid Grey Dynamic Model

    ERIC Educational Resources Information Center

    Li, Guo-Dong; Yamaguchi, Daisuke; Nagai, Masatake; Masuda, Shiro

    2008-01-01

    In this paper, we propose a new prediction analysis model which combines the first order one variable Grey differential equation Model (abbreviated as GM(1,1) model) from grey system theory and time series Autoregressive Integrated Moving Average (ARIMA) model from statistics theory. We abbreviate the combined GM(1,1) ARIMA model as ARGM(1,1)…
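    The GM(1,1) component of such a hybrid model fits a first-order grey differential equation to the accumulated series and extrapolates it. A minimal self-contained sketch (the ARIMA stage of the hybrid is omitted):

```python
import math

def gm11_fit(x0):
    """Fit GM(1,1) to a short positive series x0 by least squares."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]              # accumulated series
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]  # background values
    y = x0[1:]
    m = n - 1
    sz, sx = sum(z), sum(y)
    szz = sum(v * v for v in z)
    szx = sum(v * w for v, w in zip(z, y))
    a = (sz * sx / m - szx) / (szz - sz * sz / m)  # development coefficient
    b = (sx + a * sz) / m                          # grey input
    return a, b

def gm11_predict(x0, a, b, steps=1):
    """Predict `steps` values beyond the series from the fitted model."""
    n = len(x0)
    def x1_hat(k):
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    return [x1_hat(k) - x1_hat(k - 1) for k in range(n, n + steps)]

x0 = [1.0, 1.2, 1.44, 1.728]       # toy series growing ~20% per step
a, b = gm11_fit(x0)
next_value = gm11_predict(x0, a, b, steps=1)[0]
```

    In the hybrid scheme, an ARIMA model would then be fitted to the GM(1,1) residuals to capture the structure the grey model misses.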

  8. Using Numerical Models in the Development of Software Tools for Risk Management of Accidents with Oil and Inert Spills

    NASA Astrophysics Data System (ADS)

    Fernandes, R.; Leitão, P. C.; Braunschweig, F.; Lourenço, F.; Galvão, P.; Neves, R.

    2012-04-01

    The increasing ship traffic and maritime transport of dangerous substances make it more difficult to significantly reduce the environmental, economic and social risks posed by potential spills, even though security rules are becoming more restrictive (ships with double hulls, etc.) and surveillance systems more developed (VTS, AIS). In fact, the problems associated with spills are, and will remain, a central topic: spill events happen continuously, most of them unknown to the general public because of their small-scale impact, but some of them (in much smaller numbers) become authentic media phenomena in this information era, due to their large dimensions, their environmental and socio-economic impacts on ecosystems and local communities, and the spectacular or shocking pictures they generate. Hence, the adverse consequences of this type of accident increase the concern with avoiding them in the future, or minimising their impacts, using not only surveillance and monitoring tools but also an increased capacity to predict the fate and behaviour of bodies, objects, or substances in the hours following the accident - numerical models can now play a leading role in operational oceanography applied to safety and pollution response in the ocean because of their predictive potential. Search and rescue operations and risk analysis for oil, inert (ship debris or floating containers), and HNS (hazardous and noxious substances) spills are the main areas where models can be used. Model applications have been widely used in emergency or planning issues associated with pollution risks, and in contingency and mitigation measures. Before a spill, in the planning stage, modelling simulations are used in environmental impact studies or risk maps, using historical data, reference situations, and typical scenarios. After a spill, the use of fast and simple modelling applications allows an understanding of the fate and behaviour of the spilt

  9. Predicting freakish sea state with an operational third generation wave model

    NASA Astrophysics Data System (ADS)

    Waseda, T.; In, K.; Kiyomatsu, K.; Tamura, H.; Miyazawa, Y.; Iyama, K.

    2013-11-01

    The understanding of the freak wave generation mechanism has advanced, and the community has reached a consensus that spectral geometry plays an important role. Numerous marine accident cases have been studied, revealing that narrowing of the directional spectrum is a good indicator of dangerous seas. However, the estimation of the directional spectrum depends on the performance of the third generation wave model. In this work, a well-studied marine accident case in Japan in 1980 (the Onomichi-Maru incident) is revisited and the sea states are hindcast using both the DIA and SRIAM nonlinear source terms. The result indicates that the temporal evolution of the basic parameters (directional spreading and frequency bandwidth) agrees reasonably well between the two schemes, and therefore the most commonly used DIA method is qualitatively sufficient to predict freakish sea states. The analyses revealed that in the case of Onomichi-Maru, a moving gale system caused the spectrum to grow in energy with limited down-shifting at the accident site. This conclusion contradicts the marine inquiry report, which speculated that two swell systems crossed at the accident site. The unimodal wave system grew under the strong influence of local wind with a peculiar energy transfer.

  10. Predicting freakish sea state with an operational third-generation wave model

    NASA Astrophysics Data System (ADS)

    Waseda, T.; In, K.; Kiyomatsu, K.; Tamura, H.; Miyazawa, Y.; Iyama, K.

    2014-04-01

    The understanding of freak wave generation mechanisms has advanced and the community has reached a consensus that spectral geometry plays an important role. Numerous marine accident cases were studied and revealed that the narrowing of the directional spectrum is a good indicator of dangerous sea. However, the estimation of the directional spectrum depends on the performance of the third-generation wave model. In this work, a well-studied marine accident case in Japan in 1980 (Onomichi-Maru incident) is revisited and the sea states are hindcasted using both the DIA (discrete interaction approximation) and SRIAM (Simplified Research Institute of Applied Mechanics) nonlinear source terms. The result indicates that the temporal evolution of the basic parameters (directional spreading and frequency bandwidth) agrees reasonably well between the two schemes and therefore the most commonly used DIA method is qualitatively sufficient to predict freakish sea state. The analyses revealed that in the case of Onomichi-Maru, a moving gale system caused the spectrum to grow in energy with limited downshifting at the accident site. This conclusion contradicts the marine inquiry report, which speculated that two swell systems crossed at the accident site. The unimodal wave system grew under strong influence of local wind with a peculiar energy transfer.

  11. Key factors contributing to accident severity rate in construction industry in Iran: a regression modelling approach.

    PubMed

    Soltanzadeh, Ahmad; Mohammadfam, Iraj; Moghimbeigi, Abbas; Ghiasvand, Reza

    2016-03-01

    The construction industry involves the highest risk of occupational accidents and bodily injuries, which range from mild to very severe. The aim of this cross-sectional study was to identify the factors associated with the accident severity rate (ASR) in the largest Iranian construction companies, based on data on 500 occupational accidents recorded from 2009 to 2013. We also gathered data on safety and health risk management and training systems. Data were analysed using Pearson's chi-squared coefficient and multiple regression analysis. The median ASR (and interquartile range) was 107.50 (57.24-381.25). Fourteen of the 24 studied factors stood out as most affecting construction accident severity (p<0.05). These findings can be applied in the design and implementation of a comprehensive safety and health risk management system to reduce the ASR. PMID:27092639
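The analysis pipeline described above, a median/IQR summary of the ASR plus a multiple regression over candidate factors, can be sketched as follows; the data and factor effects are entirely hypothetical, not the study's values:

```python
import numpy as np

def asr_summary(asr):
    """Median accident severity rate and its interquartile range."""
    q1, med, q3 = np.percentile(asr, [25, 50, 75])
    return med, (q1, q3)

def fit_multiple_regression(X, y):
    """Ordinary least squares for y ~ intercept + X @ beta."""
    X1 = np.column_stack([np.ones(len(X)), X])  # add intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta  # [intercept, coef_1, ..., coef_p]

# Hypothetical data: 500 accidents, 3 candidate factors
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 100 + 20 * X[:, 0] - 5 * X[:, 2] + rng.normal(scale=2.0, size=500)
beta = fit_multiple_regression(X, y)
med, iqr = asr_summary(y)
```

Significant factors would then be screened (as in the study) by testing each coefficient, e.g. with a chi-squared or t statistic.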

  12. Multi-Model Ensemble Wake Vortex Prediction

    NASA Technical Reports Server (NTRS)

    Koerner, Stephan; Ahmad, Nash'at N.; Holzaepfel, Frank; VanValkenburg, Randal L.

    2015-01-01

    Several multi-model ensemble methods are investigated for predicting wake vortex transport and decay. This study is a joint effort between the National Aeronautics and Space Administration (NASA) and the Deutsches Zentrum fuer Luft- und Raumfahrt (DLR) to develop a multi-model ensemble capability using their respective wake models. An overview of different multi-model ensemble methods and their feasibility for wake applications is presented. The methods include Reliability Ensemble Averaging, Bayesian Model Averaging, and Monte Carlo Simulations. The methodologies are evaluated using data from wake vortex field experiments.
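As an illustration of the ensemble idea, a minimal skill-weighted average in the spirit of Reliability Ensemble Averaging might look like this; the inverse-RMSE weighting and all numbers are illustrative assumptions, not the study's actual formulation:

```python
import numpy as np

def reliability_weights(errors):
    """Weight each member model by its inverse RMSE against past
    observations (a simple REA-like reliability measure)."""
    rmse = np.sqrt(np.mean(np.asarray(errors) ** 2, axis=1))
    w = 1.0 / rmse
    return w / w.sum()

def ensemble_predict(predictions, weights):
    """Weighted average of the member-model predictions."""
    return np.asarray(predictions) @ weights

# Hypothetical past errors (m) of three wake models on three cases
past_errors = [[0.1, -0.1, 0.2],
               [0.5, 0.4, -0.6],
               [1.0, -0.9, 1.1]]
w = reliability_weights(past_errors)
forecast = ensemble_predict([10.2, 10.8, 9.0], w)  # new-case predictions (m)
```

The historically most accurate model dominates the weighted forecast, while poorer models still contribute.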

  13. Alcator C-Mod predictive modeling

    NASA Astrophysics Data System (ADS)

    Pankin, Alexei; Bateman, Glenn; Kritz, Arnold; Greenwald, Martin; Snipes, Joseph; Fredian, Thomas

    2001-10-01

    Predictive simulations for the Alcator C-mod tokamak [I. Hutchinson et al., Phys. Plasmas 1, 1511 (1994)] are carried out using the BALDUR integrated modeling code [C. E. Singer et al., Comput. Phys. Commun. 49, 275 (1988)]. The results are obtained for temperature and density profiles using the Multi-Mode transport model [G. Bateman et al., Phys. Plasmas 5, 1793 (1998)] as well as the mixed-Bohm/gyro-Bohm transport model [M. Erba et al., Plasma Phys. Controlled Fusion 39, 261 (1997)]. The simulated discharges are characterized by very high plasma density in both low and high modes of confinement. The predicted profiles for each of the transport models match the experimental data about equally well in spite of the fact that the two models have different dimensionless scalings. Average relative rms deviations are less than 8% for the electron density profiles and 16% for the electron and ion temperature profiles.

  14. An Online Adaptive Model for Location Prediction

    NASA Astrophysics Data System (ADS)

    Anagnostopoulos, Theodoros; Anagnostopoulos, Christos; Hadjiefthymiades, Stathes

    Context-awareness is viewed as one of the most important aspects of the emerging pervasive computing paradigm. Mobile context-aware applications are required to sense and react to changing environment conditions. Such applications usually need to recognize, classify and predict context in order to act efficiently beforehand, for the benefit of the user. In this paper, we propose a mobility prediction model which deals with context representation and location prediction of moving users. Machine Learning (ML) techniques are used for trajectory classification. Spatial and temporal on-line clustering is adopted. We rely on Adaptive Resonance Theory (ART) for location prediction, which is treated as a context classification problem. We introduce a novel classifier that applies a Hausdorff-like distance over the extracted trajectories to handle location prediction. Since our approach is time-sensitive, the Hausdorff distance is considered more advantageous than a simple Euclidean norm. A learning method is presented and evaluated. We compare ART with the Offline k-Means and Online k-Means algorithms. Our findings are very promising for the use of the proposed model in mobile context-aware applications.
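The classical symmetric Hausdorff distance underlying such a classifier can be sketched directly. The paper's time-sensitive "Hausdorff-like" variant is not specified here, so this is the plain form, with time simply included as an extra coordinate:

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two trajectories, each an
    (n, d) array of points (e.g. columns x, y, t)."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise distances
    d_ab = D.min(axis=1).max()  # farthest A-point from its nearest B-point
    d_ba = D.min(axis=0).max()
    return max(d_ab, d_ba)

# Two short (x, y, t) trajectories; including t makes the comparison
# sensitive to *when* each location was visited, not just where
A = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 1.0], [2.0, 0.0, 2.0]])
B = np.array([[0.0, 0.5, 0.0], [1.0, 0.5, 1.0], [2.0, 0.5, 2.0]])
d = hausdorff(A, B)  # each point lies 0.5 from its nearest counterpart
```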

  15. Predictive coding as a model of cognition.

    PubMed

    Spratling, M W

    2016-08-01

    Previous work has shown that predictive coding can provide a detailed explanation of a very wide range of low-level perceptual processes. It is also widely believed that predictive coding can account for high-level, cognitive abilities. This article provides support for this view by showing that predictive coding can simulate phenomena such as categorisation, the influence of abstract knowledge on perception, recall and reasoning about conceptual knowledge, context-dependent behavioural control, and naive physics. The particular implementation of predictive coding used here (PC/BC-DIM) has previously been used to simulate low-level perceptual behaviour and the neural mechanisms that underlie it. This algorithm thus provides a single framework for modelling both perceptual and cognitive brain function. PMID:27118562
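As a generic illustration of the predictive coding family (not the PC/BC-DIM algorithm itself, which uses divisive input modulation), a minimal linear inference loop alternates between computing prediction errors and updating the internal representation until the top-down prediction explains the input:

```python
import numpy as np

def pc_infer(x, W, eta=0.1, steps=200):
    """Minimal linear predictive-coding inference: adjust the internal
    representation r so the top-down prediction W @ r explains input x.
    Error units compute e = x - W r; representation units integrate W^T e."""
    r = np.zeros(W.shape[1])
    for _ in range(steps):
        e = x - W @ r           # prediction error
        r = r + eta * W.T @ e   # update representation to reduce the error
    return r, e

W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])          # two "causes" predicting three inputs
x = W @ np.array([2.0, -1.0])       # input generated by known causes
r, e = pc_infer(x, W)               # inference should recover the causes
```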

  16. Modelling language evolution: Examples and predictions

    NASA Astrophysics Data System (ADS)

    Gong, Tao; Shuai, Lan; Zhang, Menghan

    2014-06-01

    We survey recent computer modelling research of language evolution, focusing on a rule-based model simulating the lexicon-syntax coevolution and an equation-based model quantifying the language competition dynamics. We discuss four predictions of these models: (a) correlation between domain-general abilities (e.g. sequential learning) and language-specific mechanisms (e.g. word order processing); (b) coevolution of language and relevant competences (e.g. joint attention); (c) effects of cultural transmission and social structure on linguistic understandability; and (d) commonalities between linguistic, biological, and physical phenomena. All these contribute significantly to our understanding of the evolutions of language structures, individual learning mechanisms, and relevant biological and socio-cultural factors. We conclude the survey by highlighting three future directions of modelling studies of language evolution: (a) adopting experimental approaches for model evaluation; (b) consolidating empirical foundations of models; and (c) multi-disciplinary collaboration among modelling, linguistics, and other relevant disciplines.

  17. Combining Modeling and Gaming for Predictive Analytics

    SciTech Connect

    Riensche, Roderick M.; Whitney, Paul D.

    2012-08-22

    Many of our most significant challenges involve people. While human behavior has long been studied, there are recent advances in computational modeling of human behavior. With advances in computational capabilities come increases in the volume and complexity of data that humans must understand in order to make sense of and capitalize on these modeling advances. Ultimately, models represent an encapsulation of human knowledge. One inherent challenge in modeling is efficient and accurate transfer of knowledge from humans to models, and subsequent retrieval. The simulated real-world environment of games presents one avenue for these knowledge transfers. In this paper we describe our approach of combining modeling and gaming disciplines to develop predictive capabilities, using formal models to inform game development, and using games to provide data for modeling.

  18. DKIST Polarization Modeling and Performance Predictions

    NASA Astrophysics Data System (ADS)

    Harrington, David

    2016-05-01

    Calibrating the Mueller matrices of large aperture telescopes and associated coude instrumentation requires astronomical sources and several modeling assumptions to predict the behavior of the system polarization with field of view, altitude, azimuth and wavelength. The Daniel K. Inouye Solar Telescope (DKIST) polarimetric instrumentation requires very high accuracy calibration of a complex coude path with an off-axis f/2 primary mirror, time-dependent optical configurations and a substantial field of view. Polarization predictions across a diversity of optical configurations, tracking scenarios, slit geometries and vendor coating formulations are critical to both construction and continued operations efforts. Recent daytime sky based polarization calibrations of the 4m AEOS telescope and HiVIS spectropolarimeter on Haleakala have provided system Mueller matrices over full telescope articulation for a 15-reflection coude system. AEOS and HiVIS are a DKIST analog with a many-fold coude optical feed and similar mirror coatings creating 100% polarization cross-talk with altitude, azimuth and wavelength. Polarization modeling predictions using Zemax have successfully matched the altitude-azimuth-wavelength dependence on HiVIS within the few percent amplitude limitations of several instrument artifacts. Polarization predictions for coude beam paths depend greatly on modeling the angle-of-incidence dependences in powered optics and the mirror coating formulations. A 6 month HiVIS daytime sky calibration plan has been analyzed for accuracy under a wide range of sky conditions and data analysis algorithms. Predictions of polarimetric performance for the DKIST first-light instrumentation suite have been created under a range of configurations. These new modeling tools and polarization predictions have substantial impact for the design, fabrication and calibration process in the presence of manufacturing issues, science use-case requirements and ultimate system calibration
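The kind of cross-talk modeling described can be sketched by composing per-mirror Mueller matrices along a fold train. The diattenuation and retardance values below are illustrative placeholders, not DKIST or AEOS coating data:

```python
import numpy as np

def rotation(theta):
    """Mueller matrix for rotating the polarization frame by theta (radians)."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[1, 0, 0, 0],
                     [0, c, s, 0],
                     [0, -s, c, 0],
                     [0, 0, 0, 1]])

def mirror(d=0.05, delta=np.deg2rad(170.0)):
    """Idealized fold-mirror Mueller matrix with diattenuation d and
    retardance delta (illustrative values, not a real coating model)."""
    cd, sd = np.cos(delta), np.sin(delta)
    return np.array([[1, d, 0, 0],
                     [d, 1, 0, 0],
                     [0, 0, cd, sd],
                     [0, 0, -sd, cd]])

def coude_train(angles):
    """Compose fold mirrors at different orientation angles."""
    M = np.eye(4)
    for th in angles:
        M = rotation(-th) @ mirror() @ rotation(th) @ M
    return M

M = coude_train(np.deg2rad([0.0, 45.0, 90.0]))
out = M @ np.array([1.0, 1.0, 0.0, 0.0])  # pure +Q light entering the train
```

Even this three-mirror toy train converts incoming Q into U and V, which is the cross-talk a full system calibration must characterize.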

  19. Predictive performance models and multiple task performance

    NASA Technical Reports Server (NTRS)

    Wickens, Christopher D.; Larish, Inge; Contorer, Aaron

    1989-01-01

    Five models that predict how performance of multiple tasks will interact in complex task scenarios are discussed. The models are shown in terms of the assumptions they make about human operator divided attention. The different assumptions about attention are then empirically validated in a multitask helicopter flight simulation. It is concluded from this simulation that the most important assumption relates to the coding of demand level of different component tasks.

  20. Predictive analytics can support the ACO model.

    PubMed

    Bradley, Paul

    2012-04-01

    Predictive analytics can be used to rapidly spot hard-to-identify opportunities to better manage care--a key tool in accountable care. When considering analytics models, healthcare providers should: Make value-based care a priority and act on information from analytics models. Create a road map that includes achievable steps, rather than major endeavors. Set long-term expectations and recognize that the effectiveness of an analytics program takes time, unlike revenue cycle initiatives that may show a quick return.

  1. Predictive Modeling of the CDRA 4BMS

    NASA Technical Reports Server (NTRS)

    Coker, Robert; Knox, James

    2016-01-01

    Fully predictive models of the Four Bed Molecular Sieve of the Carbon Dioxide Removal Assembly on the International Space Station are being developed. This virtual laboratory will be used to help reduce mass, power, and volume requirements for future missions. In this paper we describe current and planned modeling developments in the area of carbon dioxide removal to support future crewed Mars missions as well as the resolution of anomalies observed in the ISS CDRA.

  2. Thermal barrier coating life prediction model development

    NASA Technical Reports Server (NTRS)

    Demasi, J. T.; Sheffler, K. D.

    1985-01-01

    The objective is to develop an integrated life prediction model accounting for all potential life-limiting thermal barrier coating (TBC) degradation and failure modes, including spallation resulting from cyclic thermal stress, oxidation degradation, hot corrosion, erosion and foreign object damage.

  3. A Predictive Model for MSSW Student Success

    ERIC Educational Resources Information Center

    Napier, Angela Michele

    2011-01-01

    This study tested a hypothetical model for predicting both graduate GPA and graduation of University of Louisville Kent School of Social Work Master of Science in Social Work (MSSW) students entering the program during the 2001-2005 school years. The preexisting characteristics of demographics, academic preparedness and culture shock along with…

  4. A Robustly Stabilizing Model Predictive Control Algorithm

    NASA Technical Reports Server (NTRS)

    Acikmese, A. Behcet; Carson, John M., III

    2007-01-01

    A model predictive control (MPC) algorithm that differs from prior MPC algorithms has been developed for controlling an uncertain nonlinear system. This algorithm guarantees the resolvability of an associated finite-horizon optimal-control problem in a receding-horizon implementation.
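A minimal (non-robust) receding-horizon loop illustrates the basic principle such an algorithm builds on: solve a finite-horizon optimal-control problem, apply only the first input, then repeat from the new state. This sketch uses an unconstrained linear-quadratic formulation on a hypothetical double integrator, not the robust MPC of the abstract:

```python
import numpy as np

def mpc_step(A, B, x0, N=25, q=1.0, r=0.1):
    """One receding-horizon step: stack the dynamics over an N-step horizon,
    solve min sum_k q*||x_k||^2 + r*u_k^2 as a least-squares problem,
    and return only the first input u_0."""
    n = A.shape[0]
    # Prediction matrices: x_stack = F @ x0 + G @ u
    F = np.vstack([np.linalg.matrix_power(A, k) for k in range(1, N + 1)])
    G = np.zeros((N * n, N))
    for k in range(1, N + 1):
        for j in range(k):
            G[(k - 1) * n:k * n, j] = (np.linalg.matrix_power(A, k - 1 - j) @ B).ravel()
    H = np.vstack([np.sqrt(q) * G, np.sqrt(r) * np.eye(N)])
    b = np.concatenate([-np.sqrt(q) * (F @ x0), np.zeros(N)])
    u = np.linalg.lstsq(H, b, rcond=None)[0]
    return u[0]

# Regulate a discrete double integrator from x = [position, velocity]
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
x = np.array([1.0, 0.0])
for _ in range(300):
    x = A @ x + B.ravel() * mpc_step(A, B, x)  # apply u_0, advance, re-solve
```

Robust formulations like the one described add uncertainty sets and constraints so that each re-solved problem is guaranteed feasible.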

  5. Lag Model Predictions for UFAST SBLI Flowfield

    NASA Technical Reports Server (NTRS)

    Olsen, Mike; Lillard, Randy; Oliver, Brandon; Blaisdell, Gregory

    2010-01-01

    Presentation for the Shock Boundary Layer Interaction Workshop. Shows results for the Lag turbulence model on one of the international workshop test cases, the UFAST 8-degree test case. Comparisons with PIV velocity measurements as well as computed tunnel wall flowfields are shown, emphasizing that the interaction is a 3D phenomenon and is reasonably well predicted.

  6. Nearshore Operational Model for Rip Current Predictions

    NASA Astrophysics Data System (ADS)

    Sembiring, L. E.; Van Dongeren, A. R.; Van Ormondt, M.; Winter, G.; Roelvink, J.

    2012-12-01

    A coastal operational model system can serve as a tool to monitor and predict coastal hazards and to acquire up-to-date information on coastal state indicators. The objective of this research is to develop a nearshore operational model system for the Dutch coast focusing on swimmer safety. For that purpose, an operational model system has been built which can predict conditions up to 48 hours ahead. The model system consists of three nested model domains covering the North Sea, the Dutch coastline, and one local model for the area of interest. Three different process-based models are used to simulate physical processes within the system: SWAN for wave propagation, Delft3D-Flow for hydraulic flow simulation, and XBeach for the nearshore models. The SWAN model is forced by wind fields from operational HiRLAM, as well as two-dimensional wave spectral data from WaveWatch 3 Global at the ocean boundaries. The Delft3D-Flow model is forced by assigning the boundaries tidal constants for several important astronomical components as well as HiRLAM wind fields. For the local XBeach model, up-to-date bathymetry will be obtained by assimilating model computations and Argus video observations. A hindcast was carried out on the Continental Shelf Model, covering the North Sea and nearby Atlantic Ocean, for the year 2009. Model skill is represented by several statistical measures such as rms error and bias. In general, the results show that the model system exhibits good agreement with field data. For the SWAN results, integral significant wave heights are predicted well by the model for all wave buoys considered, with rms errors ranging from 0.16 m for the month of May, with an observed mean significant wave height of 1.08 m, up to 0.39 m for the month of November, with an observed mean significant wave height of 1.91 m. However, it is found that the wave model slightly underestimates the observations for the period of June, especially
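The rms error and bias skill measures used in such hindcast verification are straightforward to compute; the wave-height series below is hypothetical:

```python
import numpy as np

def rms_error(pred, obs):
    """Root-mean-square error of predicted vs observed values."""
    pred, obs = np.asarray(pred), np.asarray(obs)
    return np.sqrt(np.mean((pred - obs) ** 2))

def bias(pred, obs):
    """Mean error; negative values mean the model underestimates."""
    return float(np.mean(np.asarray(pred) - np.asarray(obs)))

# Hypothetical significant-wave-height series (m) at one buoy
obs = [1.0, 1.2, 0.9, 1.5, 1.1]
pred = [0.9, 1.1, 1.0, 1.3, 1.0]
rms = rms_error(pred, obs)
b = bias(pred, obs)  # negative here: the model underestimates on average
```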

  7. Predicting the long-term ¹³⁷Cs distribution in Fukushima after the Fukushima Dai-ichi nuclear power plant accident: a parameter sensitivity analysis.

    PubMed

    Yamaguchi, Masaaki; Kitamura, Akihiro; Oda, Yoshihiro; Onishi, Yasuo

    2014-09-01

    than those of the other rivers. Annual sediment outflows from the Abukuma River and the total from the other 13 river basins were calculated as 3.2 × 10⁴-3.1 × 10⁵ and 3.4 × 10⁴-2.1 × 10⁵ t y⁻¹, respectively. The values vary between calculation cases because of the critical shear stress, the rainfall factor, and other differences. On the other hand, contributions of those parameters were relatively small for ¹³⁷Cs concentration within transported soil. This indicates that the total amount of ¹³⁷Cs outflow into the ocean would mainly be controlled by the amount of soil erosion and transport and the total amount of ¹³⁷Cs concentration remaining within the basin. Outflows of ¹³⁷Cs from the Abukuma River and the total from the other 13 river basins during the first year after the accident were calculated to be 2.3 × 10¹¹-3.7 × 10¹² and 4.6 × 10¹¹-6.5 × 10¹² Bq y⁻¹, respectively. The former results were compared with the field investigation results, and the order of magnitude was matched between the two, but the value of the investigation result was beyond the upper limit of model prediction.

  8. Predicting the long-term ¹³⁷Cs distribution in Fukushima after the Fukushima Dai-ichi nuclear power plant accident: a parameter sensitivity analysis.

    PubMed

    Yamaguchi, Masaaki; Kitamura, Akihiro; Oda, Yoshihiro; Onishi, Yasuo

    2014-09-01

    than those of the other rivers. Annual sediment outflows from the Abukuma River and the total from the other 13 river basins were calculated as 3.2 × 10⁴-3.1 × 10⁵ and 3.4 × 10⁴-2.1 × 10⁵ t y⁻¹, respectively. The values vary between calculation cases because of the critical shear stress, the rainfall factor, and other differences. On the other hand, contributions of those parameters were relatively small for ¹³⁷Cs concentration within transported soil. This indicates that the total amount of ¹³⁷Cs outflow into the ocean would mainly be controlled by the amount of soil erosion and transport and the total amount of ¹³⁷Cs concentration remaining within the basin. Outflows of ¹³⁷Cs from the Abukuma River and the total from the other 13 river basins during the first year after the accident were calculated to be 2.3 × 10¹¹-3.7 × 10¹² and 4.6 × 10¹¹-6.5 × 10¹² Bq y⁻¹, respectively. The former results were compared with the field investigation results, and the order of magnitude was matched between the two, but the value of the investigation result was beyond the upper limit of model prediction. PMID:24836353

  9. PREDICTIVE MODELING OF CHOLERA OUTBREAKS IN BANGLADESH

    PubMed Central

    Koepke, Amanda A.; Longini, Ira M.; Halloran, M. Elizabeth; Wakefield, Jon; Minin, Vladimir N.

    2016-01-01

    Despite seasonal cholera outbreaks in Bangladesh, little is known about the relationship between environmental conditions and cholera cases. We seek to develop a predictive model for cholera outbreaks in Bangladesh based on environmental predictors. To do this, we estimate the contribution of environmental variables, such as water depth and water temperature, to cholera outbreaks in the context of a disease transmission model. We implement a method which simultaneously accounts for disease dynamics and environmental variables in a Susceptible-Infected-Recovered-Susceptible (SIRS) model. The entire system is treated as a continuous-time hidden Markov model, where the hidden Markov states are the numbers of people who are susceptible, infected, or recovered at each time point, and the observed states are the numbers of cholera cases reported. We use a Bayesian framework to fit this hidden SIRS model, implementing particle Markov chain Monte Carlo methods to sample from the posterior distribution of the environmental and transmission parameters given the observed data. We test this method using both simulation and data from Mathbaria, Bangladesh. Parameter estimates are used to make short-term predictions that capture the formation and decline of epidemic peaks. We demonstrate that our model can successfully predict an increase in the number of infected individuals in the population weeks before the observed number of cholera cases increases, which could allow for early notification of an epidemic and timely allocation of resources. PMID:27746850
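A deterministic SIRS core with an environmental forcing term can be sketched as follows; this omits the paper's hidden-Markov observation layer and particle MCMC fitting, and all rates and the forcing function are illustrative:

```python
import numpy as np

def sirs_run(beta_t, gamma=0.1, rho=0.02, N=10000, I0=10, days=180):
    """Discrete-time SIRS: S->I at rate beta(t)*S*I/N, I->R at rate gamma,
    R->S (waning immunity) at rate rho. beta_t maps day -> transmission rate."""
    S, I, R = N - I0, float(I0), 0.0
    series = []
    for t in range(days):
        new_inf = beta_t(t) * S * I / N
        new_rec = gamma * I
        new_sus = rho * R
        S += new_sus - new_inf
        I += new_inf - new_rec
        R += new_rec - new_sus
        series.append(I)
    return np.array(series)

# Environmental driver: a seasonal water-temperature proxy raising transmission
beta = lambda t: 0.12 + 0.1 * max(0.0, float(np.sin(2 * np.pi * t / 180)))
infected = sirs_run(beta)
peak_day = int(np.argmax(infected))  # epidemic peak follows the forcing peak
```

In the paper the compartments are hidden and only reported cases are observed; fitting then means inferring the transmission and environmental parameters that make trajectories like this one consistent with the case counts.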

  10. Modeling and analysis of the unprotected loss-of-flow accident in the Clinch River Breeder Reactor

    SciTech Connect

    Morris, E.E.; Dunn, F.E.; Simms, R.; Gruber, E.E.

    1985-01-01

    The influence of fission-gas-driven fuel compaction on the energetics resulting from a loss-of-flow accident was estimated with the aid of the SAS3D accident analysis code. The analysis was carried out as part of the Clinch River Breeder Reactor licensing process. The TREAT tests L6, L7, and R8 were analyzed to assist in the modeling of fuel motion and the effects of plenum fission-gas release on coolant and clad dynamics. Special, conservative modeling was introduced to evaluate the effect of fission-gas pressure on the motion of the upper fuel pin segment following disruption. For the nominal sodium-void worth, fission-gas-driven fuel compaction did not adversely affect the outcome of the transient. When uncertainties in the sodium-void worth were considered, however, it was found that if fuel compaction occurs, loss-of-flow driven transient overpower phenomenology could not be precluded.

  11. Validation and verification of the ICRP biokinetic model of ³²P: the criticality accident at Tokai-Mura, Japan.

    PubMed

    Miyamoto, K; Takeda, H; Nishimura, Y; Yukawa, M; Watanabe, Y; Ishigure, N; Kouno, F; Kuroda, N; Akashi, M

    2003-01-01

    Regrettably, a criticality accident occurred at a uranium conversion facility in Tokai-mura, Ibaraki, Japan, on 30 September 1999. The radioactivities of ³²P in urine, blood and bone samples of the victims, who were severely exposed to neutrons, were measured. ³²P was induced in their whole bodies at the moment of the first nuclear release by the reactions ³¹P(n, γ)³²P and ³²S(n, p)³²P. A realistic biokinetic model was assumed, in which ³²P exchanges between the extracellular fluid compartment and the soft tissue compartment only through the intracellular compartment, and this model was used for preliminary calculations. Some acute excretion of ³²P, caused by decomposition or elution of tissues at the time of the accident, may have happened in the victims' bodies in the first few days. The working hypotheses in the present work should initiate renewed discussion of ³²P biokinetics. PMID:14526956
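The compartment structure described (extracellular fluid exchanging with soft tissue only through the intracellular compartment) can be sketched as a linear system with first-order transfers; all rate constants below are illustrative assumptions, not the fitted ICRP values, while the ³²P half-life of 14.27 days is the physical one:

```python
import numpy as np

LAMBDA_P32 = np.log(2) / 14.27   # physical decay constant of 32P (per day)

def biokinetics(days=30.0, dt=0.01, k_ec_ic=0.5, k_ic_ec=0.2,
                k_ic_st=0.1, k_st_ic=0.05, k_urine=0.3):
    """Euler integration of a hypothetical linear chain in which 32P moves
    between extracellular fluid (EC) and soft tissue (ST) only through the
    intracellular compartment (IC), with urinary excretion from EC.
    All transfers are first order; radioactive decay applies everywhere."""
    ec, ic, st, urine = 1.0, 0.0, 0.0, 0.0   # fractions of initial activity
    for _ in range(int(days / dt)):
        d_ec = k_ic_ec * ic - (k_ec_ic + k_urine) * ec - LAMBDA_P32 * ec
        d_ic = k_ec_ic * ec + k_st_ic * st - (k_ic_ec + k_ic_st) * ic - LAMBDA_P32 * ic
        d_st = k_ic_st * ic - k_st_ic * st - LAMBDA_P32 * st
        urine += dt * k_urine * ec   # activity tallied at the moment of excretion
        ec += dt * d_ec
        ic += dt * d_ic
        st += dt * d_st
    return ec, ic, st, urine

ec, ic, st, urine = biokinetics()
```

Comparing a curve of cumulative urinary activity like this against measured bioassay data is the kind of validation exercise the abstract describes.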

  12. Disease prediction models and operational readiness.

    PubMed

    Corley, Courtney D; Pullum, Laura L; Hartley, David M; Benedum, Corey; Noonan, Christine; Rabinowitz, Peter M; Lancaster, Mary J

    2014-01-01

    The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. We define a disease event to be a biological event with a focus on the One Health paradigm. These events are characterized by evidence of infection and/or disease condition. We reviewed models that attempted to predict a disease event, not merely its transmission dynamics, and we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). We searched commercial and government databases and harvested Google search results for eligible models, using terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. After removal of duplications and extraneous material, a core collection of 6,524 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. As a result, we systematically reviewed 44 papers, and the results are presented in this analysis. We identified 44 models, classified as one or more of the following: event prediction (4), spatial (26), ecological niche (28), diagnostic or clinical (6), spread or response (9), and reviews (3). The model parameters (e.g., etiology, climatic, spatial, cultural) and data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological) were recorded and reviewed. A component of this review is the identification of verification and validation (V&V) methods applied to each model, if any V&V method was reported. All models were classified as either having undergone Some Verification or Validation method, or No Verification or Validation. We close by outlining an initial set of operational readiness level guidelines for disease prediction models based upon established Technology Readiness

  13. Disease Prediction Models and Operational Readiness

    PubMed Central

    Corley, Courtney D.; Pullum, Laura L.; Hartley, David M.; Benedum, Corey; Noonan, Christine; Rabinowitz, Peter M.; Lancaster, Mary J.

    2014-01-01

    The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. We define a disease event to be a biological event with a focus on the One Health paradigm. These events are characterized by evidence of infection and/or disease condition. We reviewed models that attempted to predict a disease event, not merely its transmission dynamics, and we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). We searched commercial and government databases and harvested Google search results for eligible models, using terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. After removal of duplications and extraneous material, a core collection of 6,524 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. As a result, we systematically reviewed 44 papers, and the results are presented in this analysis. We identified 44 models, classified as one or more of the following: event prediction (4), spatial (26), ecological niche (28), diagnostic or clinical (6), spread or response (9), and reviews (3). The model parameters (e.g., etiology, climatic, spatial, cultural) and data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological) were recorded and reviewed. A component of this review is the identification of verification and validation (V&V) methods applied to each model, if any V&V method was reported. All models were classified as either having undergone Some Verification or Validation method, or No Verification or Validation. We close by outlining an initial set of operational readiness level guidelines for disease prediction models based upon established Technology Readiness

  14. Models and numerical methods for the simulation of loss-of-coolant accidents in nuclear reactors

    NASA Astrophysics Data System (ADS)

    Seguin, Nicolas

    2014-05-01

    In view of the simulation of water flows in pressurized water reactors (PWR), many models are available in the literature, and their complexity deeply depends on the required accuracy, see for instance [1]. A loss-of-coolant accident (LOCA) may occur when a pipe breaks. The coolant is composed of light water in its liquid form at very high temperature and pressure (around 300 °C and 155 bar); in case of a LOCA it flashes and instantaneously becomes vapor. A front of liquid/vapor phase transition appears in the pipes and may propagate towards the critical parts of the PWR. It is crucial to propose models that are accurate for the whole phenomenon, but also sufficiently robust to obtain relevant numerical results. Given the application we have in mind, a complete description of the two-phase flow (with all the bubbles, droplets, interfaces…) is out of reach and irrelevant. We investigate averaged models, based on the use of void fractions for each phase, which represent the probability of presence of a phase at a given position and at a given time. The most accurate averaged model, based on the so-called Baer-Nunziato model, describes each phase separately by its own density, velocity and pressure. The two phases are coupled by non-conservative terms due to gradients of the void fractions and by source terms for mechanical relaxation, drag force and mass transfer. With appropriate closure laws, it has been proved [2] that this model complies with all the expected physical requirements: positivity of densities and temperatures, maximum principle for the void fraction, conservation of the mixture quantities, decrease of the global entropy… On the basis of this model, it is possible to derive simpler models, which can be used where the flow is still, see [3]. From the numerical point of view, we develop new Finite Volume schemes in [4], which also satisfy the requirements mentioned above. Since they are based on a partial linearization of the physical
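For reference, a Baer-Nunziato-type two-phase system has the following schematic one-dimensional structure; notation and closures vary across the literature, so this is a generic form, not reproduced from the cited work:

```latex
% Void-fraction transport (alpha_1 + alpha_2 = 1), with pressure relaxation:
\partial_t \alpha_1 + u_I \,\partial_x \alpha_1 = \mu\,(p_1 - p_2)

% For each phase k = 1, 2: mass, momentum, and energy balances
\partial_t(\alpha_k \rho_k) + \partial_x(\alpha_k \rho_k u_k) = \Gamma_k
\partial_t(\alpha_k \rho_k u_k) + \partial_x(\alpha_k \rho_k u_k^2 + \alpha_k p_k)
    = p_I \,\partial_x \alpha_k + D_k
\partial_t(\alpha_k \rho_k E_k) + \partial_x\big(\alpha_k u_k (\rho_k E_k + p_k)\big)
    = p_I \, u_I \,\partial_x \alpha_k + D_k u_I
```

Here Γ_k, D_k and μ(p₁ − p₂) are the mass-transfer, drag and pressure-relaxation source terms mentioned in the abstract, and the p_I ∂ₓα_k terms are the non-conservative coupling; a common closure takes the interface velocity and pressure from opposite phases, e.g. u_I = u₂ and p_I = p₁.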

  15. Can contaminant transport models predict breakthrough?

    USGS Publications Warehouse

    Peng, Wei-Shyuan; Hampton, Duane R.; Konikow, Leonard F.; Kambham, Kiran; Benegar, Jeffery J.

    2000-01-01

    A solute breakthrough curve measured during a two-well tracer test was successfully predicted in 1986 using specialized contaminant transport models. Water was injected into a confined, unconsolidated sand aquifer and pumped out 125 feet (38.3 m) away at the same steady rate. The injected water was spiked with bromide for over three days; the outflow concentration was monitored for a month. Based on previous tests, the horizontal hydraulic conductivity of the thick aquifer varied by a factor of seven among 12 layers. Assuming stratified flow with small dispersivities, two research groups accurately predicted breakthrough with three-dimensional (12-layer) models using curvilinear elements following the arc-shaped flowlines in this test. Can contaminant transport models commonly used in industry, which use rectangular blocks, also reproduce this breakthrough curve? The two-well test was simulated with four MODFLOW-based models: MT3D (FD and HMOC options), MODFLOWT, MOC3D, and MODFLOW-SURFACT. Using the same 12 layers and small dispersivity used in the successful 1986 simulations, these models fit almost as accurately as the models using curvilinear blocks. Subtle variations in the curves illustrate differences among the codes. Sensitivities of the results to the number and size of grid blocks, number of layers, boundary conditions, and values of dispersivity and porosity are briefly presented. The fit between calculated and measured breakthrough curves degenerated as the number of layers and/or grid blocks decreased, reflecting a loss of model predictive power as the level of characterization lessened. Therefore, the breakthrough curve for most field sites can be predicted only qualitatively due to limited characterization of the hydrogeology and contaminant source strength.
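The qualitative effect of layering on breakthrough can be illustrated with the one-dimensional Ogata-Banks solution, averaging per-layer curves whose velocities span a factor of seven; all values are hypothetical and this ignores the test's curved flowlines:

```python
import numpy as np
from math import erfc, sqrt

def breakthrough(t, x, v, D):
    """Ogata-Banks approximation for 1-D advection-dispersion: relative
    concentration C/C0 at distance x and time t, for seepage velocity v
    and dispersion coefficient D (the small second term is neglected)."""
    if t <= 0:
        return 0.0
    return 0.5 * erfc((x - v * t) / (2.0 * sqrt(D * t)))

def layered_breakthrough(t, x, velocities, D):
    """Stratified aquifer: the observed breakthrough is the average of the
    per-layer curves, each layer having its own seepage velocity."""
    return np.mean([breakthrough(t, x, v, D) for v in velocities])

# 12 layers whose conductivity (hence velocity) spans a factor of seven
v_layers = np.linspace(1.0, 7.0, 12)   # m/day, hypothetical
curve = [layered_breakthrough(t, 38.3, v_layers, D=0.5) for t in range(1, 61)]
```

Collapsing the 12 layers into fewer smears this superposition into a single dispersed front, which is the loss of predictive power the abstract describes.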

  16. A fast long-range transport model for operational use in episode simulation. Application to the Chernobyl accident

    NASA Astrophysics Data System (ADS)

    Bonelli, P.; Calori, G.; Finzi, G.

    A simple Lagrangian puff trajectory model and its software implementation, STRALE, are described. Standard meteorological data are used as input for the simulation of the three-dimensional atmospheric transport and dispersion of a pollutant released by a point source. The schemes adopted to describe the vertical diffusion and the interaction with the mixing layer are discussed on the basis of a comparison between simulated and measured ¹³⁷Cs activities for the Chernobyl nuclear accident.
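The core of a Gaussian puff model of this kind is the puff concentration kernel; the spread parameters below are placeholders rather than STRALE's actual dispersion scheme:

```python
import numpy as np

def puff_concentration(x, y, z, xc, yc, Q, sig_h, sig_z, H=0.0):
    """Concentration from a single Gaussian puff of mass Q centred at
    (xc, yc, H): horizontal spread sig_h, vertical spread sig_z, with a
    ground-reflection term (image puff at height -H)."""
    horiz = np.exp(-((x - xc) ** 2 + (y - yc) ** 2) / (2 * sig_h ** 2))
    vert = (np.exp(-(z - H) ** 2 / (2 * sig_z ** 2))
            + np.exp(-(z + H) ** 2 / (2 * sig_z ** 2)))
    return Q / ((2 * np.pi) ** 1.5 * sig_h ** 2 * sig_z) * horiz * vert

# Puff advected by the wind; the sigmas would grow with travel time
u, t = 5.0, 3600.0                  # wind speed (m/s), one hour of travel
xc, yc = u * t, 0.0                 # puff centre after advection
c = puff_concentration(xc, yc, 0.0, xc, yc, Q=1.0, sig_h=500.0, sig_z=100.0)
c_off = puff_concentration(xc + 1000.0, yc, 0.0, xc, yc,
                           Q=1.0, sig_h=500.0, sig_z=100.0)
```

A full trajectory model releases a sequence of such puffs, advects each along the wind field, and sums their contributions at every receptor.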

  17. WASTE-ACC: A computer model for analysis of waste management accidents

    SciTech Connect

    Nabelssi, B.K.; Folga, S.; Kohout, E.J.; Mueller, C.J.; Roglans-Ribas, J.

    1996-12-01

    In support of the U.S. Department of Energy's (DOE's) Waste Management Programmatic Environmental Impact Statement, Argonne National Laboratory has developed WASTE-ACC, a computational framework and integrated PC-based database system, to assess atmospheric releases from facility accidents. WASTE-ACC facilitates the many calculations for the accident analyses necessitated by the numerous combinations of waste types, waste management process technologies, facility locations, and site consolidation strategies in the waste management alternatives across the DOE complex. WASTE-ACC is a comprehensive tool that can effectively test future DOE waste management alternatives and assumptions. The computational framework can access several relational databases to calculate atmospheric releases. The databases contain throughput volumes, waste profiles, treatment process parameters, and accident data such as frequencies of initiators, conditional probabilities of subsequent events, and source term release parameters of the various waste forms under accident stresses. This report describes the computational framework and supporting databases used to conduct accident analyses and to develop source terms to assess potential health impacts that may affect on-site workers and off-site members of the public under various DOE waste management alternatives.

  18. Thermal barrier coating life prediction model development

    NASA Technical Reports Server (NTRS)

    Hillery, R. V.

    1984-01-01

    In order to fully exploit thermal barrier coatings (TBCs) on turbine components and achieve the maximum performance benefit, the knowledge and understanding of TBC failure mechanisms must be increased and the means to predict coating life developed. The proposed program will determine the predominant modes of TBC system degradation and then develop and verify life prediction models accounting for those degradation modes. The successful completion of the program will have dual benefits: the ability to take advantage of the performance benefits offered by TBCs, and a sounder basis for making future improvements in coating behavior.

  19. Illuminating Flash Point: Comprehensive Prediction Models.

    PubMed

    Le, Tu C; Ballard, Mathew; Casey, Phillip; Liu, Ming S; Winkler, David A

    2015-01-01

    Flash point is an important property of chemical compounds that is widely used to evaluate flammability hazard. However, there is often a significant gap between the demand for experimental flash point data and their availability. Furthermore, the determination of flash point is difficult and costly, particularly for some toxic, explosive, or radioactive compounds. The development of a reliable and widely applicable method to predict flash point is therefore essential. In this paper, the construction of a quantitative structure-property relationship model with excellent performance and domain of applicability is reported. It uses the largest data set to date of 9399 chemically diverse compounds, with flash point spanning from less than -130 °C to over 900 °C. The model employs only computed parameters, eliminating the need for experimental data that some earlier computational models required. The model allows accurate prediction of flash point for a broad range of compounds that are unavailable or not yet synthesized. This single model with a very broad range of chemical and flash point applicability will allow accurate predictions of this important property to be made for a broad range of new materials. PMID:27490859

  20. Genetic models of homosexuality: generating testable predictions.

    PubMed

    Gavrilets, Sergey; Rice, William R

    2006-12-22

    Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality including: (i) chromosomal location, (ii) dominance among segregating alleles and (iii) effect sizes that distinguish between the two major models for their polymorphism: the overdominance and sexual antagonism models. We conclude that the measurement of the genetic characteristics of quantitative trait loci (QTLs) found in genomic screens for genes influencing homosexuality can be highly informative in resolving the form of natural selection maintaining their polymorphism.

  1. ENSO Prediction using Vector Autoregressive Models

    NASA Astrophysics Data System (ADS)

    Chapman, D. R.; Cane, M. A.; Henderson, N.; Lee, D.; Chen, C.

    2013-12-01

    A recent comparison (Barnston et al, 2012 BAMS) shows the ENSO forecasting skill of dynamical models now exceeds that of statistical models, but the best statistical models are comparable to all but the very best dynamical models. In this comparison the leading statistical model is the one based on the Empirical Model Reduction (EMR) method. Here we report on experiments with multilevel Vector Autoregressive models using only sea surface temperatures (SSTs) as predictors. VAR(L) models generalize Linear Inverse Models (LIM), which are a VAR(1) method, as well as multilevel univariate autoregressive models. Optimal forecast skill is achieved using 12 to 14 months of prior state information (i.e., 12-14 levels), which allows SSTs alone to capture the effects of other variables such as heat content as well as seasonality. The use of multiple levels allows the model advancing one month at a time to perform at least as well for a 6 month forecast as a model constructed to explicitly forecast 6 months ahead. We infer that the multilevel model has fully captured the linear dynamics (cf. Penland and Magorian, 1993 J. Climate). Finally, while VAR(L) is equivalent to L-level EMR, we show in a 150 year cross validated assessment that we can increase forecast skill by improving on the EMR initialization procedure. The greatest benefit of this change is in allowing the prediction to make effective use of information over many more months.
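    The VAR(L) construction can be illustrated compactly: stack L lagged state vectors into a predictor matrix, fit by least squares, and forecast by advancing one month at a time. This is a minimal sketch on toy data; the variable names and dimensions are invented and do not reproduce the authors' EMR configuration.

```python
import numpy as np

def fit_var(X, L):
    """Fit a VAR(L) model x_t = sum_k A_k x_{t-k} + e_t by least squares.
    X: (T, n) array of state vectors (e.g. gridded SST anomalies).
    Returns the stacked coefficient matrix A of shape (n, n*L)."""
    T, n = X.shape
    # Column block k holds lag k+1 (most recent lag first).
    Z = np.hstack([X[L - k - 1:T - k - 1] for k in range(L)])  # (T-L, n*L)
    Y = X[L:]                                                  # (T-L, n)
    A, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    return A.T

def forecast(X, A, L, steps):
    """Iterate the fitted model one month at a time for `steps` months."""
    hist = list(X[-L:])            # last L states, oldest first
    for _ in range(steps):
        z = np.concatenate(hist[::-1])  # most recent lag first
        hist.append(A @ z)
        hist.pop(0)
    return hist[-1]
```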

  2. STELLA Experiment: Design and Model Predictions

    SciTech Connect

    Kimura, W. D.; Babzien, M.; Ben-Zvi, I.; Campbell, L. P.; Cline, D. B.; Fiorito, R. B.; Gallardo, J. C.; Gottschalk, S. C.; He, P.; Kusche, K. P.; Liu, Y.; Pantell, R. H.; Pogorelsky, I. V.; Quimby, D. C.; Robinson, K. E.; Rule, D. W.; Sandweiss, J.; Skaritka, J.; van Steenbergen, A.; Steinhauer, L. C.; Yakimenko, V.

    1998-07-05

    The STaged ELectron Laser Acceleration (STELLA) experiment will be one of the first to examine the critical issue of staging the laser acceleration process. The BNL inverse free electron laser (IFEL) will serve as a prebuncher to generate ~1 μm long microbunches. These microbunches will be accelerated by an inverse Cerenkov acceleration (ICA) stage. A comprehensive model of the STELLA experiment is described. This model includes the IFEL prebunching, drift and focusing of the microbunches into the ICA stage, and their subsequent acceleration. The model predictions will be presented including the results of a system error study to determine the sensitivity to uncertainties in various system parameters.

  3. Prediction failure of a wolf landscape model

    USGS Publications Warehouse

    Mech, L.D.

    2006-01-01

    I compared 101 wolf (Canis lupus) pack territories formed in Wisconsin during 1993-2004 to the logistic regression predictive model of Mladenoff et al. (1995, 1997, 1999). Of these, 60% were located in areas with putative habitat suitabilities below 50%, while areas of suitability above 50% remained unoccupied by known packs after 24 years of recolonization. This model was a poor predictor of wolf re-colonizing locations in Wisconsin, apparently because it failed to consider the adaptability of wolves. Such models should be used cautiously in wolf-management or restoration plans.
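    For context, a logistic regression landscape model of this kind scores each landscape cell with a suitability probability. The sketch below is illustrative only: the feature set and coefficients are hypothetical placeholders, not Mladenoff et al.'s fitted values.

```python
import math

def habitat_suitability(features, coefs, intercept):
    """Logistic-regression suitability in [0, 1] for one landscape cell.
    `features` (e.g. road density, forest cover) and the coefficients
    are hypothetical placeholders for illustration."""
    z = intercept + sum(b * x for b, x in zip(coefs, features))
    return 1.0 / (1.0 + math.exp(-z))
```

Thresholding such probabilities (e.g. treating cells above 0.5 as wolf habitat) is exactly the step the entry shows can fail when the species is more adaptable than the training data suggests.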

  4. A GIS-based prediction and assessment system of off-site accident consequence for Guangdong nuclear power plant.

    PubMed

    Wang, X Y; Qu, J Y; Shi, Z Q; Ling, Y S

    2003-01-01

    GNARD (Guangdong Nuclear Accident Real-time Decision support system) is a decision support system for off-site emergency management in the event of an accidental release from the nuclear power plants located in Guangdong province, China. The system is capable of calculating wind field, concentrations of radionuclide in environmental media and radiation doses. It can also estimate the size of the area where protective actions should be taken and provide other information about population distribution and emergency facilities available in the area. Furthermore, the system can simulate and evaluate the effectiveness of countermeasures assumed and calculate averted doses by protective actions. All of the results can be shown and analysed on the platform of a geographical information system (GIS).

  5. Multi-scale approach to the modeling of fission gas discharge during hypothetical loss-of-flow accident in gen-IV sodium fast reactor

    SciTech Connect

    Behafarid, F.; Shaver, D. R.; Bolotnov, I. A.; Jansen, K. E.; Antal, S. P.; Podowski, M. Z.

    2012-07-01

    The required technological and safety standards for future Gen IV Reactors can only be achieved if advanced simulation capabilities become available, which combine high performance computing with the necessary level of modeling detail and high accuracy of predictions. The purpose of this paper is to present new results of multi-scale three-dimensional (3D) simulations of the inter-related phenomena, which occur as a result of fuel element heat-up and cladding failure, including the injection of a jet of gaseous fission products into a partially blocked Sodium Fast Reactor (SFR) coolant channel, and gas/molten sodium transport along the coolant channels. The computational approach to the analysis of the overall accident scenario is based on using two different inter-communicating computational multiphase fluid dynamics (CMFD) codes: a CFD code, PHASTA, and a RANS code, NPHASE-CMFD. Using the geometry and time history of cladding failure and the gas injection rate, direct numerical simulations (DNS) of two-phase turbulent flow, combined with the Level Set method, have been performed by the PHASTA code. The model allows one to track the evolution of gas/liquid interfaces at a centimeter scale. The simulated phenomena include the formation and breakup of the jet of fission products injected into the liquid sodium coolant. The PHASTA outflow has been averaged over time to obtain mean phasic velocities and volumetric concentrations, as well as the liquid turbulent kinetic energy and turbulence dissipation rate, all of which have served as the input to the core-scale simulations using the NPHASE-CMFD code. A sliding window time averaging has been used to capture mean flow parameters for transient cases. The results presented in the paper include testing and validation of the proposed models, as well as the predictions of fission-gas/liquid-sodium transport along a multi-rod fuel assembly of SFR during a partial loss-of-flow accident. (authors)

  6. Persistence of airline accidents.

    PubMed

    Barros, Carlos Pestana; Faria, Joao Ricardo; Gil-Alana, Luis Alberiko

    2010-10-01

    This paper expands on air travel accident research by examining the relationship between air travel accidents and airline traffic or volume in the period from 1927 to 2006. The theoretical model is based on a representative airline company that aims to maximise its profits, and it utilises a fractional integration approach in order to determine whether there is a persistent pattern over time with respect to air accidents and air traffic. Furthermore, the paper analyses how airline accidents are related to traffic using a fractional cointegration approach. It finds that airline accidents are persistent and that a (non-stationary) fractional cointegration relationship exists between total airline accidents and airline passengers, airline miles and airline revenues, with shocks that affect the long-run equilibrium disappearing in the very long term. Moreover, this relation is negative, which might be due to the fact that air travel is becoming safer and there is greater competition in the airline industry. Policy implications are derived for countering accident events, based on competition and regulation.

  7. Failure behavior of internally pressurized flawed and unflawed steam generator tubing at high temperatures -- Experiments and comparison with model predictions

    SciTech Connect

    Majumdar, S.; Shack, W.J.; Diercks, D.R.; Mruk, K.; Franklin, J.; Knoblich, L.

    1998-03-01

    This report summarizes experimental work performed at Argonne National Laboratory on the failure of internally pressurized steam generator tubing at high temperatures (≤ 700°C). A model was developed for predicting failure of flawed and unflawed steam generator tubes under internal pressure and temperature histories postulated to occur during severe accidents. The model was validated by failure tests on specimens with part-through-wall axial and circumferential flaws of various lengths and depths, conducted under various constant and ramped internal pressure and temperature conditions. The failure temperatures predicted by the model for two temperature and pressure histories, calculated for severe accidents initiated by a station blackout, agree very well with tests performed on both flawed and unflawed specimens.

  8. Seasonal Predictability in a Model Atmosphere.

    NASA Astrophysics Data System (ADS)

    Lin, Hai

    2001-07-01

    The predictability of atmospheric mean-seasonal conditions in the absence of externally varying forcing is examined. A perfect-model approach is adopted, in which a global T21 three-level quasigeostrophic atmospheric model is integrated over 21 000 days to obtain a reference atmospheric orbit. The model is driven by a time-independent forcing, so that the only source of time variability is the internal dynamics. The forcing is set to perpetual winter conditions in the Northern Hemisphere (NH) and perpetual summer in the Southern Hemisphere. A significant temporal variability in the NH 90-day mean states is observed. The component of that variability associated with the higher-frequency motions, or climate noise, is estimated using a method developed by Madden. In the polar region, and to a lesser extent in the midlatitudes, the temporal variance of the winter means is significantly greater than the climate noise, suggesting some potential predictability in those regions. Forecast experiments are performed to see whether the presence of variance in the 90-day mean states that is in excess of the climate noise leads to some skill in the prediction of these states. Ensemble forecast experiments with nine members starting from slightly different initial conditions are performed for 200 different 90-day means along the reference atmospheric orbit. The serial correlation between the ensemble means and the reference orbit shows that there is skill in the 90-day mean predictions. The skill is concentrated in those regions of the NH that have the largest variance in excess of the climate noise. An EOF analysis shows that nearly all the predictive skill in the seasonal means is associated with one mode of variability with a strong axisymmetric component.

  9. A two-stage optimization model for emergency material reserve layout planning under uncertainty in response to environmental accidents.

    PubMed

    Liu, Jie; Guo, Liang; Jiang, Jiping; Jiang, Dexun; Liu, Rentao; Wang, Peng

    2016-06-01

    In emergency management for pollution accidents, the efficiency of emergency rescues can be strongly influenced by how the available emergency materials are assigned to the related risk sources. In this study, a two-stage optimization framework is developed for emergency material reserve layout planning under uncertainty, to identify material warehouse locations and emergency material reserve schemes in the pre-accident phase in preparation for potential environmental accidents. This framework integrates a hierarchical clustering analysis - improved center of gravity (HCA-ICG) model and a material warehouse location - emergency material allocation (MWL-EMA) model. First, decision alternatives are generated using HCA-ICG to identify newly built emergency material warehouses for risk sources that cannot be served by existing ones in a time-effective manner. Second, an emergency material reserve plan is obtained using MWL-EMA so that emergency materials are prepared in advance in a cost-effective manner. The optimization framework is then applied to emergency management system planning in Jiangsu province, China. The results demonstrate that the developed framework not only facilitates material warehouse selection but also effectively provides emergency materials for a quick response.
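    The first-stage screening idea, matching risk sources to warehouses and flagging the unserved ones, can be shown with a toy nearest-warehouse assignment. The names, coordinates, and distance cutoff below are hypothetical; the actual HCA-ICG and MWL-EMA formulations are considerably richer.

```python
def assign_sources(sources, warehouses, max_dist):
    """Map each risk source to its nearest existing warehouse; sources with
    no warehouse within `max_dist` are flagged as candidates for a newly
    built warehouse (a hypothetical simplification of the screening step)."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    assigned, uncovered = {}, []
    for name, loc in sources.items():
        best_name, best_loc = min(warehouses.items(),
                                  key=lambda w: dist(loc, w[1]))
        if dist(loc, best_loc) <= max_dist:
            assigned[name] = best_name
        else:
            uncovered.append(name)
    return assigned, uncovered
```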

  11. Urban daytime traffic noise prediction models.

    PubMed

    da Paz, Elaine Carvalho; Zannin, Paulo Henrique Trombetta

    2010-04-01

    An evaluation was made of the acoustic environment generated by an urban highway using in situ measurements. Based on the data collected, a mathematical model was designed for the main sound levels (Leq, L10, L50, and L90) as a function of the correlation between sound levels and between the equivalent sound pressure level and traffic variables. Four valid groups of mathematical models were generated to calculate daytime sound levels, which were statistically validated. It was found that the new models can be considered as accurate as other models presented in the literature to assess and predict daytime traffic noise, and that they stand out from the existing models described in the literature thanks to two characteristics, namely their linearity and the application of class intervals.
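    Linear daytime models of this family are often written in the classical form Leq = a + b·log10(Q) in traffic flow Q. The fit below uses invented observations, not the paper's measurements; with these numbers the slope lands near the familiar 10 dB per decade of flow.

```python
import numpy as np

# Hypothetical observations: hourly traffic flow Q (veh/h) and measured Leq (dB(A)).
Q = np.array([200.0, 500.0, 1000.0, 2000.0, 4000.0])
leq = np.array([62.1, 66.0, 69.2, 72.1, 75.3])

# Least-squares fit of Leq = a + b*log10(Q).
X = np.column_stack([np.ones_like(Q), np.log10(Q)])
(a, b), *_ = np.linalg.lstsq(X, leq, rcond=None)
```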

  12. Model atmospheres, predicted spectra, and colors

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Theoretical models of stellar atmospheres and the process of forming a spectrum are reviewed, with particular reference to the spectra of B stars. In the case of classical models, the stellar atmosphere is thought to consist of plane parallel layers of gas in which radiative and hydrostatic equilibrium exists. No radiative energy is lost or gained in the model atmosphere, but the detailed shape of the spectrum is changed as a result of the interactions with the ionized gas. Predicted line spectra using statistical equilibrium, local thermodynamic equilibrium (LTE), and non-LTE physics are compared, and the determination of abundances is discussed. The limitations of classical modeling are examined. Models developed to demonstrate what motions in the upper atmosphere will do to the spectrum and to explore the effects of using geometries different from plane parallel layers are reviewed. In particular, the problem of radiative transfer is addressed.

  13. A kinetic model for predicting biodegradation.

    PubMed

    Dimitrov, S; Pavlov, T; Nedelcheva, D; Reuschenbach, P; Silvani, M; Bias, R; Comber, M; Low, L; Lee, C; Parkerton, T; Mekenyan, O

    2007-01-01

    Biodegradation plays a key role in the environmental risk assessment of organic chemicals. The need to assess biodegradability of a chemical for regulatory purposes supports the development of a model for predicting the extent of biodegradation at different time frames, in particular the extent of ultimate biodegradation within a '10 day window' criterion as well as estimating biodegradation half-lives. Conceptually this implies expressing the rate of catabolic transformations as a function of time. An attempt to correlate the kinetics of biodegradation with molecular structure of chemicals is presented. A simplified biodegradation kinetic model was formulated by combining the probabilistic approach of the original formulation of the CATABOL model with the assumption of first order kinetics of catabolic transformations. Nonlinear regression analysis was used to fit the model parameters to OECD 301F biodegradation kinetic data for a set of 208 chemicals. The new model allows the prediction of biodegradation multi-pathways, primary and ultimate half-lives and simulation of related kinetic biodegradation parameters such as biological oxygen demand (BOD), carbon dioxide production, and the nature and amount of metabolites as a function of time. The model may also be used for evaluating the OECD ready biodegradability potential of a chemical within the '10-day window' criterion.
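    The first-order kinetic assumption at the core of the model can be written down directly: if transformation proceeds at rate k, the transformed fraction is E(t) = 1 - exp(-k·t) and the half-life is ln 2 / k. A minimal sketch; the function names are illustrative and not part of CATABOL.

```python
import math

def extent_degraded(t_days, k):
    """Fraction of ultimately degradable material transformed after t days,
    assuming first-order kinetics: E(t) = 1 - exp(-k*t), with k in 1/day."""
    return 1.0 - math.exp(-k * t_days)

def half_life(k):
    """Half-life in days for first-order rate constant k (1/day)."""
    return math.log(2) / k

# Example: k = 0.1/day gives a half-life near 6.9 days and roughly 63%
# transformation within a 10-day window.
```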

  14. Validated predictive modelling of the environmental resistome.

    PubMed

    Amos, Gregory C A; Gozzard, Emma; Carter, Charlotte E; Mead, Andrew; Bowes, Mike J; Hawkey, Peter M; Zhang, Lihong; Singer, Andrew C; Gaze, William H; Wellington, Elizabeth M H

    2015-06-01

    Multi-drug-resistant bacteria pose a significant threat to public health. The role of the environment in the overall rise in antibiotic-resistant infections and risk to humans is largely unknown. This study aimed to evaluate drivers of antibiotic-resistance levels across the River Thames catchment, model key biotic, spatial and chemical variables and produce predictive models for future risk assessment. Sediment samples from 13 sites across the River Thames basin were taken at four time points across 2011 and 2012. Samples were analysed for class 1 integron prevalence and enumeration of third-generation cephalosporin-resistant bacteria. Class 1 integron prevalence was validated as a molecular marker of antibiotic resistance; levels of resistance showed significant geospatial and temporal variation. The main explanatory variables of resistance levels at each sample site were the number, proximity, size and type of surrounding wastewater-treatment plants. Model 1 revealed treatment plants accounted for 49.5% of the variance in resistance levels. Other contributing factors were extent of different surrounding land cover types (for example, Neutral Grassland), temporal patterns and prior rainfall; when modelling all variables the resulting model (Model 2) could explain 82.9% of variations in resistance levels in the whole catchment. Chemical analyses correlated with key indicators of treatment plant effluent and a model (Model 3) was generated based on water quality parameters (contaminant and macro- and micro-nutrient levels). Model 2 was beta tested on independent sites and explained over 78% of the variation in integron prevalence showing a significant predictive ability. We believe all models in this study are highly useful tools for informing and prioritising mitigation strategies to reduce the environmental resistome. PMID:25679532

  15. Disease Prediction Models and Operational Readiness

    SciTech Connect

    Corley, Courtney D.; Pullum, Laura L.; Hartley, David M.; Benedum, Corey M.; Noonan, Christine F.; Rabinowitz, Peter M.; Lancaster, Mary J.

    2014-03-19

    INTRODUCTION: The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. One of the primary goals of this research was to characterize the viability of biosurveillance models to provide operationally relevant information for decision makers and to identify areas for future research. Two critical characteristics differentiate this work from other infectious disease modeling reviews. First, we reviewed models that attempted to predict the disease event, not merely its transmission dynamics. Second, we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). METHODS: We searched dozens of commercial and government databases and harvested Google search results for eligible models, utilizing terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche-modeling. The publication dates of the returned search results are bounded by the dates of coverage of each database and the date on which the search was performed; however, all searching was completed by December 31, 2010. This returned 13,767 webpages and 12,152 citations. After de-duplication and removal of extraneous material, a core collection of 6,503 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. Next, PNNL's IN-SPIRE visual analytics software was used to cross-correlate these publications with the definition of a biosurveillance model, resulting in the selection of 54 documents that matched the criteria. Ten of these documents, however, dealt purely with disease spread models, inactivation of bacteria, or the modeling of human immune system responses to pathogens rather than predicting disease events. As a result, we systematically reviewed 44 papers and the

  17. Radiotherapy Accidents

    NASA Astrophysics Data System (ADS)

    Mckenzie, Alan

    A major benefit of a Quality Assurance system in a radiotherapy centre is that it reduces the likelihood of an accident. For over 20 years I have been the interface in the UK between the Institute of Physics and Engineering in Medicine and the media — newspapers, radio and TV — and so I have learned about radiotherapy accidents from personal experience. In some cases, these accidents did not become public and so the hospital cannot be identified. Nevertheless, lessons are still being learned.

  18. Probabilistic prediction models for aggregate quarry siting

    USGS Publications Warehouse

    Robinson, G.R.; Larkins, P.M.

    2007-01-01

    Weights-of-evidence (WofE) and logistic regression techniques were used in a GIS framework to predict the spatial likelihood (prospectivity) of crushed-stone aggregate quarry development. The joint conditional probability models, based on geology, transportation network, and population density variables, were defined using quarry location and time of development data for the New England States, North Carolina, and South Carolina, USA. The Quarry Operation models describe the distribution of active aggregate quarries, independent of the date of opening. The New Quarry models describe the distribution of aggregate quarries when they open. Because of the small number of new quarries developed in the study areas during the last decade, independent New Quarry models have low parameter estimate reliability. The performance of parameter estimates derived for Quarry Operation models, defined by a larger number of active quarries in the study areas, were tested and evaluated to predict the spatial likelihood of new quarry development. Population density conditions at the time of new quarry development were used to modify the population density variable in the Quarry Operation models to apply to new quarry development sites. The Quarry Operation parameters derived for the New England study area, Carolina study area, and the combined New England and Carolina study areas were all similar in magnitude and relative strength. The Quarry Operation model parameters, using the modified population density variables, were found to be a good predictor of new quarry locations. Both the aggregate industry and the land management community can use the model approach to target areas for more detailed site evaluation for quarry location. The models can be revised easily to reflect actual or anticipated changes in transportation and population features. © International Association for Mathematical Geology 2007.
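    The weights-of-evidence step behind such models reduces, for each binary predictor layer, to log-ratios of conditional probabilities: W+ for presence of the layer, W- for absence, and the contrast C = W+ - W- as a summary of predictive strength. The quarry and cell counts below are invented for illustration.

```python
import math

def wofe_weights(n_quarry_in, n_quarry_out, n_cells_in, n_cells_out):
    """Weights of evidence for one binary predictor layer.

    n_quarry_in/out: training points (quarries) inside/outside the layer;
    n_cells_in/out:  unit cells inside/outside the layer.
    W+ = ln[P(layer|quarry) / P(layer|no quarry)]; W- likewise for absence;
    the contrast C = W+ - W- summarizes the layer's predictive strength.
    """
    p_in_q = n_quarry_in / (n_quarry_in + n_quarry_out)
    p_in_nq = (n_cells_in - n_quarry_in) / (
        n_cells_in + n_cells_out - n_quarry_in - n_quarry_out)
    w_plus = math.log(p_in_q / p_in_nq)
    w_minus = math.log((1 - p_in_q) / (1 - p_in_nq))
    return w_plus, w_minus, w_plus - w_minus
```

A positive W+ (and negative W-) indicates the layer is spatially associated with quarry development; summing the weights of all layers gives the posterior log-odds used for the prospectivity map.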

  19. Predictive Modeling of the CDRA 4BMS

    NASA Technical Reports Server (NTRS)

    Coker, Robert F.; Knox, James C.

    2016-01-01

    As part of NASA's Advanced Exploration Systems (AES) program and the Life Support Systems Project (LSSP), fully predictive models of the Four Bed Molecular Sieve (4BMS) of the Carbon Dioxide Removal Assembly (CDRA) on the International Space Station (ISS) are being developed. This virtual laboratory will be used to help reduce mass, power, and volume requirements for future missions. In this paper we describe current and planned modeling developments in the area of carbon dioxide removal to support future crewed Mars missions as well as the resolution of anomalies observed in the ISS CDRA.

  20. Computer Model Predicts the Movement of Dust

    NASA Technical Reports Server (NTRS)

    2002-01-01

    A new computer model of the atmosphere can now pinpoint where global dust events come from, and can project where they are going. The model may help scientists better evaluate the impact of dust on human health, climate, ocean carbon cycles, ecosystems, and atmospheric chemistry. Also, by seeing where dust originates and where it blows, people with respiratory problems can get advance warning of approaching dust clouds. 'The model is physically more realistic than previous ones,' said Mian Chin, a co-author of the study and an Earth and atmospheric scientist at Georgia Tech and the Goddard Space Flight Center (GSFC) in Greenbelt, Md. 'It is able to reproduce the short term day-to-day variations and long term inter-annual variations of dust concentrations and distributions that are measured from field experiments and observed from satellites.' The above images show both aerosols measured from space (left) and the movement of aerosols predicted by computer model for the same date (right). For more information, read New Computer Model Tracks and Predicts Paths Of Earth's Dust. Images courtesy Paul Giroux, Georgia Tech/NASA Goddard Space Flight Center.

  1. A predictive geologic model of radon occurrence

    SciTech Connect

    Gregg, L.T.

    1990-01-01

    Earlier work by LeGrand on predictive geologic models for radon focused on hydrogeologic aspects of radon transport from a given uranium/radium source in a fractured crystalline rock aquifer, and included submodels for bedrock lithology (uranium concentration), topographic slope, and water-table behavior and characteristics. LeGrand's basic geologic model has been modified and extended into a submodel for crystalline rocks (Blue Ridge and Piedmont Provinces) and a submodel for sedimentary rocks (Valley and Ridge and Coastal Plain Provinces). Each submodel assigns a ranking of 1 to 15 to the bedrock type, based on (a) known or supposed uranium/thorium content, (b) petrography/lithology, and (c) structural features such as faults, shear or breccia zones, diabase dikes, and jointing/fracturing. The bedrock ranking is coupled with a generalized soil/saprolite model which ranks soil/saprolite type and thickness from 1 to 10. A given site is thus assessed a ranking of 1 to 150 as a guide to its potential for high radon occurrence in the upper meter or so of soil. Field trials of the model are underway, comparing model predictions with measured soil-gas concentrations of radon.

  2. Constructing predictive models of human running.

    PubMed

    Maus, Horst-Moritz; Revzen, Shai; Guckenheimer, John; Ludwig, Christian; Reger, Johann; Seyfarth, Andre

    2015-02-01

    Running is an essential mode of human locomotion, during which ballistic aerial phases alternate with phases when a single foot contacts the ground. The spring-loaded inverted pendulum (SLIP) provides a starting point for modelling running, and generates ground reaction forces that resemble those of the centre of mass (CoM) of a human runner. Here, we show that while SLIP reproduces within-step kinematics of the CoM in three dimensions, it fails to reproduce stability and predict future motions. We construct SLIP control models using data-driven Floquet analysis, and show how these models may be used to obtain predictive models of human running with six additional states comprising the position and velocity of the swing-leg ankle. Our methods are general, and may be applied to any rhythmic physical system. We provide an approach for identifying an event-driven linear controller that approximates an observed stabilization strategy, and for producing a reduced-state model which closely recovers the observed dynamics.

  3. Predictive Computational Modeling of Chromatin Folding

    NASA Astrophysics Data System (ADS)

    di Pierro, Michele; Zhang, Bin; Wolynes, Peter J.; Onuchic, Jose N.

    In vivo, the human genome folds into well-determined and conserved three-dimensional structures. The mechanism driving the folding process remains unknown. We report a theoretical model (MiChroM) for chromatin derived by using the maximum entropy principle. The proposed model allows Molecular Dynamics simulations of the genome using as input the classification of loci into chromatin types and the presence of binding sites of loop forming protein CTCF. The model was trained to reproduce the Hi-C map of chromosome 10 of human lymphoblastoid cells. With no additional tuning the model was able to predict accurately the Hi-C maps of chromosomes 1-22 for the same cell line. Simulations show unknotted chromosomes, phase separation of chromatin types and a preference of chromatin of type A to sit at the periphery of the chromosomes.

  4. Progress towards a PETN Lifetime Prediction Model

    SciTech Connect

    Burnham, A K; Overturf III, G E; Gee, R; Lewis, P; Qiu, R; Phillips, D; Weeks, B; Pitchimani, R; Maiti, A; Zepeda-Ruiz, L; Hrousis, C

    2006-09-11

    Dinegar (1) showed that decreases in PETN surface area cause EBW detonator function times to increase. Thermal aging causes PETN to agglomerate, shrink, and densify, indicating a 'sintering' process. It has long been a concern that the formation of a gap between the PETN and the bridgewire may lead to EBW detonator failure. These concerns have led us to develop a model to predict the rate of coarsening that occurs with age for thermally driven PETN powder (50% TMD). To understand PETN contributions to detonator aging we need three things: (1) curves describing function time dependence on specific surface area, density, and gap; (2) a measurement of the critical gap distance for no fire as a function of density and surface area for various wire configurations; and (3) a model describing how specific surface area, density, and gap change with time and temperature. We have had good success modeling high-temperature surface area reduction and function time increase using a phenomenological deceleratory kinetic model based on a distribution of parallel nth-order reactions having evenly spaced activation energies, where the weighting factors of the reactions follow a Gaussian distribution about the reaction with the mean activation energy (Figure 1). Unfortunately, the mean activation energy derived from this approach is high (typically ~75 kcal/mol), so that negligible sintering is predicted for temperatures below 40 °C. To make more reliable predictions, we have established a three-part effort to understand PETN mobility. First, we have measured the rates of step movement and pit nucleation as a function of temperature from 30 to 50 °C for single crystals. Second, we have measured the evaporation rate from single crystals and powders from 105 to 135 °C to obtain an activation energy for evaporation. Third, we have pursued mechanistic kinetic modeling of surface mobility, evaporation, and ripening.
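    The distributed-activation-energy scheme described above can be sketched as follows. For simplicity the parallel reactions are taken as first order, and the pre-exponential factor and distribution width are hypothetical illustration values, not the fitted PETN parameters.

```python
import math

R = 1.987e-3  # gas constant, kcal/(mol K)

def gaussian_weights(e_mean, sigma, n=11, spacing=1.0):
    """Evenly spaced activation energies with normalized Gaussian weights."""
    energies = [e_mean + (i - n // 2) * spacing for i in range(n)]
    w = [math.exp(-0.5 * ((e - e_mean) / sigma) ** 2) for e in energies]
    s = sum(w)
    return energies, [wi / s for wi in w]

def fraction_remaining(t_hours, temp_K, e_mean=75.0, sigma=2.0, logA=40.0):
    """Fraction of sinterable surface area remaining after isothermal aging,
    summing parallel first-order reactions (a simplification of the
    nth-order scheme; A and sigma are illustrative, not fitted values)."""
    energies, weights = gaussian_weights(e_mean, sigma)
    total = 0.0
    for e, w in zip(energies, weights):
        k = 10 ** logA * math.exp(-e / (R * temp_K))  # rate constant, 1/h
        total += w * math.exp(-k * t_hours)
    return total

# With a ~75 kcal/mol mean barrier, predicted aging is negligible at 40 C
# (313 K) but substantial at elevated temperature:
print(fraction_remaining(1000.0, 313.15))
print(fraction_remaining(1000.0, 393.15))
```

    The high mean activation energy is what makes the extrapolated low-temperature rate so small, which is the motivation the abstract gives for the mechanistic follow-up work.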

  5. Predictability of extreme values in geophysical models

    NASA Astrophysics Data System (ADS)

    Sterk, Alef; Holland, Mark; Rabassa, Pau; Broer, Henk; Vitolo, Renato

    2014-05-01

    Classical extreme value theory studies the occurrence of unlikely large events. Extreme value theory was originally developed for time series of near-independent random variables, but in the last decade the theory has been extended to the setting of chaotic, deterministic dynamical systems. In the latter context one studies the distribution of large values in a time series generated by evaluating a scalar observable along evolutions of the system. We have studied the finite-time predictability of extreme values, such as convection, energy, and wind speeds, in three geophysical models. To that end we computed finite-time Lyapunov exponents (FTLEs) which measure the exponential growth rate of nearby trajectories over a finite time. In general, FTLEs strongly depend on the initial condition. We study whether initial conditions leading to extremes typically have a larger or smaller FTLE. Our study clearly suggests that general statements about the predictability of extreme values are not possible: the predictability of extreme values depends on (1) the observable, (2) the attractor of the system, and (3) the prediction lead time.
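    A minimal sketch of an FTLE computation, using the Lorenz-63 system as a stand-in for the geophysical models in the abstract: the exponent is estimated from the growth of a small perturbation over a finite horizon, so it depends on the chosen initial condition.

```python
import math

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Lorenz-63 vector field."""
    x, y, z = s
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(s, dt):
    """One fourth-order Runge-Kutta step."""
    def add(a, b, c):
        return tuple(ai + c * bi for ai, bi in zip(a, b))
    k1 = lorenz(s)
    k2 = lorenz(add(s, k1, dt / 2))
    k3 = lorenz(add(s, k2, dt / 2))
    k4 = lorenz(add(s, k3, dt))
    return tuple(si + dt / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

def ftle(s0, T=5.0, dt=0.01, eps=1e-8):
    """Finite-time Lyapunov exponent from the growth of a small perturbation."""
    a = s0
    b = (s0[0] + eps, s0[1], s0[2])
    for _ in range(int(T / dt)):
        a, b = rk4_step(a, dt), rk4_step(b, dt)
    d = math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return math.log(d / eps) / T

# On the Lorenz attractor FTLEs are typically positive (chaotic divergence):
print(ftle((1.0, 1.0, 20.0)))
```

    Repeating this over an ensemble of initial conditions, and comparing FTLEs of initial conditions that do and do not lead to extremes, is the kind of analysis the abstract describes.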

  6. Radiation risk models for all solid cancers other than those types of cancer requiring individual assessments after a nuclear accident.

    PubMed

    Walsh, Linda; Zhang, Wei

    2016-03-01

    In the assessment of health risks after nuclear accidents, some health consequences require special attention. For example, in their 2013 report on health risk assessment after the Fukushima nuclear accident, the World Health Organisation (WHO) panel of experts considered risks of breast cancer, thyroid cancer and leukaemia. For these specific cancer types, use was made of already published excess relative risk (ERR) and excess absolute risk (EAR) models for radiation-related cancer incidence fitted to the epidemiological data from the Japanese A-bomb Life Span Study (LSS). However, it was also considered important to assess all other types of solid cancer together and the WHO, in their above-mentioned report, stated "No model to calculate the risk for all other solid cancer excluding breast and thyroid cancer risks is available from the LSS data". Applying the LSS models for all solid cancers along with the models for the specific sites means that some cancers have an overlap in the risk evaluations. Thus, calculating the total solid cancer risk plus the breast cancer risk plus the thyroid cancer risk can overestimate the total risk by several per cent. Therefore, the purpose of this paper was to publish the required models for all other solid cancers, i.e. all solid cancers other than those types of cancer requiring special attention after a nuclear accident. The new models presented here have been fitted to the same LSS data set from which the risks provided by the WHO were derived. Although it is known already that the EAR and ERR effect modifications by sex are statistically significant for the outcome "all solid cancer", it is shown here that sex modification is not statistically significant for the outcome "all solid cancer other than thyroid and breast cancer". It is also shown here that the sex-averaged solid cancer risks with and without the sex modification are very similar once breast and thyroid cancers are factored out. Some other notable model

  8. Predictive model of radiative neutrino masses

    NASA Astrophysics Data System (ADS)

    Babu, K. S.; Julio, J.

    2014-03-01

    We present a simple and predictive model of radiative neutrino masses. It is a special case of the Zee model which introduces two Higgs doublets and a charged singlet. We impose a family-dependent Z4 symmetry acting on the leptons, which reduces the number of parameters describing neutrino oscillations to four. A variety of predictions follow: the hierarchy of neutrino masses must be inverted; the lightest neutrino mass is extremely small and calculable; one of the neutrino mixing angles is determined in terms of the other two; the phase parameters take CP-conserving values with δ_CP = π; and the effective mass in neutrinoless double beta decay lies in a narrow range, m_ββ = (17.6-18.5) meV. The ratio of vacuum expectation values of the two Higgs doublets, tan β, is determined to be either 1.9 or 0.19 from neutrino oscillation data. Flavor-conserving and flavor-changing couplings of the Higgs doublets are also determined from neutrino data. The nonstandard neutral Higgs bosons, if they are moderately heavy, would decay dominantly into μ and τ with prescribed branching ratios. Observable rates for the decays μ → eγ and τ → 3μ are predicted if these scalars have masses in the range of 150-500 GeV.

  9. Thermal barrier coating life prediction model development

    NASA Technical Reports Server (NTRS)

    Demasi, J. T.

    1986-01-01

    A methodology is established to predict thermal barrier coating life in an environment similar to that experienced by gas turbine airfoils. Experiments were conducted to determine failure modes of the thermal barrier coating. Analytical studies were employed to derive a life prediction model. A review of experimental and flight service components as well as laboratory post-test evaluations indicates that the predominant mode of TBC failure involves thermomechanical spallation of the ceramic coating layer. This ceramic spallation involves the formation of a dominant crack in the ceramic coating parallel to and closely adjacent to the topologically complex metal-ceramic interface. This mechanical failure mode is clearly influenced by thermal exposure effects, as shown in experiments conducted to study thermal pre-exposure and thermal cycle-rate effects. The preliminary life prediction model developed focuses on the two major damage modes identified in the critical experiments tasks. The first of these involves a mechanical driving force, resulting from cyclic strains and stresses caused by thermally induced and externally imposed mechanical loads. The second is an environmental driving force based on experimental results, and is believed to be related to bond coat oxidation. It is also believed that the growth of this oxide scale influences the intensity of the mechanical driving force.

  10. Enzyme function prediction with interpretable models.

    PubMed

    Syed, Umar; Yona, Golan

    2009-01-01

    Enzymes play central roles in metabolic pathways, and the prediction of metabolic pathways in newly sequenced genomes usually starts with the assignment of genes to enzymatic reactions. However, genes with similar catalytic activity are not necessarily similar in sequence, and therefore the traditional sequence similarity-based approach often fails to identify the relevant enzymes, thus hindering efforts to map the metabolome of an organism. Here we study the direct relationship between basic protein properties and their function. Our goal is to develop a new tool for functional prediction (e.g., prediction of Enzyme Commission number), which can be used to complement and support other techniques based on sequence or structure information. In order to define this mapping we collected a set of 453 features and properties that characterize proteins and are believed to be related to structural and functional aspects of proteins. We introduce a mixture model of stochastic decision trees to learn the set of potentially complex relationships between features and function. To study these correlations, trees are created and tested on the Pfam classification of proteins, which is based on sequence, and the EC classification, which is based on enzymatic function. The model is very effective in learning highly diverged protein families or families that are not defined on the basis of sequence. The resulting tree structures highlight the properties that are strongly correlated with structural and functional aspects of protein families, and can be used to suggest a concise definition of a protein family.

  11. A prediction model for Clostridium difficile recurrence

    PubMed Central

    LaBarbera, Francis D.; Nikiforov, Ivan; Parvathenani, Arvin; Pramil, Varsha; Gorrepati, Subhash

    2015-01-01

    Background Clostridium difficile infection (CDI) is a growing problem in the community and hospital setting. Its incidence has been on the rise over the past two decades, and it is quickly becoming a major concern for the health care system. A high rate of recurrence is one of the major hurdles in the successful treatment of C. difficile infection. There have been few studies that have looked at patterns of recurrence. The studies currently available have shown a number of risk factors associated with C. difficile recurrence (CDR); however, there is little consensus on the impact of most of the identified risk factors. Methods Our study was a retrospective chart review of 198 patients diagnosed with CDI via polymerase chain reaction (PCR) from January 2009 to June 2013. In our study, we decided to use a machine learning algorithm called the Random Forest (RF) to analyze all of the factors proposed to be associated with CDR. This model is capable of making predictions based on a large number of variables, and has outperformed numerous other models and statistical methods. Results We came up with a model that was able to accurately predict CDR with a sensitivity of 83.3%, specificity of 63.1%, and area under the curve of 82.6%. Like other similar studies that have used the RF model, we also had very impressive results. Conclusions We hope that in the future, machine learning algorithms, such as the RF, will see a wider application. PMID:25656667
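    The RF workflow can be sketched as below, assuming scikit-learn is available; the predictors and synthetic outcome are illustrative stand-ins, not the study's cohort data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
# Hypothetical predictors: age, proton-pump-inhibitor use, antibiotic score
age = rng.normal(65, 10, n)
ppi = rng.integers(0, 2, n)
abx = rng.normal(0, 1, n)
# Synthetic recurrence outcome with signal from all three predictors
logit = 0.06 * (age - 65) + 0.8 * ppi + 1.0 * abx
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)
X = np.column_stack([age, ppi, abx])

# Fit on a training split, evaluate discrimination on held-out patients
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
print(f"held-out AUC: {auc:.2f}")
```

    Sensitivity and specificity like those reported in the abstract follow from thresholding `predict_proba` and tabulating against the held-out labels.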

  12. Modeling and Prediction of Krueger Device Noise

    NASA Technical Reports Server (NTRS)

    Guo, Yueping; Burley, Casey L.; Thomas, Russell H.

    2016-01-01

    This paper presents the development of a noise prediction model for aircraft Krueger flap devices that are considered as alternatives to leading edge slotted slats. The prediction model decomposes the total Krueger noise into four components, generated by the unsteady flows, respectively, in the cove under the pressure side surface of the Krueger, in the gap between the Krueger trailing edge and the main wing, around the brackets supporting the Krueger device, and around the cavity on the lower side of the main wing. For each noise component, the modeling follows a physics-based approach that aims at capturing the dominant noise-generating features in the flow and developing correlations between the noise and the flow parameters that control the noise generation processes. The far field noise is modeled using each of the four noise components' respective spectral functions, far field directivities, Mach number dependencies, component amplitudes, and other parametric trends. Preliminary validations are carried out by using small scale experimental data, and two applications are discussed: one for conventional aircraft and the other for advanced configurations. The former focuses on the parametric trends of Krueger noise on design parameters, while the latter reveals its importance in relation to other airframe noise components.

  13. Ground Motion Prediction Models for Caucasus Region

    NASA Astrophysics Data System (ADS)

    Jorjiashvili, Nato; Godoladze, Tea; Tvaradze, Nino; Tumanova, Nino

    2016-04-01

    Ground motion prediction models (GMPMs) relate ground motion intensity measures to variables describing earthquake source, path, and site effects. Estimation of expected ground motion is a fundamental component of earthquake hazard assessment. The most commonly used parameter for an attenuation relation is peak ground acceleration or spectral acceleration, because this parameter gives useful information for seismic hazard assessment. Development of the Georgian Digital Seismic Network began in 2003. In this study, new GMPMs are obtained based on new data from the Georgian seismic network and also from neighboring countries. The models are estimated in the classical statistical way, by regression analysis. Site ground conditions are additionally considered, because the same earthquake recorded at the same distance may cause different damage depending on ground conditions. Empirical GMPMs require adjustment to make them appropriate for site-specific scenarios; however, the process of making such adjustments remains a challenge. This work presents a holistic framework for the development of a peak ground acceleration (PGA) or spectral acceleration (SA) model that is easily adjustable to different seismological conditions and does not suffer from the practical problems associated with adjustments in the response spectral domain.
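    The regression step can be sketched with a common simple attenuation form, ln(PGA) = a + b·M + c·ln(R); the coefficients and data below are synthetic illustrations, not the fitted Caucasus model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
M = rng.uniform(4.0, 7.0, n)       # magnitude
R = rng.uniform(10.0, 200.0, n)    # source distance, km
# Synthetic records generated from hypothetical "true" coefficients
true_a, true_b, true_c = -3.5, 1.2, -1.6
ln_pga = true_a + true_b * M + true_c * np.log(R) + rng.normal(0, 0.3, n)

# Ordinary least squares for ln(PGA) = a + b*M + c*ln(R)
A = np.column_stack([np.ones(n), M, np.log(R)])
coef, *_ = np.linalg.lstsq(A, ln_pga, rcond=None)
a, b, c = coef
print(a, b, c)  # should roughly recover (-3.5, 1.2, -1.6)
```

    Site-condition terms are added in practice as extra columns (e.g., dummy variables per site class) in the same least-squares design matrix.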

  14. Thermal barrier coating life prediction model

    NASA Technical Reports Server (NTRS)

    Hillery, R. V.; Pilsner, B. H.; Cook, T. S.; Kim, K. S.

    1986-01-01

    This is the second annual report of the first 3-year phase of a 2-phase, 5-year program. The objectives of the first phase are to determine the predominant modes of degradation of a plasma-sprayed thermal barrier coating system and to develop and verify life prediction models accounting for these degradation modes. The primary TBC system consists of an air plasma sprayed ZrO2-Y2O3 top coat, a low pressure plasma sprayed NiCrAlY bond coat, and a René 80 substrate. Task I was to evaluate TBC failure mechanisms. Both bond coat oxidation and bond coat creep have been identified as contributors to TBC failure. Key property determinations have also been made for the bond coat and the top coat, including tensile strength, Poisson's ratio, dynamic modulus, and coefficient of thermal expansion. Task II is to develop TBC life prediction models for the predominant failure modes. These models will be developed based on the results of thermomechanical experiments and finite element analysis. The thermomechanical experiments have been defined and testing initiated. Finite element models have also been developed to handle TBCs and are being utilized to evaluate different TBC failure regimes.

  15. Clinical Predictive Modeling Development and Deployment through FHIR Web Services

    PubMed Central

    Khalilia, Mohammed; Choi, Myung; Henderson, Amelia; Iyengar, Sneha; Braunstein, Mark; Sun, Jimeng

    2015-01-01

    Clinical predictive modeling involves two challenging tasks: model development and model deployment. In this paper we demonstrate a software architecture for developing and deploying clinical predictive models using web services via the Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) standard. The services enable model development using electronic health records (EHRs) stored in OMOP CDM databases and model deployment for scoring individual patients through FHIR resources. The MIMIC2 ICU dataset and a synthetic outpatient dataset were transformed into OMOP CDM databases for predictive model development. The resulting predictive models are deployed as FHIR resources, which receive requests of patient information, perform prediction against the deployed predictive model and respond with prediction scores. To assess the practicality of this approach we evaluated the response and prediction time of the FHIR modeling web services. We found the system to be reasonably fast, with a total response time of about one second per patient prediction. PMID:26958207

  16. Predictive QSAR modeling workflow, model applicability domains, and virtual screening.

    PubMed

    Tropsha, Alexander; Golbraikh, Alexander

    2007-01-01

    Quantitative Structure Activity Relationship (QSAR) modeling has been traditionally applied as an evaluative approach, i.e., with the focus on developing retrospective and explanatory models of existing data. Model extrapolation was considered, if at all, only in a hypothetical sense, in terms of potential modifications of known biologically active chemicals that could improve compounds' activity. This critical review re-examines the strategy and the output of the modern QSAR modeling approaches. We provide examples and arguments suggesting that current methodologies may afford robust and validated models capable of accurate prediction of compound properties for molecules not included in the training sets. We discuss a data-analytical modeling workflow developed in our laboratory that incorporates modules for combinatorial QSAR model development (i.e., using all possible binary combinations of available descriptor sets and statistical data modeling techniques), rigorous model validation, and virtual screening of available chemical databases to identify novel biologically active compounds. Our approach places particular emphasis on model validation as well as the need to define model applicability domains in the chemistry space. We present examples of studies where the application of rigorously validated QSAR models to virtual screening identified computational hits that were confirmed by subsequent experimental investigations. The emerging focus of QSAR modeling on target property forecasting brings it forward as a predictive, as opposed to evaluative, modeling approach.
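    One common way to define a model applicability domain, in the spirit of the distance-based approach the authors advocate, is a nearest-neighbor distance cutoff in descriptor space; the toy descriptors and cutoff multiplier below are illustrative assumptions.

```python
import math

def applicability_domain(train, z=0.5):
    """Distance-based applicability domain: a query compound is inside the
    domain if its distance to the nearest training compound does not exceed
    Dc = mean + z * std of nearest-neighbor distances within the training set."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    nn = [min(dist(x, y) for j, y in enumerate(train) if j != i)
          for i, x in enumerate(train)]
    mean = sum(nn) / len(nn)
    std = math.sqrt(sum((d - mean) ** 2 for d in nn) / len(nn))
    cutoff = mean + z * std
    return lambda query: min(dist(query, x) for x in train) <= cutoff

# Toy 2-D descriptor space with four training compounds
train = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
in_domain = applicability_domain(train)
print(in_domain([0.5, 0.5]))  # inside the training region
print(in_domain([5.0, 5.0]))  # far outside the training region
```

    Predictions for queries outside the domain would be flagged as unreliable rather than reported, which is the validation discipline the review emphasizes.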

  17. Decadal prediction with a high resolution model

    NASA Astrophysics Data System (ADS)

    Monerie, Paul-Arthur; Valcke, Sophie; Terray, Laurent; Moine, Marie-Pierre

    2016-04-01

    The ability of a high resolution coupled atmosphere-ocean general circulation model (with a horizontal resolution of a quarter degree in the ocean and of about 50 km in the atmosphere) to predict annual means of temperature, precipitation, and sea-ice volume and extent is assessed. Reasonable skill in predicting sea surface temperature and surface air temperature is obtained, especially over the North Atlantic, the tropical Atlantic and the Indian Ocean. The skill in predicting precipitation is weaker and not significant. Sea-ice extent and volume are also reasonably well predicted in winter (March) and summer (September). It is, however, argued that this skill is mainly due to the atmospheric response to well-mixed greenhouse gases. The mid-1990s subpolar gyre warming is assessed. The model simulates a warming of the North Atlantic Ocean, associated with an increase of the meridional heat transport, a strengthening of the North Atlantic Current and a deepening of the mixed layer over the Labrador Sea. The atmosphere plays a role in the warming through a modulation of the North Atlantic Oscillation and a shrinking of the subpolar gyre. At 3-8 year lead times, a negative pressure anomaly located south of the subpolar gyre is associated with a wind speed decrease over the subpolar gyre. It prevents oceanic heat loss and favors the northward movement, from the subtropical to the subpolar gyre, of anomalously warm and salty water, leading to its warming. We finally argue that the subpolar gyre warming is triggered by ocean dynamics but that the atmosphere can contribute to sustaining it. This work is realised in the framework of the EU FP7 SPECS Project.

  18. Lagrangian predictability characteristics of an Ocean Model

    NASA Astrophysics Data System (ADS)

    Lacorata, Guglielmo; Palatella, Luigi; Santoleri, Rosalia

    2014-11-01

    The Mediterranean Forecasting System (MFS) Ocean Model, provided by INGV, has been chosen as a case study to analyze Lagrangian trajectory predictability by means of a dynamical systems approach. In this regard, numerical trajectories are tested against a large amount of Mediterranean drifter data, used as a sample of the actual tracer dynamics across the sea. The separation rate of a trajectory pair is measured by computing the Finite-Scale Lyapunov Exponent (FSLE) of first and second kind. An additional kinematic Lagrangian model (KLM), suitably treated to avoid "sweeping"-related problems, has been nested into the MFS in order to recover, in a statistical sense, the velocity field contributions to particle pair dispersion, at mesoscale level, smoothed out by finite resolution effects. Some of the results emerging from this work are: (a) drifter pair dispersion displays Richardson's turbulent diffusion inside the [10-100] km range, while numerical simulations of MFS alone (i.e., without subgrid model) indicate exponential separation; (b) adding the subgrid model, model pair dispersion gets very close to observed data, indicating that KLM is effective in filling the energy "mesoscale gap" present in MFS velocity fields; (c) there exists a threshold size beyond which pair dispersion becomes weakly sensitive to the difference between model and "real" dynamics; (d) the whole methodology here presented can be used to quantify model errors and validate numerical current fields, as far as forecasts of Lagrangian dispersion are concerned.
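    A minimal sketch of an FSLE estimator of the first kind: for each scale δ, measure the mean time τ(δ) for pair separation to grow by a fixed factor r, and set λ(δ) = ln(r)/⟨τ(δ)⟩. The synthetic pairs below separate exponentially at a known rate, which the estimator should recover.

```python
import math

def fsle(sep_series, dt, scales, r=math.sqrt(2)):
    """Finite-Scale Lyapunov Exponent: lambda(d) = ln(r) / <tau(d)>, where
    tau(d) is the time for a pair separation to grow from d to r*d."""
    lam = {}
    for d in scales:
        taus = []
        for sep in sep_series:  # one separation time series per pair
            t_start = t_end = None
            for i, s in enumerate(sep):
                if t_start is None and s >= d:
                    t_start = i * dt
                if t_start is not None and s >= r * d:
                    t_end = i * dt
                    break
            if t_start is not None and t_end is not None and t_end > t_start:
                taus.append(t_end - t_start)
        if taus:
            lam[d] = math.log(r) / (sum(taus) / len(taus))
    return lam

# Synthetic pairs separating exponentially at 0.5/day: the FSLE should be
# roughly scale-independent and close to 0.5 at small scales
dt = 0.01  # days
sep_series = [[1e-3 * math.exp(0.5 * i * dt) for i in range(3000)]
              for _ in range(5)]
print(fsle(sep_series, dt, scales=[0.01, 0.02, 0.04]))
```

    For real drifter or model pairs, a scale-dependent λ(δ) ∝ δ^(-2/3) at small τ would indicate Richardson-type turbulent dispersion, while a flat λ(δ) indicates exponential separation.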

  19. An Anisotropic Hardening Model for Springback Prediction

    NASA Astrophysics Data System (ADS)

    Zeng, Danielle; Xia, Z. Cedric

    2005-08-01

    As more Advanced High-Strength Steels (AHSS) are heavily used for automotive body structures and closure panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture the realistic Bauschinger effect at reverse loading, such as when material passes through die radii or a drawbead during the sheet metal forming process. This model accounts for material anisotropic yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to accurately represent the Bauschinger effect. The effectiveness of the model is demonstrated by comparison of numerical and experimental springback results for a DP600 straight U-channel test.

  20. Predictions in multifield models of inflation

    SciTech Connect

    Frazer, Jonathan

    2014-01-01

    This paper presents a method for obtaining an analytic expression for the density function of observables in multifield models of inflation with sum-separable potentials. The most striking result is that the density function in general possesses a sharp peak and the location of this peak is only mildly sensitive to the distribution of initial conditions. A simple argument is given for why this result holds for a more general class of models than just those with sum-separable potentials and why for such models, it is possible to obtain robust predictions for observable quantities. As an example, the joint density function of the spectral index and running in double quadratic inflation is computed. For scales leaving the horizon 55 e-folds before the end of inflation, the density function peaks at n_s = 0.967 and α = 0.0006 for the spectral index and running, respectively.

  1. Validation of Kp Estimation and Prediction Models

    NASA Astrophysics Data System (ADS)

    McCollough, J. P., II; Young, S. L.; Frey, W.

    2014-12-01

    Specification and forecast of geomagnetic indices is an important capability for space weather operations. The University Partnering for Operational Support (UPOS) effort at the Applied Physics Laboratory of Johns Hopkins University (JHU/APL) produced many space weather models, including the Kp Predictor and Kp Estimator. We perform a validation of index forecast products against definitive indices computed by the Deutsches GeoForschungsZentrum (GFZ) Potsdam. We compute continuous predictand skill scores, as well as 2x2 contingency tables and associated scalar quantities for different index thresholds. We also compute a skill score against a nowcast persistence model. We discuss various sources of error for the models and how they may potentially be improved.
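    The 2x2 contingency-table scores mentioned in the abstract can be computed directly from paired forecast and observed index values. A minimal sketch (the function name and the particular scores shown are illustrative choices, not taken from the UPOS products):

    ```python
    import numpy as np

    def contingency_skill(forecast, observed, threshold):
        """2x2 contingency table for threshold exceedance, plus scalar scores."""
        f = np.asarray(forecast) >= threshold
        o = np.asarray(observed) >= threshold
        hits = np.sum(f & o)
        false_alarms = np.sum(f & ~o)
        misses = np.sum(~f & o)
        correct_neg = np.sum(~f & ~o)
        n = hits + false_alarms + misses + correct_neg
        pod = hits / (hits + misses)                # probability of detection
        far = false_alarms / (hits + false_alarms)  # false alarm ratio
        # Heidke skill score: fraction correct relative to random chance
        expected = ((hits + misses) * (hits + false_alarms)
                    + (correct_neg + misses) * (correct_neg + false_alarms)) / n
        hss = (hits + correct_neg - expected) / (n - expected)
        return {"POD": float(pod), "FAR": float(far), "HSS": float(hss)}
    ```

    A perfect forecast yields POD = 1, FAR = 0, and HSS = 1; a persistence baseline can be scored the same way for comparison.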

  2. Health effects models for nuclear power plant accident consequence analysis. Part 1, Introduction, integration, and summary: Revision 2

    SciTech Connect

    Evans, J.S.; Abrahmson, S.; Bender, M.A.; Boecker, B.B.; Scott, B.R.; Gilbert, E.S.

    1993-10-01

    This report is a revision of NUREG/CR-4214, Rev. 1, Part 1 (1990), Health Effects Models for Nuclear Power Plant Accident Consequence Analysis. This revision has been made to incorporate changes to the Health Effects Models recommended in two addenda to the NUREG/CR-4214, Rev. 1, Part II, 1989 report. The first of these addenda provided recommended changes to the health effects models for low-LET radiations based on recent reports from UNSCEAR, ICRP and NAS/NRC (BEIR V). The second addendum presented changes needed to incorporate alpha-emitting radionuclides into the accident exposure source term. As in the earlier version of this report, models are provided for early and continuing effects, cancers and thyroid nodules, and genetic effects. Weibull dose-response functions are recommended for evaluating the risks of early and continuing health effects. Three potentially lethal early effects -- the hematopoietic, pulmonary, and gastrointestinal syndromes -- are considered. Linear and linear-quadratic models are recommended for estimating the risks of seven types of cancer in adults -- leukemia, bone, lung, breast, gastrointestinal, thyroid, and "other". For most cancers, both incidence and mortality are addressed. Five classes of genetic diseases -- dominant, x-linked, aneuploidy, unbalanced translocations, and multifactorial diseases -- are also considered. Data are provided that should enable analysts to consider the timing and severity of each type of health risk.
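    A Weibull dose-response function of the kind recommended for early effects expresses risk through a cumulative hazard of the form ln(2)·(D/D50)^V, so the risk is exactly 50% at the median effective dose D50. A sketch with that form; the parameter values used here are purely illustrative and are not the report's fitted values:

    ```python
    import math

    def weibull_risk(dose, d50, shape):
        """Weibull dose-response for an early health effect.

        Cumulative hazard H = ln(2) * (dose/d50)**shape, so risk(d50) = 0.5.
        d50 (median effective dose) and shape are illustrative inputs, not
        the report's recommended parameter values.
        """
        if dose <= 0.0:
            return 0.0
        hazard = math.log(2.0) * (dose / d50) ** shape
        return 1.0 - math.exp(-hazard)
    ```

    The shape parameter controls how steeply risk rises around D50, which is what distinguishes, say, the hematopoietic from the pulmonary syndrome in such models.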

  3. Visual Performance Prediction Using Schematic Eye Models

    NASA Astrophysics Data System (ADS)

    Schwiegerling, James Theodore

    The goal of visual modeling is to predict the visual performance or a change in performance of an individual from a model of the human visual system. In designing a model of the human visual system, two distinct functions are considered. The first is the production of an image incident on the retina by the optical system of the eye, and the second is the conversion of this image into a perceived image by the retina and brain. The eye optics are evaluated using raytracing techniques familiar to the optical engineer. The effects of retinal and brain function are combined with the raytracing results by analyzing the modulation of the retinal image. Each of these processes is important for evaluating the performance of the entire visual system. Techniques for converting the abstract system performance measures used by optical engineers into clinically applicable measures such as visual acuity and contrast sensitivity are developed in this dissertation. Furthermore, a methodology for applying videokeratoscopic height data to the visual model is outlined. These tools are useful in modeling the visual effects of corrective lenses, ocular maladies and refractive surgeries. The modeling techniques are applied to examples of soft contact lenses, keratoconus, radial keratotomy, photorefractive keratectomy and automated lamellar keratoplasty. The modeling tools developed in this dissertation are meant to be general and modular. As improvements to the measurements of the properties and functionality of the various visual components are made, the new information can be incorporated into the visual system model. Furthermore, the examples discussed here represent only a small subset of the applications of the visual model. Additional ocular maladies and emerging refractive surgeries can be modeled as well.

  4. Simple predictions from multifield inflationary models.

    PubMed

    Easther, Richard; Frazer, Jonathan; Peiris, Hiranya V; Price, Layne C

    2014-04-25

    We explore whether multifield inflationary models make unambiguous predictions for fundamental cosmological observables. Focusing on N-quadratic inflation, we numerically evaluate the full perturbation equations for models with 2, 3, and O(100) fields, using several distinct methods for specifying the initial values of the background fields. All scenarios are highly predictive, with the probability distribution functions of the cosmological observables becoming more sharply peaked as N increases. For N=100 fields, 95% of our Monte Carlo samples fall in the ranges ns∈(0.9455,0.9534), α∈(-9.741,-7.047)×10-4, r∈(0.1445,0.1449), and riso∈(0.02137,3.510)×10-3 for the spectral index, running, tensor-to-scalar ratio, and isocurvature-to-adiabatic ratio, respectively. The expected amplitude of isocurvature perturbations grows with N, raising the possibility that many-field models may be sensitive to postinflationary physics and suggesting new avenues for testing these scenarios.

  5. Modeling of long range transport pathways for radionuclides to Korea during the Fukushima Dai-ichi nuclear accident and their association with meteorological circulations.

    PubMed

    Lee, Kwan-Hee; Kim, Ki-Hyun; Lee, Jin-Hong; Yun, Ju-Yong; Kim, Cheol-Hee

    2015-10-01

    The Lagrangian FLEXible PARTicle (FLEXPART) dispersion model and National Centers for Environmental Prediction/Global Forecast System (NCEP/GFS) meteorological data were used to simulate the long range transport pathways of three artificial radionuclides: (131)I, (137)Cs, and (133)Xe, coming into the Korean Peninsula during the Fukushima Dai-ichi nuclear accident. Using emission rates of these radionuclides estimated from previous studies, three distinctive transport routes of these radionuclides toward the Korean Peninsula for a period from 10 March to 20 April 2011 were identified at three spatial scales: 1) intercontinental scale - plume released since mid-March 2011 and transported to the north, arriving in Korea on 23 March 2011, 2) global (hemispherical) scale - plume traveling over the whole northern hemisphere passing through the Pacific Ocean/Europe to reach the Korean Peninsula with relatively low concentrations in late March 2011 and, 3) regional scale - plume released in early April 2011 that arrived at the Korean Peninsula via the sea southwest of Japan, influenced directly by veering mesoscale wind circulations. Our identification of these transport routes at three different scales of meteorological circulations suggests the feasibility of a multi-scale approach for more accurate prediction of radionuclide transport in the study area. In light of the fact that the observed arrival/duration time of peaks were explained well by the FLEXPART model coupled with NCEP/GFS input data, our approach can be used meaningfully as a decision support model for radiation emergency situations. PMID:26149179

  7. Status report of advanced cladding modeling work to assess cladding performance under accident conditions

    SciTech Connect

    B.J. Merrill; Shannon M. Bragg-Sitton

    2013-09-01

    Scoping simulations performed using a severe accident code can be applied to investigate the influence of advanced materials on beyond design basis accident progression and to identify any existing code limitations. In 2012 an effort was initiated to develop a numerical capability for understanding the potential safety advantages that might be realized during severe accident conditions by replacing Zircaloy components in light water reactors (LWRs) with silicon carbide (SiC) components. To this end, a version of the MELCOR code, under development at the Sandia National Laboratories in New Mexico (SNL/NM), was modified by substituting SiC for Zircaloy in the MELCOR reactor core oxidation and material properties routines. The modified version of MELCOR was benchmarked against available experimental data to ensure that present SiC oxidation theory in air and steam was correctly implemented in the code. Additional modifications have been implemented in the code in 2013 to improve the specificity in defining components fabricated from non-standard materials. An overview of these modifications and the status of their implementation are summarized below.

  8. Critical conceptualism in environmental modeling and prediction.

    PubMed

    Christakos, G

    2003-10-15

    Many important problems in environmental science and engineering are of a conceptual nature. Research and development, however, often becomes so preoccupied with technical issues, which are themselves fascinating, that it neglects essential methodological elements of conceptual reasoning and theoretical inquiry. This work suggests that valuable insight into environmental modeling can be gained by means of critical conceptualism which focuses on the software of human reason and, in practical terms, leads to a powerful methodological framework of space-time modeling and prediction. A knowledge synthesis system develops the rational means for the epistemic integration of various physical knowledge bases relevant to the natural system of interest in order to obtain a realistic representation of the system, provide a rigorous assessment of the uncertainty sources, generate meaningful predictions of environmental processes in space-time, and produce science-based decisions. No restriction is imposed on the shape of the distribution model or the form of the predictor (non-Gaussian distributions, multiple-point statistics, and nonlinear models are automatically incorporated). The scientific reasoning structure underlying knowledge synthesis involves teleologic criteria and stochastic logic principles which have important advantages over the reasoning method of conventional space-time techniques. Insight is gained in terms of real world applications, including the following: the study of global ozone patterns in the atmosphere using data sets generated by instruments on board the Nimbus 7 satellite and secondary information in terms of total ozone-tropopause pressure models; the mapping of arsenic concentrations in the Bangladesh drinking water by assimilating hard and soft data from an extensive network of monitoring wells; and the dynamic imaging of probability distributions of pollutants across the Kalamazoo river. PMID:14594379

  9. Predictive modelling of boiler fouling. Final report.

    SciTech Connect

    Chatwani, A

    1990-12-31

    A spectral element method embodying Large Eddy Simulation based on Re-Normalization Group theory for simulating Sub Grid Scale viscosity was chosen for this work. This method is embodied in a computer code called NEKTON. NEKTON solves the unsteady, 2D or 3D, incompressible Navier Stokes equations by a spectral element method. The code was later extended to include the variable density and multiple reactive species effects at low Mach numbers, and to compute transport of large particles governed by inertia. Transport of small particles is computed by treating them as trace species. Code computations were performed for a number of test conditions typical of flow past a deep tube bank in a boiler. Results indicate qualitatively correct behavior. Predictions of deposition rates and deposit shape evolution also show correct qualitative behavior. These simulations are the first attempts to compute flow field results at realistic flow Reynolds numbers of the order of 10{sup 4}. Code validation was not done; comparison with experiment also could not be made as many phenomenological model parameters, e.g., sticking or erosion probabilities and their dependence on experimental conditions, were not known. The predictions however demonstrate the capability to predict fouling from first principles. Further work is needed: use of a large or massively parallel machine; code validation; parametric studies, etc.

  10. Uncertainties propagation in the framework of a Rod Ejection Accident modeling based on a multi-physics approach

    SciTech Connect

    Le Pallec, J. C.; Crouzet, N.; Bergeaud, V.; Delavaud, C.

    2012-07-01

    The control of uncertainties in the field of reactor physics and their propagation in best-estimate modeling are a major issue in safety analysis. In this framework, the CEA develops a methodology to perform multi-physics simulations including uncertainties analysis. The present paper aims to present and apply this methodology for the analysis of an accidental situation such as REA (Rod Ejection Accident). This accident is characterized by a strong interaction between the different areas of the reactor physics (neutronic, fuel thermal and thermal hydraulic). The modeling is performed with the CRONOS2 code. The uncertainties analysis has been conducted with the URANIE platform developed by the CEA: for each identified response from the modeling (output) and considering a set of key parameters with their uncertainties (input), a surrogate model in the form of a neural network has been produced. The set of neural networks is then used to carry out a sensitivity analysis, which consists of a global variance analysis with the determination of the Sobol indices for all responses. The sensitivity indices are obtained for the input parameters by an approach based on the use of polynomial chaos. The present exercise helped to develop a methodological flow scheme and to consolidate the use of the URANIE tool in the framework of parallel calculations. Finally, the use of polynomial chaos allowed computing high order sensitivity indices and thus highlighting and classifying the influence of identified uncertainties on each response of the analysis (single and interaction effects). (authors)
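    The global variance analysis described in the abstract can be illustrated with the standard pick-freeze Monte Carlo estimator for first-order Sobol indices, applied here to a generic cheap surrogate in place of the URANIE neural networks (function name, sampling scheme, and the uniform [0, 1] input space are illustrative assumptions):

    ```python
    import numpy as np

    def sobol_first_order(model, n_inputs, n_samples=20000, seed=0):
        """First-order Sobol indices via the pick-freeze Monte Carlo estimator.

        `model` is any cheap surrogate mapping an (n, d) array of inputs in
        [0, 1]^d to n outputs. Two independent sample matrices A and B are
        drawn; freezing one column of A at its B value isolates the variance
        contribution of that input.
        """
        rng = np.random.default_rng(seed)
        A = rng.random((n_samples, n_inputs))
        B = rng.random((n_samples, n_inputs))
        yA, yB = model(A), model(B)
        var = np.var(yA)
        indices = np.empty(n_inputs)
        for i in range(n_inputs):
            ABi = A.copy()
            ABi[:, i] = B[:, i]   # freeze input i at its B value
            indices[i] = np.mean(yB * (model(ABi) - yA)) / var
        return indices
    ```

    For a response that depends only on its first input, the estimator returns an index near 1 for that input and near 0 for the others; polynomial chaos, as used in the paper, reaches the same indices analytically from the expansion coefficients.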

  11. RTMOD: an Internet based system to analyse the predictions of long-range atmospheric dispersion models

    NASA Astrophysics Data System (ADS)

    Bellasio, Roberto; Bianconi, Roberto; Graziani, Giovanni; Mosca, Sonia

    1999-08-01

    After the Chernobyl accident caused the atmospheric release of radioactive substances that contaminated most of the European territory, the importance of supporting the decisional process in emergency conditions with reliable long-range dispersion models was understood. Generally, the reliability of models is evaluated and verified through comparison against measurements gained during planned experiments or accidental releases. The proper evaluation is based on a set of appropriate statistical indices, each of them giving insight into the specific characteristics of the model. This paper describes an Internet-based system (RTMOD, real time model evaluation) developed to compare in real time, on a graphical and numerical basis, the predictions of several long-range dispersion models. The structure of the system and some examples are presented in the remainder of this paper. RTMOD was developed to compare model predictions from various 'dry runs' (fictitious atmospheric releases), but it can also be used to compare model results against measurements in the situation of an actual release. Hence it is also a useful tool for validating mathematical dispersion models. Moreover, provided that a certain number of models are used, RTMOD also becomes a useful tool in the real-time management of accidental releases by indicating the probability that a fixed threshold value will be exceeded, based on the set of model predictions.
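    The ensemble exceedance probability mentioned in the last sentence reduces to counting, at each grid point, the fraction of participating models that predict a concentration above the threshold. A minimal sketch of that idea (not the actual RTMOD implementation):

    ```python
    import numpy as np

    def exceedance_probability(predictions, threshold):
        """Fraction of ensemble members exceeding a threshold at each point.

        `predictions` stacks the concentration fields of the participating
        dispersion models along axis 0; the result has the shape of a single
        model's field, with values in [0, 1].
        """
        preds = np.asarray(predictions, dtype=float)
        return np.mean(preds > threshold, axis=0)
    ```

    With three models predicting [1, 5], [3, 5], and [0, 2] at two points and a threshold of 2, the exceedance probabilities are 1/3 and 2/3 respectively.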

  12. Predictability of the Lorenz chaotic model

    NASA Astrophysics Data System (ADS)

    Evans, E.; Bhatti, N.; Kinney, J.; Pann, L.; Pena, M.; Yang, S.; Kalnay, E.; Hansen, J.

    2003-04-01

    The Lorenz (1963) model has been widely used as a prototype of chaotic behavior and an example of lack of long-term predictability. Its solution with standard parameter values depicts a two-regime distribution. We applied the breeding of unstable modes technique (Toth and Kalnay, 1993, 1997) to this model to determine the regions in the phase space with larger instabilities. As it turned out, the results show not only a coherent region of high instability, indicated by the larger values of the bred vector growth rates, but also the feasibility to develop simple forecasting rules to determine both whether a shift to the other regime will occur in the following cycle and how long the following regime will last.
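    One breeding cycle of the kind described can be sketched as follows: integrate a control and a perturbed trajectory side by side, then measure the logarithmic growth of the perturbation. The forward-Euler integrator, step counts, and rescaling choices below are illustrative simplifications of the Toth and Kalnay technique, not the authors' setup:

    ```python
    import numpy as np

    def lorenz_step(x, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """One forward-Euler step of the Lorenz (1963) system."""
        dxdt = np.array([sigma * (x[1] - x[0]),
                         x[0] * (rho - x[2]) - x[1],
                         x[0] * x[1] - beta * x[2]])
        return x + dt * dxdt

    def bred_growth_rate(x0, delta0=1e-3, n_steps=8, seed=1):
        """One breeding cycle: advance control and perturbed states together,
        then return the mean log growth rate of the perturbation per step."""
        rng = np.random.default_rng(seed)
        pert = rng.standard_normal(3)
        pert *= delta0 / np.linalg.norm(pert)   # rescale to bred-vector size
        x = np.asarray(x0, dtype=float)
        xp = x + pert
        for _ in range(n_steps):
            x, xp = lorenz_step(x), lorenz_step(xp)
        return float(np.log(np.linalg.norm(xp - x) / delta0) / n_steps)
    ```

    Repeating the cycle (rescaling the difference back to delta0 each time) and mapping the growth rate over the attractor reveals the coherent high-instability regions the abstract refers to.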

  13. Thermal barrier coating life prediction model development

    NASA Technical Reports Server (NTRS)

    Demasi, J.; Sheffler, K.

    1984-01-01

    The objective of this program is to develop an integrated life prediction model accounting for all potential life-limiting Thermal Barrier Coating (TBC) degradation and failure modes including spallation resulting from cyclic thermal stress, oxidative degradation, hot corrosion, erosion, and foreign object damage (FOD). The mechanisms and relative importance of the various degradation and failure modes will be determined, and the methodology to predict predominant mode failure life in turbine airfoil application will be developed and verified. An empirically based correlative model relating coating life to parametrically expressed driving forces such as temperature and stress will be employed. The two-layer TBC system being investigated, designated PWA264, currently is in commercial aircraft revenue service. It consists of an inner low pressure chamber plasma-sprayed NiCoCrAlY metallic bond coat underlayer (4 to 6 mils) and an outer air plasma-sprayed 7 w/o Y2O3-ZrO2 (8 to 12 mils) ceramic top layer.

  14. A predictive fitness model for influenza

    NASA Astrophysics Data System (ADS)

    Łuksza, Marta; Lässig, Michael

    2014-03-01

    The seasonal human influenza A/H3N2 virus undergoes rapid evolution, which produces significant year-to-year sequence turnover in the population of circulating strains. Adaptive mutations respond to human immune challenge and occur primarily in antigenic epitopes, the antibody-binding domains of the viral surface protein haemagglutinin. Here we develop a fitness model for haemagglutinin that predicts the evolution of the viral population from one year to the next. Two factors are shown to determine the fitness of a strain: adaptive epitope changes and deleterious mutations outside the epitopes. We infer both fitness components for the strains circulating in a given year, using population-genetic data of all previous strains. From fitness and frequency of each strain, we predict the frequency of its descendent strains in the following year. This fitness model maps the adaptive history of influenza A and suggests a principled method for vaccine selection. Our results call for a more comprehensive epidemiology of influenza and other fast-evolving pathogens that integrates antigenic phenotypes with other viral functions coupled by genetic linkage.
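    The projection step at the core of such a fitness model is simple: each strain's frequency is weighted by the exponential of its inferred fitness and the population is renormalized. A sketch of that step only (the fitness inference from epitope and non-epitope mutations, which is the substance of the paper, is not shown):

    ```python
    import numpy as np

    def predict_frequencies(freqs, fitness):
        """One-season projection of strain frequencies.

        Each strain's current frequency is multiplied by exp(fitness) and the
        result is renormalized so the predicted frequencies sum to one.
        """
        w = np.asarray(freqs, dtype=float) * np.exp(np.asarray(fitness, dtype=float))
        return w / w.sum()
    ```

    For example, two equally common strains where one carries a fitness advantage of ln 3 are projected to 75% and 25% of the next season's population.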

  15. Thermal barrier coating life prediction model development

    NASA Technical Reports Server (NTRS)

    Sheffler, K. D.; Demasi, J. T.

    1985-01-01

    A methodology was established to predict thermal barrier coating life in an environment simulative of that experienced by gas turbine airfoils. Specifically, work is being conducted to determine failure modes of thermal barrier coatings in the aircraft engine environment. Analytical studies coupled with appropriate physical and mechanical property determinations are being employed to derive coating life prediction model(s) on the important failure mode(s). An initial review of experimental and flight service components indicates that the predominant mode of TBC failure involves thermomechanical spallation of the ceramic coating layer. This ceramic spallation involves the formation of a dominant crack in the ceramic coating parallel to and closely adjacent to the metal-ceramic interface. Initial results from a laboratory test program designed to study the influence of various driving forces such as temperature, thermal cycle frequency, environment, and coating thickness, on ceramic coating spalling life suggest that bond coat oxidation damage at the metal-ceramic interface contributes significantly to thermomechanical cracking in the ceramic layer. Low cycle rate furnace testing in air and in argon clearly shows a dramatic increase of spalling life in the non-oxidizing environments.

  16. Source term estimation using air concentration measurements and a Lagrangian dispersion model - Experiments with pseudo and real cesium-137 observations from the Fukushima nuclear accident

    NASA Astrophysics Data System (ADS)

    Chai, Tianfeng; Draxler, Roland; Stein, Ariel

    2015-04-01

    A transfer coefficient matrix (TCM) was created in a previous study using a Lagrangian dispersion model to provide plume predictions under different emission scenarios. The TCM estimates the contribution of each emission period to all sampling locations and can be used to estimate source terms by adjusting emission rates to match the model prediction with the measurements. In this paper, the TCM is used to formulate a cost functional that measures the differences between the model predictions and the actual air concentration measurements. The cost functional also includes a background term which adds the differences between a first guess and the updated emission estimates. Uncertainties of the measurements, as well as those for the first guess of source terms are both considered in the cost functional. In addition, a penalty term is added to create a smooth temporal change in the release rate. The method is first tested with pseudo observations generated using the Hybrid Single Particle Lagrangian Integrated Trajectory (HYSPLIT) model at the same location and time as the actual observations. The inverse estimation system is able to accurately recover the release rates and performs better than a direct solution using singular value decomposition (SVD). It is found that computing ln(c) differences between model and observations is better than using the original concentration c differences in the cost functional. The inverse estimation results are not sensitive to artificially introduced observational errors or different first guesses. To further test the method, daily average cesium-137 air concentration measurements around the globe from the Fukushima nuclear accident are used to estimate the release of the radionuclide. Compared with the latest estimates by Katata et al. (2014), the recovered release rates successfully capture the main temporal variations. 
When using subsets of the measured data, the inverse estimation method still manages to identify most of the
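    The quadratic cost functional described above, with observation, background, and temporal-smoothness terms, admits a direct normal-equations solution when the ln(c) transform is omitted. A simplified sketch under that assumption; the matrix sizes, scalar uncertainty weights, and first-difference smoothness operator are illustrative, not the paper's exact formulation:

    ```python
    import numpy as np

    def estimate_source(tcm, obs, first_guess, obs_sigma=1.0, bg_sigma=1.0, smooth=1.0):
        """Minimize J(q) = |(M q - y)/so|^2 + |(q - qb)/sb|^2 + smooth * |D q|^2.

        tcm:         transfer coefficient matrix M (n_obs x n_periods)
        obs:         measured air concentrations y
        first_guess: background source term qb
        The penalty on D q (first differences of q) enforces a smooth
        temporal change in the release rate.
        """
        M = np.asarray(tcm, dtype=float)
        y = np.asarray(obs, dtype=float)
        qb = np.asarray(first_guess, dtype=float)
        n = M.shape[1]
        D = np.diff(np.eye(n), axis=0)        # first-difference operator
        A = (M.T @ M) / obs_sigma**2 + np.eye(n) / bg_sigma**2 + smooth * (D.T @ D)
        b = (M.T @ y) / obs_sigma**2 + qb / bg_sigma**2
        return np.linalg.solve(A, b)
    ```

    With a well-conditioned TCM the solution tracks the observations; the background and smoothness terms take over for release periods the measurements constrain poorly, which is what distinguishes this approach from a bare SVD solution.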

  17. Predictive Capability Maturity Model for computational modeling and simulation.

    SciTech Connect

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.

  18. Analytical predictions for the performance of a reinforced concrete containment model subject to overpressurization

    SciTech Connect

    Weatherby, J.R.; Clauss, D.B.

    1987-01-01

    Under the sponsorship of the US Nuclear Regulatory Commission, Sandia National Laboratories is investigating methods for predicting the structural performance of nuclear reactor containment buildings under hypothesized severe accident conditions. As part of this program, a 1/6th-scale reinforced concrete containment model will be pressurized to failure in early 1987. Data generated by the test will be compared to analytical predictions of the structural response in order to assess the accuracy and reliability of the analytical techniques. As part of the pretest analysis effort, Sandia has conducted a number of analyses of the containment structure using the ABAQUS general purpose finite element code. This paper describes results from a nonlinear axisymmetric shell analysis as well as the material models and failure criteria used in conjunction with the analysis.

  19. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    PubMed

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference making as few assumptions about restrictive parametric models as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models by one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology. PMID:23687472

  20. Data driven propulsion system weight prediction model

    NASA Technical Reports Server (NTRS)

    Gerth, Richard J.

    1994-01-01

    The objective of the research was to develop a method to predict the weight of paper engines, i.e., engines that are in the early stages of development. The impetus for the project was the Single Stage To Orbit (SSTO) project, where engineers need to evaluate alternative engine designs. Since the SSTO is a performance-driven project, the performance models for alternative designs were well understood. The next tradeoff is weight. Since it is known that engine weight varies with thrust levels, a model is required that would allow discrimination between engines that produce the same thrust. Above all, the model had to be rooted in data with assumptions that could be justified based on the data. The general approach was to collect data on as many existing engines as possible and build a statistical model of engine weight as a function of various component performance parameters. This was considered a reasonable level to begin the project because the data would be readily available, and it would be at the level of most paper engines, prior to detailed component design.
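    A statistical weight model of the kind described is often fit in log space, so that weight scales as a power of a performance parameter such as thrust. A sketch under that single-regressor assumption (the paper's actual regression used multiple component performance parameters, and the coefficients here are fit to whatever data is supplied, not to real engine data):

    ```python
    import numpy as np

    def fit_weight_model(thrust, weight):
        """Fit W = a * T**b by ordinary least squares in log-log space.

        Returns the multiplier a and exponent b; a single-regressor
        illustration of the data-driven approach described in the abstract.
        """
        log_t = np.log(np.asarray(thrust, dtype=float))
        log_w = np.log(np.asarray(weight, dtype=float))
        b, log_a = np.polyfit(log_t, log_w, 1)   # slope, intercept
        return float(np.exp(log_a)), float(b)

    def predict_weight(a, b, thrust):
        """Predicted weight of a paper engine at a given thrust level."""
        return a * thrust ** b
    ```

    Because the fit is in log space, a multiplicative error structure is assumed, which is the usual choice when component weights span orders of magnitude across the engine database.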

  1. Developing Models for Predictive Climate Science

    SciTech Connect

    Drake, John B; Jones, Philip W

    2007-01-01

    The Community Climate System Model results from a multi-agency collaboration designed to construct cutting-edge climate science simulation models for a broad research community. Predictive climate simulations are currently being prepared for the petascale computers of the near future. Modeling capabilities are continuously being improved in order to provide better answers to critical questions about Earth's climate. Climate change and its implications are front page news in today's world. Could global warming be responsible for the July 2006 heat waves in Europe and the United States? Should more resources be devoted to preparing for an increase in the frequency of strong tropical storms and hurricanes like Katrina? Will coastal cities be flooded due to a rise in sea level? The National Climatic Data Center (NCDC), which archives all weather data for the nation, reports that global surface temperatures have increased over the last century, and that the rate of increase is three times greater since 1976. Will temperatures continue to climb at this rate, will they decline again, or will the rate of increase become even steeper? To address such a flurry of questions, scientists must adopt a systematic approach and develop a predictive framework. With responsibility for advising on energy and technology strategies, the DOE is dedicated to advancing climate research in order to elucidate the causes of climate change, including the role of carbon loading from fossil fuel use. Thus, climate science--which by nature involves advanced computing technology and methods--has been the focus of a number of DOE's SciDAC research projects. Dr. John Drake (ORNL) and Dr. Philip Jones (LANL) served as principal investigators on the SciDAC project, 'Collaborative Design and Development of the Community Climate System Model for Terascale Computers.' The Community Climate System Model (CCSM) is a fully-coupled global system that provides state-of-the-art computer simulations of the

  2. Anthropometric dependence of the response of a thorax FE model under high speed loading: validation and real world accident replication.

    PubMed

    Roth, Sébastien; Torres, Fabien; Feuerstein, Philippe; Thoral-Pierre, Karine

    2013-05-01

Finite element analysis is frequently used in several fields such as automotive simulation and biomechanics. It helps researchers and engineers to understand the mechanical behaviour of complex structures. The development of computer science has brought the possibility of developing realistic computational models which can behave like physical ones, avoiding the difficulties and costs of experimental tests. In the framework of biomechanics, many FE models have been developed in the last few decades, enabling investigation of the behaviour of the human body subjected to severe loading, such as in road traffic accidents or ballistic impact. In both cases, the thorax/abdomen/pelvis system is frequently injured, and understanding the behaviour of this complex system is of extreme importance. In order to explore the dynamic response of this system to impact loading, a finite element model of the human thorax/abdomen/pelvis system has therefore been developed, including the main organs: heart, lungs, kidneys, liver, spleen, the skeleton (with vertebrae, intervertebral discs, and ribs), stomach, intestines, muscles, and skin. The FE model is based on a 3D reconstruction made from medical records of anonymous patients who had medical scans unrelated to the present study. Several scans were analyzed, and specific attention was paid to the anthropometry of the reconstructed model, which can be considered a 50th percentile male model. The biometric parameters and laws were implemented in the dynamic FE code (Radioss, Altair HyperWorks 11) used for the dynamic simulations. The 50th percentile model was then validated against experimental data available in the literature in terms of deflection and force, whose curves must lie within the experimental corridors. However, for other anthropometries (small male or large male models), questions about the validation and the results of numerical accident replications can be raised. PMID:23246086

  4. Heuristic Modeling for TRMM Lifetime Predictions

    NASA Technical Reports Server (NTRS)

    Jordan, P. S.; Sharer, P. J.; DeFazio, R. L.

    1996-01-01

Analysis time for computing the expected mission lifetimes of proposed frequently maneuvering, tightly altitude-constrained, Earth-orbiting spacecraft has been significantly reduced by means of a heuristic modeling method implemented in a commercial off-the-shelf spreadsheet product (QuattroPro) running on a personal computer (PC). The method uses a look-up table to estimate the maneuver frequency per month as a function of the spacecraft ballistic coefficient and the solar flux index, then computes the associated fuel use with a simple engine model. Maneuver frequency data points are produced by means of a single 1-month run of traditional mission analysis software for each of the 12 to 25 data points required for the table. As the data-point computations are required only at mission design start-up and on the occasion of significant mission redesigns, the dependence on time-consuming traditional modeling methods is dramatically reduced. Results to date have agreed with traditional methods to within 1 to 1.5 percent. The spreadsheet approach is applicable to a wide variety of Earth-orbiting spacecraft with tight altitude constraints. It will be particularly useful to missions such as the Tropical Rainfall Measurement Mission scheduled for launch in 1997, whose mission lifetime calculations are heavily dependent on frequently revised solar flux predictions.
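The look-up-table method described above can be sketched in a few lines. All numbers below (table entries, engine parameters, flux forecast) are illustrative placeholders, not the values used for TRMM:

```python
import math

# Maneuver-frequency look-up table (maneuvers/month), indexed by
# ballistic-coefficient bin and F10.7 solar-flux bin.  Placeholder values.
FLUX_BINS = (70.0, 130.0, 200.0)      # F10.7 solar flux index
BC_BINS = (50.0, 100.0)               # ballistic coefficient, kg/m^2
FREQ_TABLE = {
    50.0:  {70.0: 1.0, 130.0: 2.5, 200.0: 5.0},
    100.0: {70.0: 0.5, 130.0: 1.5, 200.0: 3.0},
}

def nearest(bins, x):
    """Snap a value to the closest tabulated bin."""
    return min(bins, key=lambda b: abs(b - x))

def fuel_per_maneuver(mass_kg, dv_m_s, isp_s):
    """Simple engine model: rocket equation for one small reboost burn."""
    g0 = 9.80665
    return mass_kg * (1.0 - math.exp(-dv_m_s / (isp_s * g0)))

def lifetime_months(fuel_kg, bc, flux_forecast, mass_kg=3500.0,
                    dv_m_s=0.6, isp_s=220.0):
    """Months until the fuel budget is exhausted, given a monthly
    solar-flux forecast (the forecast can be revised and the model re-run)."""
    months = 0
    for flux in flux_forecast:
        n = FREQ_TABLE[nearest(BC_BINS, bc)][nearest(FLUX_BINS, flux)]
        burn = n * fuel_per_maneuver(mass_kg, dv_m_s, isp_s)
        if fuel_kg < burn:
            break
        fuel_kg -= burn
        months += 1
    return months
```

Re-running `lifetime_months` against an updated flux forecast is exactly the cheap "what-if" loop that replaces a full mission-analysis run.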

  5. Predictive modeling of low solubility semiconductor alloys

    NASA Astrophysics Data System (ADS)

    Rodriguez, Garrett V.; Millunchick, Joanna M.

    2016-09-01

    GaAsBi is of great interest for applications in high efficiency optoelectronic devices due to its highly tunable bandgap. However, the experimental growth of high Bi content films has proven difficult. Here, we model GaAsBi film growth using a kinetic Monte Carlo simulation that explicitly takes cation and anion reactions into account. The unique behavior of Bi droplets is explored, and a sharp decrease in Bi content upon Bi droplet formation is demonstrated. The high mobility of simulated Bi droplets on GaAsBi surfaces is shown to produce phase separated Ga-Bi droplets as well as depressions on the film surface. A phase diagram for a range of growth rates that predicts both Bi content and droplet formation is presented to guide the experimental growth of high Bi content GaAsBi films.
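As a hedged illustration of the kinetic Monte Carlo machinery the abstract refers to (the actual simulation, with explicit cation and anion reactions, is far richer), a minimal rejection-free event-selection step might look like:

```python
import math
import random

def kmc_step(rates, rng):
    """One rejection-free kinetic Monte Carlo (BKL) step: choose an event
    with probability proportional to its rate and advance the clock by an
    exponentially distributed waiting time with mean 1/sum(rates)."""
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    chosen = len(rates) - 1          # fallback guards against rounding
    for i, rate in enumerate(rates):
        acc += rate
        if r < acc:
            chosen = i
            break
    dt = -math.log(1.0 - rng.random()) / total
    return chosen, dt

# Toy usage: two competing surface events (say, adatom hop vs. incorporation)
# with a 3:1 rate ratio; over many steps the hop is chosen ~75% of the time.
rng = random.Random(42)
counts = [0, 0]
for _ in range(10000):
    event, dt = kmc_step([3.0, 1.0], rng)
    counts[event] += 1
```

A real growth simulation would recompute the rate list after every step from the local lattice configuration; the selection/clock logic stays the same.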

  6. Examining the nonparametric effect of drivers' age in rear-end accidents through an additive logistic regression model.

    PubMed

    Ma, Lu; Yan, Xuedong

    2014-06-01

This study seeks to inspect the nonparametric characteristics connecting the age of the driver to the relative risk of being the at-fault vehicle, in order to discover a more precise and smooth pattern of age impact, which has commonly been neglected in past studies. Records of drivers in two-vehicle rear-end collisions are selected from the General Estimates System (GES) 2011 dataset. These extracted observations constitute inherently matched driver pairs under certain matching variables, including weather conditions, pavement conditions, and road geometry design characteristics, that are shared by pairs of drivers in rear-end accidents. This data structure guarantees that the variance of the response variable does not depend on the matching variables and hence provides high statistical power for modeling. The estimation results exhibit a smooth cubic spline function for examining the nonlinear relationship between the age of the driver and the log odds of being at fault in a rear-end accident. The results are presented with respect to the main effect of age, the interaction effect between age and sex, and the effects of age under different scenarios of pre-crash actions by the leading vehicle. Compared to the conventional specification in which age is categorized into several predefined groups, the proposed method is more flexible and produces quantitatively explicit results. First, it confirms the U-shaped pattern of the age effect, and further shows that the risks of young and old drivers change rapidly with age. Second, the interaction effects between age and sex show that female and male drivers behave differently in rear-end accidents. Third, it is found that the pattern of age impact varies according to the type of pre-crash action exhibited by the leading vehicle. PMID:24642249
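The core idea, a smooth spline of age inside a logistic regression, can be sketched with a hand-rolled truncated-power basis and Newton-Raphson fitting. This is a simplification of the paper's matched-pair additive model, and the U-shaped data below are simulated purely for illustration:

```python
import numpy as np

def cubic_spline_basis(z, knots):
    """Truncated-power cubic spline basis: [1, z, z^2, z^3, (z-k)_+^3]."""
    cols = [np.ones_like(z), z, z**2, z**3]
    cols += [np.clip(z - k, 0.0, None) ** 3 for k in knots]
    return np.column_stack(cols)

def fit_logistic(X, y, n_iter=30):
    """Logistic regression fitted by Newton-Raphson (IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        H = X.T @ (X * (p * (1 - p))[:, None]) + 1e-8 * np.eye(X.shape[1])
        beta += np.linalg.solve(H, X.T @ (y - p))
    return beta

# Simulate a U-shaped age effect on the log odds of being at fault.
rng = np.random.default_rng(0)
age = rng.uniform(18, 85, 4000)
z = (age - 50.0) / 20.0                   # scaled age, for conditioning
true_logit = 0.8 * z**2 - 1.0             # U-shaped in age
y = (rng.random(4000) < 1 / (1 + np.exp(-true_logit))).astype(float)

knots = [-1.0, 0.0, 1.0]
beta = fit_logistic(cubic_spline_basis(z, knots), y)
zq = (np.array([20.0, 50.0, 80.0]) - 50.0) / 20.0
p_hat = 1 / (1 + np.exp(-cubic_spline_basis(zq, knots) @ beta))
```

The fitted curve recovers the U shape: estimated at-fault probability is higher for the young (age 20) and old (age 80) than for middle-aged drivers.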

  7. Partial least square method for modelling ergonomic risks factors on express bus accidents in the east coast of peninsular west Malaysia

    NASA Astrophysics Data System (ADS)

    Hashim, Yusof bin; Taha, Zahari bin

    2015-02-01

The public, stakeholders, and authorities in the Malaysian government show great concern over the high number of passenger injuries and fatalities in express bus accidents. This paper studies the underlying ergonomic risk factors contributing to human error as a cause of express bus accidents, in order to develop an integrated analytical framework. Reliable information about drivers should lead to the design of strategies intended to make the public feel safe in public transport services. In addition, an analysis of ergonomic risk factors was carried out to determine which factors contribute most strongly to accidents. The research was performed on the east coast of peninsular Malaysia using variance-based structural equation modeling, namely the Partial Least Squares (PLS) regression technique. A questionnaire survey was carried out at random among 65 express bus drivers operating from the city of Kuantan in Pahang and 49 express bus drivers operating from the city of Kuala Terengganu in Terengganu to all towns on the east coast of peninsular Malaysia. The ergonomic risk factors questionnaire covers demographic information, occupational information, organizational safety climate, ergonomic workplace, physiological factors, stress at the workplace, physical fatigue, and near-miss accidents. The correlations and significance values between latent constructs (near-miss accidents) were analyzed using the SEM software SmartPLS 3. The findings show that the correlated ergonomic risk factors (occupational information, t = 2.04; stress at the workplace, t = 2.81; physiological factors, t = 2.08) are significantly related to physical fatigue, which in turn mediates near-miss accidents (t = 2.14), at p < 0.05 and t > 1.96. The results show that physical fatigue arising from these ergonomic risk factors contributes to the human error underlying express bus accidents.
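For readers unfamiliar with the technique, the regression core of PLS can be conveyed with a plain PLS1/NIPALS sketch on synthetic data. This is not the SmartPLS path model used in the study; the data and dimensions are invented for illustration:

```python
import numpy as np

def pls1_nipals(X, y, n_components):
    """PLS1 regression via NIPALS deflation; returns (coef, intercept)
    so that predictions are X @ coef + intercept."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc                     # weight: covariance direction
        w /= np.linalg.norm(w)
        t = Xc @ w                        # score
        tt = t @ t
        p = Xc.T @ t / tt                 # X loading
        qk = (yc @ t) / tt                # y loading
        Xc = Xc - np.outer(t, p)          # deflate
        yc = yc - qk * t
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    coef = W @ np.linalg.solve(P.T @ W, q)
    return coef, y_mean - x_mean @ coef

# Synthetic check: y depends on one latent direction of X plus noise,
# so two PLS components recover nearly all of the variance.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
beta_true = np.array([1.0, 2.0, 0.0, 0.0, -1.0])
y = X @ beta_true + 0.01 * rng.normal(size=200)
coef, b0 = pls1_nipals(X, y, n_components=2)
r2 = 1 - np.sum((y - (X @ coef + b0))**2) / np.sum((y - y.mean())**2)
```

The latent-variable construction is what lets PLS handle many correlated indicator variables, the situation that motivates its use in SEM studies like this one.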

  9. Model predictive control of a wind turbine modelled in Simpack

    NASA Astrophysics Data System (ADS)

    Jassmann, U.; Berroth, J.; Matzke, D.; Schelenz, R.; Reiter, M.; Jacobs, G.; Abel, D.

    2014-06-01

Wind turbines (WT) are steadily growing in size to increase their power production, which also increases the loads acting on the turbine's components. At the same time, large structures such as the blades and the tower become more flexible. To minimize this impact, the classical control loops for keeping the power production in an optimum state are more and more extended by load alleviation strategies. These additional control loops can be unified by a multiple-input multiple-output (MIMO) controller to achieve better balancing of tuning parameters. An example of MIMO control, which has recently received more attention from the wind industry, is Model Predictive Control (MPC). In an MPC framework a simplified model of the WT is used to predict its controlled outputs. Based on a user-defined cost function, an online optimization calculates the optimal control sequence. MPC can thereby intrinsically incorporate constraints, e.g., actuator limits. Turbine models used for calculation within the MPC are typically simplified. For testing and verification, multi-body simulations such as FAST, BLADED or FLEX5 are usually used to model system dynamics, but they are still limited in the number of degrees of freedom (DOF). Detailed information about load distribution (e.g. inside the gearbox) cannot be provided by such models. In this paper a Model Predictive Controller is presented and tested in a co-simulation with SIMPACK, a multi-body system (MBS) simulation framework used for detailed load analysis. The analyses are performed on the basis of the IME6.0 MBS WT model, described in this paper. It is based on the rotor of the NREL 5MW WT and consists of a detailed representation of the drive train. This takes into account a flexible main shaft and its main bearings with a planetary gearbox, in which all components are modelled as flexible, as well as a supporting flexible main frame. The wind loads are simulated using the NREL AERODYN v13 code, which has been implemented as a routine to
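As a hedged illustration of the MPC principle the paper builds on (the real controller uses a detailed turbine model, constraints, and a co-simulation with SIMPACK, none of which appear here), an unconstrained linear MPC on a toy double-integrator plant can be written in closed form:

```python
import numpy as np

def prediction_matrices(A, B, N):
    """Stack x_{k+i} = A^i x_k + sum_j A^(i-1-j) B u_{k+j} over the horizon."""
    nx, nu = B.shape
    Phi = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
    Gamma = np.zeros((N * nx, N * nu))
    for i in range(N):
        for j in range(i + 1):
            Gamma[i*nx:(i+1)*nx, j*nu:(j+1)*nu] = \
                np.linalg.matrix_power(A, i - j) @ B
    return Phi, Gamma

def mpc_first_input(A, B, x, x_ref, N, Q, R):
    """Minimize sum ||x_i - x_ref||_Q^2 + ||u_i||_R^2 over the horizon in
    closed form and return only the first input (receding horizon)."""
    nx, nu = B.shape
    Phi, Gamma = prediction_matrices(A, B, N)
    Qbar, Rbar = np.kron(np.eye(N), Q), np.kron(np.eye(N), R)
    H = Gamma.T @ Qbar @ Gamma + Rbar
    g = Gamma.T @ Qbar @ (np.tile(x_ref, N) - Phi @ x)
    return np.linalg.solve(H, g)[:nu]

# Closed loop on a double integrator (dt = 0.1 s): drive position to 1.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q, R = np.diag([10.0, 1.0]), np.array([[0.01]])
x, x_ref = np.zeros(2), np.array([1.0, 0.0])
for _ in range(150):
    u = mpc_first_input(A, B, x, x_ref, N=20, Q=Q, R=R)
    x = A @ x + B @ u
```

Adding actuator limits turns the closed-form solve into the constrained quadratic program that industrial MPC implementations solve online at every step.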

  10. Long term radiocesium contamination of fruit trees following the Chernobyl accident

    SciTech Connect

    Antonopoulos-Domis, M.; Clouvas, A.; Gagianas, A.

    1996-12-01

Radiocesium contamination from the Chernobyl accident in fruits and leaves of various fruit trees was systematically studied from 1990 to 1995 on two agricultural experimentation farms in Northern Greece. The results are discussed in the framework of a previously published model describing the long-term radiocesium contamination mechanism of deciduous fruit trees after a nuclear accident. The results of the present work qualitatively verify the model predictions. 11 refs., 5 figs., 1 tab.

  11. A blind test of the MOIRA lake model for radiocesium for Lake Uruskul, Russia, contaminated by fallout from the Kyshtym accident in 1957.

    PubMed

    Håkanson, L; Sazykina, T

    2001-01-01

    This paper presents results of a model-test carried out within the framework of the COMETES project (EU). The tested model is a new lake model for radiocesium to be used within the MOIRA decision support system (DSS; MOIRA and COMETES are acronyms for EU-projects). This model has previously been validated against independent data from many lakes covering a wide domain of lake characteristics and been demonstrated to yield excellent predictive power (see Håkanson, Modelling Radiocesium in Lakes and Coastal Areas. Kluwer, Dordrecht, 2000, 215 pp). However, the model has not been tested before for cases other than those related to the Chernobyl fallout in 1986, nor for lakes from this part of the world (Southern Urals) and nor for situations with such heavy fallout as this. The aims of this work were: (1) to carry out a blind test of the model for the case of continental Lake Uruskul, heavily contaminated with 90Sr and 137Cs as a result of the Kyshtym radiation accident (29 September 1957) in the Southern Urals, Russia, and (2) if these tests gave satisfactory results to reconstruct the radiocesium dynamics for fish, water and sediments in the lake. Can the model provide meaningful predictions in a situation such as this? The answer is yes, although there are reservations due to the scarcity of reliable empirical data. From the modelling calculations, it may be noted that the maximum levels of 137Cs in fish (here 400 g ww goldfish), water and sediments were about 100,000 Bq/kg ww, 600 Bq/l and 30,000 Bq/kg dw, respectively. The values in fish are comparable to or higher than the levels in fish in the cooling pond of the Chernobyl NPP. The model also predicts an interesting seasonal pattern in 137Cs levels in sediments. There is also a characteristic "three phase" development for the 137Cs levels in fish: first an initial stage when the 137Cs concentrations in fish approach a maximum value, then a phase with relatively short ecological half-lives followed by a final

  12. Reconstruction of (131)I radioactive contamination in Ukraine caused by the Chernobyl accident using atmospheric transport modelling.

    PubMed

    Talerko, Nikolai

    2005-01-01

    The evaluation of (131)I air and ground contamination field formation in the territory of Ukraine was made using the model of atmospheric transport LEDI (Lagrangian-Eulerian DIffusion model). The (131)I atmospheric transport over the territory of Ukraine was simulated during the first 12 days after the accident (from 26 April to 7 May 1986) using real aerological information and rain measurement network data. The airborne (131)I concentration and ground deposition fields were calculated as the database for subsequent thyroid dose reconstruction for inhabitants of radioactive contaminated regions. The small-scale deposition field variability is assessed using data of (137)Cs detailed measurements in the territory of Ukraine. The obtained results are compared with available data of radioiodine daily deposition measurements made at the network of meteorological stations in Ukraine and data of the assessments of (131)I soil contamination obtained from the (129)I measurements. PMID:16024139

  13. Extension of SCDAP/RELAP5 severe accident models to non-LWR reactor designs. [Non-Light Water Reactors

    SciTech Connect

Allison, C.M.; Siefken, L.J.; Hagrman, D.L.; Cheng, T.C.

    1990-01-01

The SCDAP/RELAP5 code has been extended to calculate the core melt progression and fission product transport that may occur in non-LWR reactors during severe accidents. The code's approach of connecting together, according to user instructions, all of the parts that constitute a reactor system gives it the capability to model a wide range of reactor designs. The models added to the code for analyses of non-LWR reactors include: (a) oxidation and melt progression in cores with U-Al based fuel elements, (b) movement of liquefied material from its original place in the core to other parts of the reactor system, such as the outlet piping, (c) fission product release from U-Al based fuel and zinc release from aluminum, and (d) fission product release from a pool of molten core material. 9 refs., 5 figs.

  14. Radiological dose assessment for bounding accident scenarios at the Critical Experiment Facility, TA-18, Los Alamos National Laboratory

    SciTech Connect

    1991-09-01

A computer modeling code, CRIT8, was written to allow prediction of the radiological doses to workers and members of the public resulting from postulated maximum-effect accidents. The code accounts for the relationship of the initial parent radionuclide inventory at the time of the accident to the growth of radioactive daughter products, and considers the atmospheric conditions at the time of release. The code then calculates a dose at chosen receptor locations for the sum of radionuclides produced as a result of the accident. Both criticality and non-criticality accidents are examined.
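The two ingredients described above, ingrowth of radioactive daughters and a receptor dose from the released mix, might be sketched as follows. CRIT8 itself is not public, so all function names, signatures, and numbers here are illustrative assumptions:

```python
import math

def daughter_atoms(n_parent0, lam_parent, lam_daughter, t):
    """Two-member Bateman solution: number of daughter atoms grown in from
    an initially pure parent inventory after time t (decay constants in 1/s)."""
    return (n_parent0 * lam_parent / (lam_daughter - lam_parent)
            * (math.exp(-lam_parent * t) - math.exp(-lam_daughter * t)))

def receptor_dose_sv(released_bq, chi_over_q, breathing_rate, dcf_sv_per_bq):
    """Inhalation dose at one receptor: released activity (Bq) x atmospheric
    dilution chi/Q (s/m^3) x breathing rate (m^3/s) x dose coefficient
    (Sv/Bq), summed over the nuclides produced by the accident."""
    return sum(q * chi_over_q * breathing_rate * d
               for q, d in zip(released_bq, dcf_sv_per_bq))
```

The daughter inventory starts at zero, builds to a maximum, and then decays away, which is why evaluating the inventory at the release time matters for the dose sum.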

  15. RFI modeling and prediction approach for SATOP applications: RFI prediction models

    NASA Astrophysics Data System (ADS)

    Nguyen, Tien M.; Tran, Hien T.; Wang, Zhonghai; Coons, Amanda; Nguyen, Charles C.; Lane, Steven A.; Pham, Khanh D.; Chen, Genshe; Wang, Gang

    2016-05-01

This paper describes a technical approach for the development of RFI prediction models using a carrier synchronization loop when calculating bit or carrier SNR degradation due to interference, for (i) detecting narrow-band and wideband RFI signals, and (ii) estimating and predicting the behavior of the RFI signals. The paper presents analytical and simulation models and provides both analytical and simulation results on the performance of USB (Unified S-Band) waveforms in the presence of narrow-band and wideband RFI signals. The models presented in this paper will allow future USB command systems to detect the RFI presence, estimate the RFI characteristics, and predict the RFI behavior in real time for accurate assessment of the impacts of RFI on the command Bit Error Rate (BER) performance. The command BER degradation model presented in this paper also allows the ground system operator to estimate the optimum transmitted SNR to maintain a required command BER level in the presence of both friendly and unfriendly RFI sources.
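The flavor of such a degradation model can be conveyed with a first-order sketch in which the RFI is approximated as extra Gaussian noise. This is a deliberate simplification: the paper's models work through the carrier synchronization loop, which the sketch ignores:

```python
import math

def bpsk_ber(ebn0_linear):
    """Ideal coherent BPSK bit error rate: BER = 0.5 * erfc(sqrt(Eb/N0))."""
    return 0.5 * math.erfc(math.sqrt(ebn0_linear))

def ebn0_with_rfi(ebn0_linear, i0_over_n0):
    """Effective Eb/(N0 + I0) when wideband RFI of spectral density I0
    is treated as additional white noise."""
    return ebn0_linear / (1.0 + i0_over_n0)

clean = bpsk_ber(10 ** (9.6 / 10))                       # no interference
jammed = bpsk_ber(ebn0_with_rfi(10 ** (9.6 / 10), 1.0))  # RFI equal to noise
```

Inverting the relation, increasing the transmitted SNR until the degraded BER meets the requirement, is the operator decision the paper's model supports.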

  16. The SAM software system for modeling severe accidents at nuclear power plants equipped with VVER reactors on full-scale and analytic training simulators

    NASA Astrophysics Data System (ADS)

    Osadchaya, D. Yu.; Fuks, R. L.

    2014-04-01

    The architecture of the SAM software package intended for modeling beyond-design-basis accidents at nuclear power plants equipped with VVER reactors evolving into a severe stage with core melting and failure of the reactor pressure vessel is presented. By using the SAM software package it is possible to perform comprehensive modeling of the entire emergency process from the failure initiating event to the stage of severe accident involving meltdown of nuclear fuel, failure of the reactor pressure vessel, and escape of corium onto the concrete basement or into the corium catcher with retention of molten products in it.

  17. Model-based Heart rate prediction during Lokomat walking.

    PubMed

    Koenig, Alexander C; Somaini, Luca; Pulfer, Michael; Holenstein, Thomas; Omlin, Ximena; Wieser, Martin; Riener, Robert

    2009-01-01

We implemented a model for predicting heart rate during Lokomat walking. Using this model, we can predict potential overstressing of the patient and adapt the physical load accordingly. Current models for treadmill-based heart rate control neglect the fact that the interaction torques between the Lokomat and the human can have a significant effect on heart rate. Tests with five healthy subjects led to a sixth-order model with walking speed and power expenditure as inputs and heart rate prediction as output. Recordings from five different subjects were used for model validation. Future work includes model identification and predictive heart rate control with spinal cord injured and stroke patients. PMID:19963765
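A generic sketch of the kind of model described, a discrete-time state-space system with walking speed and power expenditure as inputs and heart rate as output, is shown below. The matrices are second-order placeholders, not the identified sixth-order model:

```python
import numpy as np

def simulate(A, B, C, x0, inputs):
    """Simulate y_k = C x_k, x_{k+1} = A x_k + B u_k; return the outputs."""
    x, ys = np.array(x0, dtype=float), []
    for u in inputs:
        ys.append(float(C @ x))
        x = A @ x + B @ np.asarray(u, dtype=float)
    return np.array(ys)

# Placeholder example: heart-rate deviation responding to a step in the two
# inputs (speed, power), settling to a steady elevated value.
A = np.array([[0.9, 0.05], [0.0, 0.8]])
B = np.array([[0.02, 0.01], [0.05, 0.10]])
C = np.array([5.0, 1.0])
hr = simulate(A, B, C, [0.0, 0.0], [(1.0, 1.0)] * 200)
```

A predictive controller would run such a model forward over a short horizon and reduce the commanded load whenever the predicted heart rate exceeds a safe bound.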

  18. Optimization approaches to nonlinear model predictive control

    SciTech Connect

Biegler, L.T. (Dept. of Chemical Engineering); Rawlings, J.B. (Dept. of Chemical Engineering)

    1991-01-01

    With the development of sophisticated methods for nonlinear programming and powerful computer hardware, it now becomes useful and efficient to formulate and solve nonlinear process control problems through on-line optimization methods. This paper explores and reviews control techniques based on repeated solution of nonlinear programming (NLP) problems. Here several advantages present themselves. These include minimization of readily quantifiable objectives, coordinated and accurate handling of process nonlinearities and interactions, and systematic ways of dealing with process constraints. We motivate this NLP-based approach with small nonlinear examples and present a basic algorithm for optimization-based process control. As can be seen this approach is a straightforward extension of popular model-predictive controllers (MPCs) that are used for linear systems. The statement of the basic algorithm raises a number of questions regarding stability and robustness of the method, efficiency of the control calculations, incorporation of feedback into the controller and reliable ways of handling process constraints. Each of these will be treated through analysis and/or modification of the basic algorithm. To highlight and support this discussion, several examples are presented and key results are examined and further developed. 74 refs., 11 figs.

  19. Predictive models for moving contact line flows

    NASA Technical Reports Server (NTRS)

    Rame, Enrique; Garoff, Stephen

    2003-01-01

Modeling flows with moving contact lines poses the formidable challenge that the usual assumptions of a Newtonian fluid and the no-slip condition give rise to a well-known singularity. This singularity prevents one from satisfying the contact angle condition to compute the shape of the fluid-fluid interface, a crucial calculation without which design parameters such as the pressure drop needed to move an immiscible two-fluid system through a solid matrix cannot be evaluated. Some progress has been made for low-capillary-number spreading flows. Combining experimental measurements of fluid-fluid interfaces very near the moving contact line with an analytical expression for the interface shape, we can determine a parameter that forms a boundary condition for the macroscopic interface shape when Ca is much less than 1. This parameter, which plays the role of an "apparent" or macroscopic dynamic contact angle, is shown by the theory to depend on the system geometry through the macroscopic length scale. This theoretically established dependence on geometry allows the parameter to be "transferable" from the geometry of the measurement to any other geometry involving the same material system. Unfortunately this prediction of the theory cannot be tested on Earth.

  20. Towards a Predictive Model of Elastomer seals

    NASA Astrophysics Data System (ADS)

    Khawaja, Musab; Mostofi, Arash; Sutton, Adrian; Stevens, John

    2014-03-01

    Elastomers are a highly versatile class of material. Their diversity of technological application is enabled by the fact that their properties may be tuned through manipulation of their constituent building blocks at multiple length-scales. These scales range from the chemical groups within individual monomers, to the overall morphology on the mesoscale, as well as through compounding with other materials. An important use of elastomers is in seals for mechanical components. Ideally, such seals should act as impermeable barriers to gases and liquids, preventing contamination and damage to equipment. Elastomer failure, therefore, can be extremely costly and is a matter of great importance to industry. The question at the centre of this work relates to the failure of elastomer seals via explosive decompression. This mechanism is a result of permeation of gas molecules through the seals at high pressures, and their subsequent rapid egress upon removal of the elevated pressures. The goal is to develop a model to better understand and predict the structure, porosity and transport of molecular species through elastomer seals, with a view to elucidating general design principles that will inform the development of higher performance materials.

  1. Thermal barrier coating life prediction model

    NASA Technical Reports Server (NTRS)

    Hillery, R. V.; Pilsner, B. H.

    1985-01-01

    This is the first report of the first phase of a 3-year program. Its objectives are to determine the predominant modes of degradation of a plasma sprayed thermal barrier coating system, then to develop and verify life prediction models accounting for these degradation modes. The first task (Task I) is to determine the major failure mechanisms. Presently, bond coat oxidation and bond coat creep are being evaluated as potential TBC failure mechanisms. The baseline TBC system consists of an air plasma sprayed ZrO2-Y2O3 top coat, a low pressure plasma sprayed NiCrAlY bond coat, and a Rene'80 substrate. Pre-exposures in air and argon combined with thermal cycle tests in air and argon are being utilized to evaluate bond coat oxidation as a failure mechanism. Unexpectedly, the specimens pre-exposed in argon failed before the specimens pre-exposed in air in subsequent thermal cycles testing in air. Four bond coats with different creep strengths are being utilized to evaluate the effect of bond coat creep on TBC degradation. These bond coats received an aluminide overcoat prior to application of the top coat to reduce the differences in bond coat oxidation behavior. Thermal cycle testing has been initiated. Methods have been selected for measuring tensile strength, Poisson's ratio, dynamic modulus and coefficient of thermal expansion both of the bond coat and top coat layers.

  2. Predictability of the Indian Ocean Dipole in the coupled models

    NASA Astrophysics Data System (ADS)

    Liu, Huafeng; Tang, Youmin; Chen, Dake; Lian, Tao

    2016-06-01

In this study, the Indian Ocean Dipole (IOD) predictability, measured by the Indian Dipole Mode Index (DMI), is comprehensively examined at the seasonal time scale, including its actual prediction skill and potential predictability, using the ENSEMBLES multi-model ensembles and the recently developed information-based theoretical framework of predictability. It was found that all model predictions have useful skill, normally defined as an anomaly correlation coefficient larger than 0.5, only at around 2-3 month leads. This is mainly because there are more false alarms in predictions as lead time increases. The DMI predictability has significant seasonal variation, and predictions whose target seasons are boreal summer (JJA) and autumn (SON) are more reliable than those for other seasons. All of the models fail to predict the IOD onset before May and suffer from the winter (DJF) predictability barrier. The potential predictability study indicates that, with model development and improved initialization, the prediction of IOD onset is likely to improve, but the winter barrier cannot be overcome. The IOD predictability also has decadal variation, with high skill during the 1960s and the early 1990s, and low skill during the early 1970s and early 1980s, which is very consistent with the potential predictability. The main factors controlling the IOD predictability, including its seasonal and decadal variations, are also analyzed in this study.
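The skill threshold mentioned above, an anomaly correlation coefficient (ACC) larger than 0.5, is computed from forecast and observed anomalies relative to climatology; a minimal sketch with invented numbers:

```python
import numpy as np

def anomaly_correlation(forecast, observed, climatology):
    """Centered anomaly correlation coefficient between a forecast and
    observations, with anomalies taken relative to climatology."""
    f = np.asarray(forecast) - np.asarray(climatology)
    o = np.asarray(observed) - np.asarray(climatology)
    return float((f * o).sum() / np.sqrt((f**2).sum() * (o**2).sum()))

# Toy SST fields (degrees C): a perfect forecast scores 1, a forecast with
# exactly opposite anomalies scores -1.
clim = np.array([26.0, 27.0, 28.0, 27.5])
obs = clim + np.array([0.5, -0.3, 0.2, -0.4])
good = anomaly_correlation(obs, obs, clim)
bad = anomaly_correlation(clim + np.array([-0.5, 0.3, -0.2, 0.4]), obs, clim)
```

A lead time is judged "useful" in studies like this one when the ACC of the ensemble forecast against verification data stays above 0.5.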

  3. Simulations of the transport and deposition of 137Cs over Europe after the Chernobyl Nuclear Power Plant accident: influence of varying emission-altitude and model horizontal and vertical resolution

    NASA Astrophysics Data System (ADS)

    Evangeliou, N.; Balkanski, Y.; Cozic, A.; Møller, A. P.

    2013-07-01

    The coupled model LMDZORINCA has been used to simulate the transport and the wet and dry deposition of the radioactive tracer 137Cs after accidental releases. Two horizontal resolutions were used in the model: a regular grid of 2.5° × 1.27°, and the same grid stretched over Europe to reach a resolution of 0.66° × 0.51°. The vertical dimension is represented with two different resolutions, 19 and 39 levels respectively, extending up to the mesopause. Four different simulations are presented in this work: the first uses the regular grid with 19 vertical levels, assuming that the emissions took place at the surface (RG19L(S)); the second also uses the regular grid with 19 vertical levels but realistic source injection heights (RG19L); the third uses the regular grid with 39 vertical levels (RG39L); and the fourth uses the stretched grid with 19 vertical levels (Z19L). The model is validated against the Chernobyl accident, which occurred in Ukraine (ex-USSR) on 26 April 1986, using the emission inventory from Brandt et al. (2002). This accident has been widely studied since 1986, and a large database has been created containing measurements of atmospheric activity concentration and total cumulative deposition of 137Cs from most of the European countries. According to the results, the model predicted the transport and deposition of the radioactive tracer efficiently and accurately, presenting low biases in activity concentrations and deposition inventories, despite the large uncertainties in the intensity of the source released. The best agreement with observations was obtained using the highest horizontal resolution of the model (Z19L run). The model managed to predict the radioactive contamination in most of the European regions (similar to De Cort et al., 1998), and also the arrival times of the radioactive fallout. With regard to the vertical resolution, the largest biases were obtained for

  4. Further development of a predictive pitting model for gears: Improvements in the life prediction analysis

    NASA Astrophysics Data System (ADS)

    Blake, J. W.; Draper, C. F.

    1994-04-01

    A predictive pitting model for gear design applications was recently developed by Blake and Cheng. Life estimates were based on predicting the growth of surface-breaking cracks leading to pit formation. While trends predicted by the model reflected observed behavior, estimated lives were lower than expected. The crack growth model has been improved by modifying the original shear-driven, two-dimensional propagation model to reflect three-dimensional cracks driven by both shear and lubricant pressure effects. Resistance to crack growth due to friction between the crack faces has also been considered. These changes have led to a net increase in predicted lives, which better reflects observed pitting behavior.

  5. Evaluation of performance of predictive models for deoxynivalenol in wheat.

    PubMed

    van der Fels-Klerx, H J

    2014-02-01

    The aim of this study was to evaluate the performance of two predictive models for deoxynivalenol contamination of wheat at harvest in the Netherlands, including the use of weather forecast data and external model validation. Data were collected in a different year and from different wheat fields than the data used for model development. The two models were run for six preset scenarios, varying in the period for which weather forecast data were used, from zero days (historical data only) to a 13-day period around wheat flowering. Model predictions using forecast weather data were compared to those using historical data. Furthermore, model predictions using historical weather data were evaluated against observed deoxynivalenol contamination of the wheat fields. Results showed that the use of weather forecast data rather than observed data only slightly influenced model predictions. The percentage of correct model predictions, given a threshold of 1,250 μg/kg (the legal limit in the European Union), was about 95% for the two models. However, only three samples had a deoxynivalenol concentration above this threshold, and the models were not able to predict these samples correctly. It was concluded that two-week weather forecast data can reliably be used in descriptive models for deoxynivalenol contamination of wheat, resulting in more timely model predictions. The two models are able to predict lower deoxynivalenol contamination correctly, but model performance in situations with high deoxynivalenol contamination needs to be further validated. This will require years with environmental conditions conducive to deoxynivalenol contamination of wheat.
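The "percentage of correct predictions" metric used above classifies each sample against the 1,250 μg/kg legal limit and counts how often prediction and observation fall on the same side. A minimal sketch (the function name is an assumption, not from the paper):

```python
def percent_correct(pred, obs, limit=1250.0):
    """Percentage of samples where the predicted and observed deoxynivalenol
    concentrations (μg/kg) fall on the same side of the legal limit."""
    agree = sum((p > limit) == (o > limit) for p, o in zip(pred, obs))
    return 100.0 * agree / len(pred)
```

Note the weakness the abstract points out: with very few samples above the limit, a model can score ~95% while still missing every exceedance.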

  6. [Study of prediction models for oil thickness based on spectral curve].

    PubMed

    Sun, Peng; Song, Mei-Ping; An, Ju-Bai

    2013-07-01

    Nowadays, oil spill accidents at sea occur frequently. Estimating the amount of spilled oil is a practical problem, helpful for subsequent processing and loss assessment. With the rapid development of hyperspectral remote sensing technology, estimating the oil thickness has become possible. First, a series of oil thicknesses were tested with the AvaSpec spectrometer to obtain their corresponding spectral curves. Then the characteristics of the spectral curves were extracted to analyze their relationship with the oil thickness. The study shows that the oil thickness is strongly correlated with variables based on hyperspectral positions, such as R(g) and R(o), and with vegetation indexes such as RDVI, TVI and Haboudane. Curve fitting, a BP neural network and an SVD iteration method were chosen to build the prediction models for oil thickness. Finally, an analysis and evaluation of each estimation model is provided. PMID:24059194
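Of the three approaches mentioned, curve fitting is the simplest to illustrate: regress measured thickness on a spectral feature. The calibration numbers below are invented for illustration only; the paper's actual features (R(g), R(o), RDVI, etc.) and data are not reproduced here.

```python
import numpy as np

# Hypothetical calibration data: one spectral index vs. measured oil thickness (μm)
index = np.array([0.10, 0.25, 0.40, 0.55, 0.70])
thickness = np.array([12.0, 35.0, 61.0, 90.0, 118.0])

# Quadratic least-squares curve fit, one of the three estimation approaches named
coeffs = np.polyfit(index, thickness, deg=2)
predict = np.poly1d(coeffs)  # callable: thickness estimate for a new index value
```

A fitted polynomial like this serves as the baseline against which the BP neural network and SVD iteration models would be evaluated.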

  7. Allostasis: a model of predictive regulation.

    PubMed

    Sterling, Peter

    2012-04-12

    The premise of the standard regulatory model, "homeostasis", is flawed: the goal of regulation is not to preserve constancy of the internal milieu. Rather, it is to continually adjust the milieu to promote survival and reproduction. Regulatory mechanisms need to be efficient, but homeostasis (error-correction by feedback) is inherently inefficient. Thus, although feedbacks are certainly ubiquitous, they could not possibly serve as the primary regulatory mechanism. A newer model, "allostasis", proposes that efficient regulation requires anticipating needs and preparing to satisfy them before they arise. The advantages: (i) errors are reduced in magnitude and frequency; (ii) response capacities of different components are matched -- to prevent bottlenecks and reduce safety factors; (iii) resources are shared between systems to minimize reserve capacities; (iv) errors are remembered and used to reduce future errors. This regulatory strategy requires a dedicated organ, the brain. The brain tracks multitudinous variables and integrates their values with prior knowledge to predict needs and set priorities. The brain coordinates effectors to mobilize resources from modest bodily stores and enforces a system of flexible trade-offs: from each organ according to its ability, to each organ according to its need. The brain also helps regulate the internal milieu by governing anticipatory behavior. Thus, an animal conserves energy by moving to a warmer place - before it cools, and it conserves salt and water by moving to a cooler one before it sweats. The behavioral strategy requires continuously updating a set of specific "shopping lists" that document the growing need for each key component (warmth, food, salt, water). These appetites funnel into a common pathway that employs a "stick" to drive the organism toward filling the need, plus a "carrot" to relax the organism when the need is satisfied. 
The stick corresponds broadly to the sense of anxiety, and the carrot broadly to

  8. The myth of science-based predictive modeling.

    SciTech Connect

    Hemez, F. M.

    2004-01-01

    A key aspect of science-based predictive modeling is the assessment of prediction credibility. This publication argues that the credibility of a family of models and their predictions must combine three components: (1) the fidelity of predictions to test data; (2) the robustness of predictions to variability, uncertainty, and lack-of-knowledge; and (3) the prediction accuracy of models in cases where measurements are not available. Unfortunately, these three objectives are antagonistic. A recently published Theorem that demonstrates the irrevocable trade-offs between fidelity-to-data, robustness-to-uncertainty, and confidence in prediction is summarized. High-fidelity models cannot be made increasingly robust to uncertainty and lack-of-knowledge. Similarly, robustness-to-uncertainty can only be improved at the cost of reducing the confidence in prediction. The concept of confidence in prediction relies on a metric for total uncertainty, capable of aggregating different representations of uncertainty (probabilistic or not). The discussion is illustrated with an engineering application where a family of models is developed to predict the acceleration levels obtained when impacts of varying levels propagate through layers of crushable hyper-foam material of varying thicknesses. Convex modeling is invoked to represent a severe lack-of-knowledge about the constitutive material behavior. The analysis produces intervals of performance metrics from which the total uncertainty and confidence levels are estimated. Finally, performance, robustness and confidence are extrapolated throughout the validation domain to assess the predictive power of the family of models away from tested configurations.

  9. Supercomputer predictive modeling for ensuring space flight safety

    NASA Astrophysics Data System (ADS)

    Betelin, V. B.; Smirnov, N. N.; Nikitin, V. F.

    2015-04-01

    Development of new types of rocket engines, as well as upgrading of existing engines, requires computer-aided design and mathematical tools for supercomputer modeling of all the basic processes of mixing, ignition, combustion and outflow through the nozzle. Even small upgrades and changes introduced in existing rocket engines without proper simulation have caused severe accidents at launch sites, as witnessed recently. The paper presents the results of computer code development, verification and validation, making it possible to simulate unsteady processes of ignition and combustion in rocket engines.

  10. Required Collaborative Work in Online Courses: A Predictive Modeling Approach

    ERIC Educational Resources Information Center

    Smith, Marlene A.; Kellogg, Deborah L.

    2015-01-01

    This article describes a predictive model that assesses whether a student will have greater perceived learning in group assignments or in individual work. The model produces correct classifications 87.5% of the time. The research is notable in that it is the first in the education literature to adopt a predictive modeling methodology using data…

  11. Autonomous formation flight of helicopters: Model predictive control approach

    NASA Astrophysics Data System (ADS)

    Chung, Hoam

    Formation flight is the primary movement technique for teams of helicopters. However, the potential for accidents is greatly increased when helicopter teams are required to fly in tight formations and under harsh conditions. This dissertation proposes that the automation of helicopter formations is a realistic solution capable of alleviating risks. Helicopter formation flight operations in battlefield situations are highly dynamic and dangerous, and, therefore, we maintain that both a high-level formation management system and a distributed coordinated control algorithm should be implemented to help ensure safe formations. The starting point for safe autonomous formation flight is to design a distributed control law attenuating external disturbances coming into a formation, so that each vehicle can safely maintain sufficient clearance between itself and all other vehicles. While conventional methods are limited to homogeneous formations, our decentralized model predictive control (MPC) approach allows for heterogeneity in a formation. In order to avoid the conservative nature inherent in distributed MPC algorithms, we begin by designing a stable MPC for individual vehicles, and then introduce carefully designed inter-agent coupling terms in a performance index. Thus the proposed algorithm works in a decentralized manner, and can be applied to the problem of helicopter formations comprised of heterogeneous vehicles. Individual vehicles in a team may be confronted by various emerging situations that will require the capability for in-flight reconfiguration. We propose the concept of a formation manager to manage separation, join, and synchronization of flight course changes. The formation manager accepts an operator's commands, information from neighboring vehicles, and its own vehicle states. Inside the formation manager, there are multiple modes and complex mode switchings represented as a finite state machine (FSM). Based on the current mode and collected

  12. Predictable Components of ENSO Evolution in Real-time Multi-Model Predictions

    PubMed Central

    Zheng, Zhihai; Hu, Zeng-Zhen; L’Heureux, Michelle

    2016-01-01

    The most predictable components of the El Niño-Southern Oscillation (ENSO) evolution in real-time multi-model predictions are identified by applying an empirical orthogonal function analysis of the model data that maximizes the signal-to-noise ratio (MSN EOF). The normalized Niño3.4 index is analyzed for nine 3-month overlapping seasons. The first most predictable component (MSN EOF1) is the decaying phase of ENSO during the Northern Hemisphere spring, followed by persistence through autumn and winter. The second most predictable component of ENSO evolution, with lower prediction skill and smaller explained variance than MSN EOF1, corresponds to growth during spring and then persistence in summer and autumn. This result suggests that the decay phase of ENSO is more predictable than the growth phase. Also, the most predictable components and the forecast skills in dynamical and statistical models are similar overall, with some differences arising for spring season initial conditions. Finally, the reconstructed predictions, using only the first two MSN components, show higher skill than the models' raw predictions. This method can therefore be used as a diagnostic for model comparison and development, and it can provide a new perspective on the most predictable components of ENSO. PMID:27775016

  13. Detailed source term estimation of the atmospheric release for the Fukushima Daiichi Nuclear Power Station accident by coupling simulations of an atmospheric dispersion model with an improved deposition scheme and oceanic dispersion model

    NASA Astrophysics Data System (ADS)

    Katata, G.; Chino, M.; Kobayashi, T.; Terada, H.; Ota, M.; Nagai, H.; Kajino, M.; Draxler, R.; Hort, M. C.; Malo, A.; Torii, T.; Sanada, Y.

    2015-01-01

    Temporal variations in the amount of radionuclides released into the atmosphere during the Fukushima Daiichi Nuclear Power Station (FNPS1) accident and their atmospheric and marine dispersion are essential to evaluate the environmental impacts and resultant radiological doses to the public. In this paper, we estimate the detailed atmospheric releases during the accident using a reverse estimation method which calculates the release rates of radionuclides by comparing measurements of air concentration of a radionuclide or its dose rate in the environment with the ones calculated by atmospheric and oceanic transport, dispersion and deposition models. The atmospheric and oceanic models used are WSPEEDI-II (Worldwide version of System for Prediction of Environmental Emergency Dose Information) and SEA-GEARN-FDM (Finite difference oceanic dispersion model), both developed by the authors. A sophisticated deposition scheme, which deals with dry and fog-water depositions, cloud condensation nuclei (CCN) activation, and subsequent wet scavenging due to mixed-phase cloud microphysics (in-cloud scavenging) for radioactive iodine gas (I2 and CH3I) and other particles (CsI, Cs, and Te), was incorporated into WSPEEDI-II to improve the surface deposition calculations. The results revealed that the major releases of radionuclides due to the FNPS1 accident occurred in the following periods during March 2011: the afternoon of 12 March due to the wet venting and hydrogen explosion at Unit 1, midnight of 14 March when the SRV (safety relief valve) was opened three times at Unit 2, the morning and night of 15 March, and the morning of 16 March. According to the simulation results, the highest radioactive contamination areas around FNPS1 were created from 15 to 16 March by complicated interactions among rainfall, plume movements, and the temporal variation of release rates. The simulation by WSPEEDI-II using the new source term reproduced the local and regional patterns of cumulative

  14. From Predictive Models to Instructional Policies

    ERIC Educational Resources Information Center

    Rollinson, Joseph; Brunskill, Emma

    2015-01-01

    At their core, Intelligent Tutoring Systems consist of a student model and a policy. The student model captures the state of the student and the policy uses the student model to individualize instruction. Policies require different properties from the student model. For example, a mastery threshold policy requires the student model to have a way…

  15. A SCOPING STUDY: Development of Probabilistic Risk Assessment Models for Reactivity Insertion Accidents During Shutdown In U.S. Commercial Light Water Reactors

    SciTech Connect

    S. Khericha

    2011-06-01

    This report documents a scoping study to develop generic simplified fuel damage risk models for quantitative analysis of inadvertent reactivity insertion events during shutdown (SD) in light water pressurized and boiling water reactors. In the past, nuclear fuel reactivity accidents have been analyzed both deterministically and probabilistically for at-power and SD operations of nuclear power plants (NPPs). Since then, many NPPs have had power up-rates and longer refueling intervals, which resulted in fuel configurations that may respond differently (in an undesirable way) to reactivity accidents. Also, as shown in a recent event, inadvertent operator actions can cause a potential nuclear fuel reactivity insertion accident during SD operations. Such inadvertent operator actions are likely to be plant- and operation-state specific and could lead to accident sequences. This study is an outcome of the concern that arose after the 2008 event at Dresden Unit 3, in which three control rods were inadvertently withdrawn from the reactor without the knowledge of the main control room operator. The purpose of this Standardized Plant Analysis Risk (SPAR) model development project is to develop simplified SPAR models that can be used by staff analysts to perform risk analyses of operating events and/or conditions occurring during SD operation. These types of accident scenarios are dominated by operator actions (e.g., misalignment of valves, failure to follow procedures, and errors of commission). Human error probabilities specific to this model were assessed using the methodology developed for SPAR model human error evaluations. The event trees, fault trees, basic event data, and data sources for the model are provided in the report. The end state is defined as the reactor becoming critical.
The scoping study includes a brief literature search/review of historical events, developments of

  16. Quantitative Predictive Models for Systemic Toxicity (SOT)

    EPA Science Inventory

    Models to identify systemic and specific target organ toxicity were developed to help transition the field of toxicology towards computational models. By leveraging multiple data sources to incorporate read-across and machine learning approaches, a quantitative model of systemic ...

  17. Radiation accidents

    SciTech Connect

    Saenger, E.L.

    1986-09-01

    It is essential that emergency physicians understand ways to manage patients contaminated by radioactive materials and/or exposed to external radiation sources. Contamination accidents require careful surveys to identify the metabolic pathway of the radionuclides to guide prognosis and treatment. The level of treatment required will depend on careful surveys and meticulous decontamination. There is no specific therapy for the acute radiation syndrome. Prophylactic antibiotics are desirable. For severely exposed patients, treatment is similar to the supportive care given to patients undergoing organ transplantation. For high-dose extremity injury, no methods have been developed to reverse the fibrosing endarteritis that eventually leads to tissue death so frequently found with this type of injury. Although the Three Mile Island episode of March 1979 created tremendous public concern, there were no radiation injuries. The contamination outside the reactor building and the release of radioiodine were negligible. The accidental fuel element meltdown at Chernobyl, USSR, resulted in many cases of acute radiation syndrome. More than 100,000 people were exposed to high levels of radioactive fallout. The general principles outlined here are applicable to accidents of that degree of severity.

  18. Radiation accidents.

    PubMed

    Saenger, E L

    1986-09-01

    It is essential that emergency physicians understand ways to manage patients contaminated by radioactive materials and/or exposed to external radiation sources. Contamination accidents require careful surveys to identify the metabolic pathway of the radionuclides to guide prognosis and treatment. The level of treatment required will depend on careful surveys and meticulous decontamination. There is no specific therapy for the acute radiation syndrome. Prophylactic antibiotics are desirable. For severely exposed patients, treatment is similar to the supportive care given to patients undergoing organ transplantation. For high-dose extremity injury, no methods have been developed to reverse the fibrosing endarteritis that eventually leads to tissue death so frequently found with this type of injury. Although the Three Mile Island episode of March 1979 created tremendous public concern, there were no radiation injuries. The contamination outside the reactor building and the release of radioiodine were negligible. The accidental fuel element meltdown at Chernobyl, USSR, resulted in many cases of acute radiation syndrome. More than 100,000 people were exposed to high levels of radioactive fallout. The general principles outlined here are applicable to accidents of that degree of severity. PMID:3526994

  19. The Complexity of Developmental Predictions from Dual Process Models

    ERIC Educational Resources Information Center

    Stanovich, Keith E.; West, Richard F.; Toplak, Maggie E.

    2011-01-01

    Drawing developmental predictions from dual-process theories is more complex than is commonly realized. Overly simplified predictions drawn from such models may lead to premature rejection of the dual process approach as one of many tools for understanding cognitive development. Misleading predictions can be avoided by paying attention to several…

  20. Sweat loss prediction using a multi-model approach

    NASA Astrophysics Data System (ADS)

    Xu, Xiaojiang; Santee, William R.

    2011-07-01

    A new multi-model approach (MMA) for sweat loss prediction is proposed to improve prediction accuracy. MMA was computed as the average of sweat loss predicted by two existing thermoregulation models: i.e., the rational model SCENARIO and the empirical model Heat Strain Decision Aid (HSDA). Three independent physiological datasets, a total of 44 trials, were used to compare predictions by MMA, SCENARIO, and HSDA. The observed sweat losses were collected under different combinations of uniform ensembles, environmental conditions (15-40°C, RH 25-75%), and exercise intensities (250-600 W). Root mean square deviation (RMSD), residual plots, and paired t tests were used to compare predictions with observations. Overall, MMA reduced RMSD by 30-39% in comparison with either SCENARIO or HSDA, and increased the prediction accuracy to 66% from 34% or 55%. Of the MMA predictions, 70% fell within the range of mean observed value ± SD, while only 43% of SCENARIO and 50% of HSDA predictions fell within the same range. Paired t tests showed that differences between observations and MMA predictions were not significant, but differences between observations and SCENARIO or HSDA predictions were significantly different for two datasets. Thus, MMA predicted sweat loss more accurately than either of the two single models for the three datasets used. Future work will be to evaluate MMA using additional physiological data to expand the scope of populations and conditions.
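The MMA described above is simply the arithmetic mean of the two single-model predictions, evaluated with RMSD against observations. A minimal sketch (the SCENARIO and HSDA models themselves are not reproduced; the sample numbers are invented):

```python
import numpy as np

def multi_model_average(pred_a, pred_b):
    """Multi-model approach: average the sweat-loss predictions of two models
    (e.g., a rational model and an empirical model)."""
    return (np.asarray(pred_a, float) + np.asarray(pred_b, float)) / 2.0

def rmsd(pred, obs):
    """Root mean square deviation between predictions and observations."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((pred - obs) ** 2)))
```

When one model tends to under-predict and the other to over-predict, their average cancels much of the opposing bias, which is why the MMA can beat both parents on RMSD.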

  1. A windows based mechanistic subsidence prediction model for longwall mining

    SciTech Connect

    Begley, R.; Beheler, P.; Khair, A.W.

    1996-12-31

    The previously developed Mechanistic Subsidence Prediction Model (MSPM) has been incorporated into the graphical interface environment of MS Windows. MSPM has the unique capability of predicting the maximum subsidence, angle of draw and subsidence profile of a longwall panel at various locations in both the transverse and longitudinal orientations. The resulting enhanced model can be operated by individuals with little knowledge of subsidence prediction theories or computer programming experience. In addition, predictions of subsidence can be made in a matter of seconds, in some cases without the need to develop input data files or use the keyboard. The predictions are based upon the following input parameters: panel width, mining height, overburden depth, rock quality designation, and percent hard rock in the immediate roof, main roof and the entire overburden. The enhanced model can easily and instantly compare predictions in a graphical format for one half of the predicted subsidence profile, based upon changes in input parameters, on the same screen. In addition, another screen, available from a pull-down menu, lets the operator compare predictions for entire subsidence profiles. This paper presents the background of the subsidence prediction model and the methodology of the enhanced model development. The paper also presents comparisons of subsidence predictions for several different sets of input parameters, in addition to comparisons of the subsidence predictions with actual field data.

  2. Inference on biological mechanisms using an integrated phenotype prediction model.

    PubMed

    Enomoto, Yumi; Ushijima, Masaru; Miyata, Satoshi; Matsuura, Masaaki; Ohtaki, Megu

    2008-03-01

    We propose a methodology for constructing an integrated phenotype prediction model that accounts for multiple pathways regulating a targeted phenotype. The method uses multiple prediction models, each expressing a particular pattern of gene-to-gene interrelationship, such as epistasis. We also propose a methodology using Gene Ontology annotations to infer a biological mechanism from the integrated phenotype prediction model. To construct the integrated models, we employed multiple logistic regression models using a two-step learning approach to examine a number of patterns of gene-to-gene interrelationships. We first selected individual prediction models with acceptable goodness of fit, and then combined the models. The resulting integrated model predicts phenotype as a logical sum of predicted results from the individual models. We used published microarray data on neuroblastoma from Ohira et al (2005) for illustration, constructing an integrated model to predict prognosis and infer the biological mechanisms controlling prognosis. Although the resulting integrated model comprised a small number of genes compared to a previously reported analysis of these data, the model demonstrated excellent performance, with an error rate of 0.12 in a validation analysis. Gene Ontology analysis suggested that prognosis of patients with neuroblastoma may be influenced by biological processes such as cell growth, G-protein signaling, phosphoinositide-mediated signaling, alcohol metabolism, glycolysis, neurophysiological processes, and catecholamine catabolism. PMID:18578362
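The integrated model's "logical sum" combination can be sketched directly: each individual logistic regression votes, and the integrated prediction is positive if any individual model predicts positive (logical OR). This is a generic illustration of that combination rule; the function names, coefficients, and 0.5 threshold are assumptions, not the paper's fitted models.

```python
import math

def logistic_predict(x, coefs, intercept, threshold=0.5):
    """Binary prediction from a single logistic regression model."""
    z = intercept + sum(c * xi for c, xi in zip(coefs, x))
    p = 1.0 / (1.0 + math.exp(-z))
    return p >= threshold

def integrated_predict(x, models):
    """Integrated phenotype prediction: logical sum (OR) of the individual
    models' predictions. `models` is a list of (coefs, intercept) pairs,
    each capturing one pattern of gene-to-gene interrelationship."""
    return any(logistic_predict(x, coefs, b) for coefs, b in models)
```

The OR combination lets each sub-model specialize in one regulatory pathway: a case is flagged if any pathway-specific model recognizes it.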

  3. Numerical human models for accident research and safety - potentials and limitations.

    PubMed

    Praxl, Norbert; Adamec, Jiri; Muggenthaler, Holger; von Merten, Katja

    2008-01-01

    The method of numerical simulation is frequently used in the area of automotive safety. Recently, numerical models of the human body have been developed for the numerical simulation of occupants. Different approaches in modelling the human body have been used: the finite-element and the multibody technique. Numerical human models representing the two modelling approaches are introduced and the potentials and limitations of these models are discussed.

  4. Internal Flow Thermal/Fluid Modeling of STS-107 Port Wing in Support of the Columbia Accident Investigation Board

    NASA Technical Reports Server (NTRS)

    Sharp, John R.; Kittredge, Ken; Schunk, Richard G.

    2003-01-01

    As part of the aero-thermodynamics team supporting the Columbia Accident Investigation Board (CAB), the Marshall Space Flight Center was asked to perform engineering analyses of internal flows in the port wing. The aero-thermodynamics team was split into internal flow and external flow teams with the support being divided between shorter timeframe engineering methods and more complex computational fluid dynamics. In order to gain a rough order of magnitude type of knowledge of the internal flow in the port wing for various breach locations and sizes (as theorized by the CAB to have caused the Columbia re-entry failure), a bulk venting model was required to input boundary flow rates and pressures to the computational fluid dynamics (CFD) analyses. This paper summarizes the modeling that was done by MSFC in Thermal Desktop. A venting model of the entire Orbiter was constructed in FloCAD based on Rockwell International's flight substantiation analyses and the STS-107 reentry trajectory. Chemical equilibrium air thermodynamic properties were generated for SINDA/FLUINT's fluid property routines from a code provided by Langley Research Center. In parallel, a simplified thermal mathematical model of the port wing, including the Thermal Protection System (TPS), was based on more detailed Shuttle re-entry modeling previously done by the Dryden Flight Research Center. Once the venting model was coupled with the thermal model of the wing structure with chemical equilibrium air properties, various breach scenarios were assessed in support of the aero-thermodynamics team. The construction of the coupled model and results are presented herein.

  5. Modeling of leachable 137Cs in throughfall and stemflow for Japanese forest canopies after Fukushima Daiichi Nuclear Power Plant accident.

    PubMed

    Loffredo, Nicolas; Onda, Yuichi; Kawamori, Ayumi; Kato, Hiroaki

    2014-09-15

    The Fukushima accident dispersed significant amounts of radioactive cesium (Cs) in the landscape. Our research investigated, from June 2011 to November 2013, the mobility of leachable Cs in forest canopies. In particular, (137)Cs and (134)Cs activity concentrations were measured in rainfall, throughfall, and stemflow in broad-leaf and cedar forests in an area located 40 km from the power plant. Leachable (137)Cs loss was modeled by a double exponential (DE) model. This model could not reproduce the variation in activity concentration observed. In order to refine the DE model, the main physical measurable parameters (rainfall intensity, wind velocity, and snowfall occurrence) were assessed, and rainfall was identified as the dominant factor controlling observed variation. A corrective factor was then developed to incorporate rainfall intensity in an improved DE model. With the original DE model, we estimated total (137)Cs loss by leaching from canopies to be 72 ± 4%, 67 ± 4%, and 48 ± 2% of the total plume deposition under mature cedar, young cedar, and broad-leaf forests, respectively. In contrast, with the improved DE model, the total (137)Cs loss by leaching was estimated to be 34 ± 2%, 34 ± 2%, and 16 ± 1% of the total plume deposition under mature cedar, young cedar, and broad-leaf forests, respectively. The improved DE model corresponds better to observed data in the literature. Understanding (137)Cs and (134)Cs forest dynamics is important for forecasting future contamination of forest soils around the FDNPP. It also provides a basis for understanding forest transfers in future potential nuclear disasters.
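A double exponential depletion model, and one way a rainfall corrective factor could enter it, can be sketched as below. The functional form of the correction (scaling the rate constants by 1 + alpha * rainfall) and all parameter values are illustrative assumptions, not the published parameterization.

```python
import math

def de_model(t, a_fast, k_fast, a_slow, k_slow):
    """Double exponential (DE) model: fraction of leachable 137Cs remaining
    in the canopy at time t, as a fast plus a slow depletion component."""
    return a_fast * math.exp(-k_fast * t) + a_slow * math.exp(-k_slow * t)

def de_model_rain(t, rain, a_fast, k_fast, a_slow, k_slow, alpha):
    """Improved DE model sketch: rate constants scaled by a hypothetical
    rainfall-dependent corrective factor f = 1 + alpha * rain."""
    f = 1.0 + alpha * rain
    return a_fast * math.exp(-k_fast * f * t) + a_slow * math.exp(-k_slow * f * t)
```

With alpha > 0, heavier rainfall accelerates depletion, which is the qualitative behavior the corrective factor is meant to capture.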

  6. Predictive modeling and reducing cyclic variability in autoignition engines

    DOEpatents

    Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob

    2016-08-30

    Methods and systems are provided for controlling a vehicle engine to reduce cycle-to-cycle combustion variation. A predictive model is applied to predict cycle-to-cycle combustion behavior of an engine based on observed engine performance variables. Conditions are identified, based on the predicted cycle-to-cycle combustion behavior, that indicate high cycle-to-cycle combustion variation. Corrective measures are then applied to prevent the predicted high cycle-to-cycle combustion variation.

  7. Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?

    NASA Technical Reports Server (NTRS)

    Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander

    2016-01-01

    Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters, and inputs, and MSEP uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs, and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
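    A minimal sketch of the two criteria, assuming a generic squared-bias-plus-model-variance decomposition rather than the paper's full random-effects ANOVA; the data passed in would be hindcasts and ensemble simulations.

```python
def msep_fixed(obs, pred):
    """MSEP for a model with fixed structure, parameters and inputs:
    mean squared difference between predictions and observations."""
    return sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)

def msep_uncertain(obs, pred_ensemble):
    """MSEP averaged over uncertainty in structure, inputs and parameters:
    squared bias of the ensemble-mean prediction plus the model variance
    (spread of the ensemble members around their mean)."""
    n = len(obs)
    means = [sum(preds) / len(preds) for preds in pred_ensemble]
    bias2 = sum((o - m) ** 2 for o, m in zip(obs, means)) / n
    var = sum(sum((p - m) ** 2 for p in preds) / len(preds)
              for preds, m in zip(pred_ensemble, means)) / n
    return bias2 + var
```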

  8. Predicting Career Advancement with Structural Equation Modelling

    ERIC Educational Resources Information Center

    Heimler, Ronald; Rosenberg, Stuart; Morote, Elsa-Sofia

    2012-01-01

    Purpose: The purpose of this paper is to use the authors' prior findings concerning basic employability skills in order to determine which skills best predict career advancement potential. Design/methodology/approach: Utilizing survey responses of human resource managers, the employability skills showing the largest relationships to career…

  9. A Prediction Model of the Capillary Pressure J-Function

    PubMed Central

    Xu, W. S.; Luo, P. Y.; Sun, L.; Lin, N.

    2016-01-01

    The capillary pressure J-function is a dimensionless measure of the capillary pressure of a fluid in a porous medium. The function was derived based on a capillary bundle model. However, the dependence of the J-function on the saturation Sw is not well understood. A prediction model for it is presented based on a capillary pressure model; the resulting J-function prediction model is a power function rather than an exponential or polynomial function. Relative permeability is calculated with the J-function prediction model, resulting in an easier calculation and results that are more representative. PMID:27603701
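    A power-function form J(Sw) = a·Sw^(-b) is linear in log-log space, so it can be fitted by ordinary least squares; this fitting route and the sample data are our own assumptions for illustration, not the paper's derivation.

```python
import math

def fit_power_j(sw, j):
    """Fit J(Sw) = a * Sw**(-b) by ordinary least squares on
    log(J) = log(a) - b*log(Sw)."""
    xs = [math.log(s) for s in sw]
    ys = [math.log(v) for v in j]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - slope * mx)
    return a, -slope  # b is the negated log-log slope

# Synthetic data generated from an exact power law is recovered exactly.
sw = [0.2, 0.4, 0.6, 0.8]
j = [2.0 * s ** -1.5 for s in sw]
a, b = fit_power_j(sw, j)
```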

  10. A model to predict the power output from wind farms

    SciTech Connect

    Landberg, L.

    1997-12-31

    This paper will describe a model that can predict the power output from wind farms. To give examples of input, the model is applied to a wind farm in Texas. The predictions are generated from forecasts from the NGM model of NCEP. These predictions are made valid at individual sites (wind farms) by applying a matrix calculated by the sub-models of WAsP (Wind Atlas Analysis and Application Program). The actual wind farm production is calculated using the Riso PARK model. Because of the preliminary nature of the results, they will not be given here; however, similar results from Europe will be given.

  11. A Prediction Model of the Capillary Pressure J-Function.

    PubMed

    Xu, W S; Luo, P Y; Sun, L; Lin, N

    2016-01-01

    The capillary pressure J-function is a dimensionless measure of the capillary pressure of a fluid in a porous medium. The function was derived based on a capillary bundle model. However, the dependence of the J-function on the saturation Sw is not well understood. A prediction model for it is presented based on a capillary pressure model; the resulting J-function prediction model is a power function rather than an exponential or polynomial function. Relative permeability is calculated with the J-function prediction model, resulting in an easier calculation and results that are more representative. PMID:27603701

  12. Tampa Bay Water Clarity Model (TBWCM): As a Predictive Tool

    EPA Science Inventory

    The Tampa Bay Water Clarity Model was developed as a predictive tool for estimating the impact of changing nutrient loads on water clarity as measured by secchi depth. The model combines a physical mixing model with an irradiance model and nutrient cycling model. A 10 segment bi...

  13. Review of the chronic exposure pathways models in MACCS (MELCOR Accident Consequence Code System) and several other well-known probabilistic risk assessment models

    SciTech Connect

    Tveten, U.

    1990-06-01

    The purpose of this report is to document the results of the work performed by the author in connection with the following task, performed for the US Nuclear Regulatory Commission (USNRC), Office of Nuclear Regulatory Research, Division of Systems Research: MACCS Chronic Exposure Pathway Models: Review the chronic exposure pathway models implemented in the MELCOR Accident Consequence Code System (MACCS) and compare those models to the chronic exposure pathway models implemented in similar codes developed in countries that are members of the OECD. The chronic exposures concerned are via: the terrestrial food pathways, the water pathways, the long-term groundshine pathway, and the inhalation of resuspended radionuclides pathway. The USNRC indicated during discussions of the task that the major effort should be spent on the terrestrial food pathways. There is one chapter for each of the categories of chronic exposure pathways listed above.

  14. Econometric models for predicting confusion crop ratios

    NASA Technical Reports Server (NTRS)

    Umberger, D. E.; Proctor, M. H.; Clark, J. E.; Eisgruber, L. M.; Braschler, C. B. (Principal Investigator)

    1979-01-01

    Results for both the United States and Canada show that econometric models can provide estimates of confusion crop ratios that are more accurate than historical ratios. Whether these models can support the LACIE 90/90 accuracy criterion is uncertain. In the United States, experimenting with additional model formulations could provide improved models in some CRDs, particularly in winter wheat. Improved models may also be possible for the Canadian CDs. The more aggregated province/state models outperformed individual CD/CRD models. This result was expected, partly because acreage statistics are based on sampling procedures, and the sampling precision declines from the province/state to the CD/CRD level. Declining sampling precision and the need to substitute province/state data for the CD/CRD data introduced measurement error into the CD/CRD models.

  15. Evaluation of Fast-Time Wake Vortex Prediction Models

    NASA Technical Reports Server (NTRS)

    Proctor, Fred H.; Hamilton, David W.

    2009-01-01

    Current fast-time wake models are reviewed and three basic types are defined. Predictions from several of the fast-time models are compared. Previous statistical evaluations of the APA-Sarpkaya and D2P fast-time models are discussed. Root Mean Square errors between fast-time model predictions and Lidar wake measurements are examined for a 24 hr period at Denver International Airport. Shortcomings in current methodology for evaluating wake errors are also discussed.

  16. Demonstrating the improvement of predictive maturity of a computational model

    SciTech Connect

    Hemez, Francois M; Unal, Cetin; Atamturktur, Huriye S

    2010-01-01

    We demonstrate an improvement of the predictive capability brought to a non-linear material model using a combination of test data, sensitivity analysis, uncertainty quantification, and calibration. A model that captures increasingly complicated phenomena, such as plasticity, temperature, and strain rate effects, is analyzed. Predictive maturity is defined, here, as the accuracy of the model in predicting multiple Hopkinson bar experiments. A statistical discrepancy quantifies the systematic disagreement (bias) between measurements and predictions. Our hypothesis is that improving the predictive capability of a model should translate into better agreement between measurements and predictions. This agreement, in turn, should lead to a smaller discrepancy. We have recently proposed to use discrepancy and coverage, that is, the extent to which the physical experiments used for calibration populate the regime of applicability of the model, as a basis to define a Predictive Maturity Index (PMI). It was shown that predictive maturity could be improved when additional physical tests are made available to increase coverage of the regime of applicability. This contribution illustrates how the PMI changes as 'better' physics are implemented in the model. The application is the non-linear Preston-Tonks-Wallace (PTW) strength model applied to beryllium metal. We demonstrate that our framework tracks the evolution of maturity of the PTW model. The robustness of the PMI with respect to the selection of the coefficients needed in its definition is also studied.

  17. National Ignition Facility: Impacts of chemical accidents and comparison of chemical and radiological accident approaches

    SciTech Connect

    Lazaro, M.A.; Policastro, A.J.; Rhodes, M.F.

    1996-12-31

    An environmental assessment was conducted to estimate potential impacts or consequences associated with constructing and operating the proposed National Ignition Facility (NIF). The multidisciplinary assessment covered topics ranging from radiological and chemical health and safety to socioeconomic and land-use issues. The impacts of five chemical accidents that could occur at NIF are compared, and the extent of their consequences for workers and off-site populations is discussed. Each of the five accident scenarios was modeled by a chemical release and dispersion model with a toxicological criterion for evaluating potential irreversible human health effects. Results show that most of the chemical release scenarios considered would not prevent the general public from taking protective actions in the event of an accidental release. The two exceptions are the mercury release (equipment failure) scenarios for the conceptual design and the enhanced design. In general, the predicted maximum threat zones are significantly less than the distance to the point of nearest public access.

  18. A predictive model for biomimetic plate type broadband frequency sensor

    NASA Astrophysics Data System (ADS)

    Ahmed, Riaz U.; Banerjee, Sourav

    2016-04-01

    In this work, a predictive model for a bio-inspired broadband frequency sensor is developed. Broadband frequency sensing is essential in many domains of science and technology. A prime example of such a sensor is the human cochlea, which senses a frequency band of 20 Hz to 20 kHz. Developing broadband sensors that adopt the physics of the human cochlea has attracted tremendous interest in recent years. Although a few experimental studies have been reported, a true predictive model for designing such sensors is missing. A predictive model is essential for the accurate design of selective broadband sensors that are capable of sensing a very selective band of frequencies. Hence, in this study, we propose a novel predictive model for a cochlea-inspired broadband sensor, aiming to select the frequency band and model parameters predictively. A tapered plate geometry is considered, mimicking the real shape of the basilar membrane in the human cochlea. The predictive model is intended to be flexible enough to be employed in a wide variety of scientific domains. To that end, the model is developed in such a way that it can handle not only homogeneous but also functionally graded model parameters. Additionally, the predictive model is capable of handling various types of boundary conditions. It has been found that, using homogeneous model parameters, it is possible to sense a specific frequency band from a specific portion (B) of the model length (L). It is also possible to alter the attributes of 'B' using functionally graded model parameters, which confirms the predictive frequency selection ability of the developed model.

  19. Impact of modellers' decisions on hydrological a priori predictions

    NASA Astrophysics Data System (ADS)

    Holländer, H. M.; Bormann, H.; Blume, T.; Buytaert, W.; Chirico, G. B.; Exbrayat, J.-F.; Gustafsson, D.; Hölzel, H.; Krauße, T.; Kraft, P.; Stoll, S.; Blöschl, G.; Flühler, H.

    2014-06-01

    In practice, the catchment hydrologist is often confronted with the task of predicting discharge without having the records needed for calibration. Here, we report the discharge predictions of 10 modellers - using the model of their choice - for the man-made Chicken Creek catchment (6 ha, northeast Germany, Gerwin et al., 2009b), and we analyse how well they improved their predictions in three steps, with additional information provided prior to each following step. The modellers predicted the catchment's hydrological response in its initial phase without having access to the observed records. They used conceptually different physically based models, and their modelling experience differed largely. Hence, they encountered two problems: (i) simulating discharge for an ungauged catchment and (ii) using models that were developed for catchments which are not in a state of landscape transformation. The prediction exercise was organized in three steps: (1) for the first prediction, the modellers received a basic data set describing the catchment to a degree somewhat more complete than usually available for a priori predictions of ungauged catchments; they did not obtain information on stream flow, soil moisture, or groundwater response, and therefore had to guess the initial conditions; (2) before the second prediction, they inspected the catchment on-site and discussed their first prediction attempt; (3) for their third prediction, they were offered additional data by charging them pro forma with the costs for obtaining this additional information. Holländer et al. (2009) discussed the range of predictions obtained in step (1). Here, we detail the modellers' assumptions and decisions in accounting for the various processes. We document the prediction progress as well as the learning process resulting from the availability of added information.
For the second and third steps, the progress in prediction quality is evaluated in relation to individual modelling experience and costs of

  20. A predictive model for Dengue Hemorrhagic Fever epidemics.

    PubMed

    Halide, Halmar; Ridd, Peter

    2008-08-01

    A statistical model for predicting monthly Dengue Hemorrhagic Fever (DHF) cases in the city of Makassar is developed and tested. The model uses past and present DHF cases, climate, and meteorological observations as inputs. These inputs are selected using a stepwise regression method to predict future DHF cases. The model is tested independently and its skill assessed using two skill measures. Using the selected variables as inputs, the model is capable of predicting a moderately severe epidemic at lead times of up to six months. The most important input variable in the prediction is the present number of DHF cases, followed by the relative humidity three to four months previously. A prediction 1-6 months in advance is sufficient to initiate various activities to combat a DHF epidemic. The model is suitable for early warning and can easily become an operational tool due to its simplicity in data requirements and computational effort.
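    A greedy forward-selection routine is one minimal stand-in for the stepwise regression used to choose inputs; the data, stopping rule, and selection criterion below are simplified assumptions, not the study's procedure.

```python
def forward_stepwise(X, y, k):
    """Greedy forward selection: repeatedly add the predictor whose simple
    linear regression on the current residuals reduces the sum of squared
    errors the most, for k rounds. X is a list of rows; returns the column
    indices chosen, in order."""
    n = len(y)
    resid = list(y)
    chosen = []
    for _ in range(k):
        best = None
        for j in range(len(X[0])):
            if j in chosen:
                continue
            xj = [row[j] for row in X]
            mx = sum(xj) / n
            mr = sum(resid) / n
            sxx = sum((v - mx) ** 2 for v in xj)
            if sxx == 0:
                continue  # constant column, cannot regress on it
            b = sum((v - mx) * (r - mr) for v, r in zip(xj, resid)) / sxx
            a = mr - b * mx
            sse = sum((r - (a + b * v)) ** 2 for v, r in zip(xj, resid))
            if best is None or sse < best[0]:
                best = (sse, j, a, b)
        sse, j, a, b = best
        chosen.append(j)
        resid = [r - (a + b * row[j]) for r, row in zip(resid, X)]
    return chosen

# Column 1 explains y exactly, so it is selected first.
X = [[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]]
y = [6.0, 3.0, 12.0, 9.0]
selected = forward_stepwise(X, y, 1)
```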

  1. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models 1: repeating earthquakes

    USGS Publications Warehouse

    Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki

    2012-01-01

    The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments, we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events are predicted quite well by fixed slip and fixed recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.

  2. Testing the reward prediction error hypothesis with an axiomatic model.

    PubMed

    Rutledge, Robb B; Dean, Mark; Caplin, Andrew; Glimcher, Paul W

    2010-10-01

    Neuroimaging studies typically identify neural activity correlated with the predictions of highly parameterized models, like the many reward prediction error (RPE) models used to study reinforcement learning. Identified brain areas might encode RPEs or, alternatively, only have activity correlated with RPE model predictions. Here, we use an alternate axiomatic approach rooted in economic theory to formally test the entire class of RPE models on neural data. We show that measurements of human neural activity from the striatum, medial prefrontal cortex, amygdala, and posterior cingulate cortex satisfy necessary and sufficient conditions for the entire class of RPE models. However, activity measured from the anterior insula falsifies the axiomatic model, and therefore no RPE model can account for measured activity. Further analysis suggests the anterior insula might instead encode something related to the salience of an outcome. As cognitive neuroscience matures and models proliferate, formal approaches of this kind that assess entire model classes rather than specific model exemplars may take on increased significance.

  3. Prediction of Intracellular Localization of Fluorescent Dyes Using QSAR Models.

    PubMed

    Uchinomiya, Shohei; Horobin, Richard W; Alvarado-Martínez, Enrique; Peña-Cabrera, Eduardo; Chang, Young-Tae

    2016-01-01

    Control of fluorescent dye localization in live cells is crucial for fluorescence imaging. Here, we describe quantitative structure-activity relationship (QSAR) models for predicting the intracellular localization of fluorescent dyes. For generating the QSAR models, electric charge (Z) calculated from pKa, conjugated bond number (CBN), the largest conjugated fragment (LCF), molecular weight (MW), and log P were used as parameters. We identified the intracellular localization of 119 BODIPY dyes in live NIH3T3 cells, and assessed the accuracy of our models by comparing their predictions with the observed dye localizations. As predicted by the models, no BODIPY dyes localized in nuclei or plasma membranes. The accuracy of the model for localization in fat droplets was 92%, with the models for cytosol and lysosomes showing poorer agreement with observed dye localization, albeit well above chance levels. Overall, therefore, the utility of QSAR models for predicting dye localization in live cells was clearly demonstrated. PMID:27055752

  4. Models Predicting Success of Infertility Treatment: A Systematic Review

    PubMed Central

    Zarinara, Alireza; Zeraati, Hojjat; Kamali, Koorosh; Mohammad, Kazem; Shahnazari, Parisa; Akhondi, Mohammad Mehdi

    2016-01-01

    Background: Infertile couples are faced with problems that affect their marital life. Infertility treatment is expensive and time consuming, and occasionally is simply not possible. Prediction models for infertility treatment have been proposed, and prediction of treatment success is a new field in infertility treatment. Because prediction of treatment success is a new need for infertile couples, this paper reviewed previous studies to capture a general picture of the applicability of the models. Methods: This study was conducted as a systematic review at Avicenna Research Institute in 2015. Six databases were searched based on WHO definitions and MeSH keywords. Papers about prediction models in infertility were evaluated. Results: Eighty-one papers were eligible for the study. Papers covered years after 1986, and studies were designed retrospectively and prospectively. IVF prediction models accounted for the largest share of the papers. The most common predictors were age, duration of infertility, and ovarian and tubal problems. Conclusion: A prediction model can be clinically applied if it can be statistically evaluated and has good validation for treatment success. To achieve better results, the physician's and the couple's estimates of the treatment success rate should be based on history, examination, and clinical tests. Models must be checked for theoretical approach and appropriate validation. The advantages of applying prediction models are a decrease in cost and time, avoidance of painful treatment for patients, assessment of the treatment approach for physicians, and support for decision making by health managers. The selection of an approach for designing and using these models is inevitable. PMID:27141461

  5. Genomic prediction of complex human traits: relatedness, trait architecture and predictive meta-models.

    PubMed

    Spiliopoulou, Athina; Nagy, Reka; Bermingham, Mairead L; Huffman, Jennifer E; Hayward, Caroline; Vitart, Veronique; Rudan, Igor; Campbell, Harry; Wright, Alan F; Wilson, James F; Pong-Wong, Ricardo; Agakov, Felix; Navarro, Pau; Haley, Chris S

    2015-07-15

    We explore the prediction of individuals' phenotypes for complex traits using genomic data. We compare several widely used prediction models, including Ridge Regression, LASSO and Elastic Nets estimated from cohort data, and polygenic risk scores constructed using published summary statistics from genome-wide association meta-analyses (GWAMA). We evaluate the interplay between relatedness, trait architecture and optimal marker density, by predicting height, body mass index (BMI) and high-density lipoprotein level (HDL) in two data cohorts, originating from Croatia and Scotland. We empirically demonstrate that dense models are better when all genetic effects are small (height and BMI) and target individuals are related to the training samples, while sparse models predict better in unrelated individuals and when some effects have moderate size (HDL). For HDL sparse models achieved good across-cohort prediction, performing similarly to the GWAMA risk score and to models trained within the same cohort, which indicates that, for predicting traits with moderately sized effects, large sample sizes and familial structure become less important, though still potentially useful. Finally, we propose a novel ensemble of whole-genome predictors with GWAMA risk scores and demonstrate that the resulting meta-model achieves higher prediction accuracy than either model on its own. We conclude that although current genomic predictors are not accurate enough for diagnostic purposes, performance can be improved without requiring access to large-scale individual-level data. Our methodologically simple meta-model is a means of performing predictive meta-analysis for optimizing genomic predictions and can be easily extended to incorporate multiple population-level summary statistics or other domain knowledge.

  6. The regional prediction model of PM10 concentrations for Turkey

    NASA Astrophysics Data System (ADS)

    Güler, Nevin; Güneri İşçi, Öznur

    2016-11-01

    This study aims to develop a regional model for weekly PM10 concentrations measured at air pollution monitoring stations in Turkey. There are seven geographical regions in Turkey and numerous monitoring stations in each region. Fitting a model conventionally for each monitoring station requires a lot of labor and time, and it may lead to degradation in the quality of prediction when the number of measurements obtained from a monitoring station is small. Besides, prediction models obtained in this way only reflect the air pollutant behavior of a small area. This study uses the Fuzzy C-Auto Regressive Model (FCARM) in order to find a prediction model that reflects the regional behavior of weekly PM10 concentrations. The superiority of FCARM is its ability to simultaneously consider PM10 concentrations measured at monitoring stations in the specified region. Besides, it also works even if the number of measurements obtained from the monitoring stations is different or small. In order to evaluate the performance of FCARM, FCARM is executed for all regions in Turkey and the prediction results are compared to statistical Autoregressive (AR) models fitted for each station separately. According to the Mean Absolute Percentage Error (MAPE) criterion, it is observed that FCARM provides better predictions with a smaller number of models.
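    The per-station AR baseline and the MAPE criterion can be sketched as follows; an AR(1) fit is a simplification (the study's AR models may use higher orders), and the data are illustrative.

```python
def fit_ar1(series):
    """Least-squares AR(1) fit on a mean-centred series:
    y_t - m = phi * (y_{t-1} - m). Returns (mean, phi)."""
    m = sum(series) / len(series)
    z = [v - m for v in series]
    num = sum(z[t - 1] * z[t] for t in range(1, len(z)))
    den = sum(z[t - 1] ** 2 for t in range(1, len(z)))
    return m, num / den

def mape(actual, pred):
    """Mean Absolute Percentage Error in percent, the comparison
    criterion used in the study (assumes no zero actual values)."""
    return 100 * sum(abs((a - p) / a) for a, p in zip(actual, pred)) / len(actual)

# A perfectly alternating series has AR(1) coefficient -1.
m, phi = fit_ar1([1.0, 2.0, 1.0, 2.0, 1.0, 2.0])
```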

  7. Gaussian mixture models as flux prediction method for central receivers

    NASA Astrophysics Data System (ADS)

    Grobler, Annemarie; Gauché, Paul; Smit, Willie

    2016-05-01

    Flux prediction methods are crucial to the design and operation of central receiver systems. Current methods such as the circular and elliptical (bivariate) Gaussian prediction methods are often used in field layout design and aiming strategies. For experimental or small central receiver systems, the flux profile of a single heliostat often deviates significantly from the circular and elliptical Gaussian models. Therefore a novel method of flux prediction was developed by incorporating the fitting of Gaussian mixture models onto flux profiles produced by flux measurement or ray tracing. A method was also developed to predict the Gaussian mixture model parameters of a single heliostat for a given time using image processing. Recording the predicted parameters in a database ensures that more accurate predictions are made in a shorter time frame.
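    A sketch of evaluating a circular Gaussian mixture flux map; the component parameterization and values are illustrative, and fitting them (e.g. by expectation-maximization on a measured or ray-traced flux profile) is the step the paper automates.

```python
import math

def gaussian_mixture_flux(x, y, components):
    """Flux at receiver point (x, y) as a weighted sum of circular Gaussian
    spots. Each component is (weight, cx, cy, sigma); each Gaussian is
    normalized so its weight equals its integrated flux contribution."""
    flux = 0.0
    for w, cx, cy, s in components:
        r2 = (x - cx) ** 2 + (y - cy) ** 2
        flux += w / (2 * math.pi * s ** 2) * math.exp(-r2 / (2 * s ** 2))
    return flux

# Two-spot mixture: a main spot at the origin plus a weaker offset spot.
comps = [(1.0, 0.0, 0.0, 0.5), (0.5, 1.0, 0.0, 0.3)]
center = gaussian_mixture_flux(0.0, 0.0, comps)
edge = gaussian_mixture_flux(5.0, 5.0, comps)
```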

  8. Comparison of Predictive Models for the Early Diagnosis of Diabetes

    PubMed Central

    Jahani, Meysam

    2016-01-01

    Objectives: This study develops neural network models to improve the prediction of diabetes using clinical and lifestyle characteristics. Prediction models were developed using a combination of approaches and concepts. Methods: We used memetic algorithms to update weights and to improve the prediction accuracy of the models. In the first step, the optimum values for neural network parameters such as the momentum rate, transfer function, and error function were obtained through trial and error and based on the results of previous studies. In the second step, the optimum parameters were applied to memetic algorithms in order to improve the accuracy of prediction. This preliminary analysis showed that the accuracy of the neural networks is 88%. In the third step, the accuracy of the neural network models was improved using a memetic algorithm, and the resulting model was compared with a logistic regression model using a confusion matrix and receiver operating characteristic (ROC) curve. Results: The memetic algorithm improved the accuracy from 88.0% to 93.2%. We also found that the memetic algorithm had a higher accuracy than the model from the genetic algorithm and a regression model. Among the models, the regression model has the least accuracy. For the memetic algorithm model, the sensitivity, specificity, positive predictive value, negative predictive value, and area under the ROC curve are 96.2, 95.3, 93.8, 92.4, and 0.958, respectively. Conclusions: The results of this study provide a basis for designing a Decision Support System for risk management and planning of care for individuals at risk of diabetes. PMID:27200219
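    The reported sensitivity, specificity, and predictive values follow from a confusion matrix in the standard way; the counts below are hypothetical, not the study's data.

```python
def confusion_metrics(tp, fp, tn, fn):
    """Standard classifier metrics from confusion-matrix counts
    (true/false positives and negatives)."""
    sensitivity = tp / (tp + fn)   # recall on the positive class
    specificity = tn / (tn + fp)   # recall on the negative class
    ppv = tp / (tp + fp)           # positive predictive value (precision)
    npv = tn / (tn + fn)           # negative predictive value
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, ppv, npv, accuracy

sens, spec, ppv, npv, acc = confusion_metrics(50, 10, 30, 10)
```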

  9. A model for prediction of STOVL ejector dynamics

    NASA Technical Reports Server (NTRS)

    Drummond, Colin K.

    1989-01-01

    A semi-empirical control-volume approach to ejector modeling for transient performance prediction is presented. This new approach is motivated by the need for a predictive real-time ejector sub-system simulation for Short Take-Off and Vertical Landing (STOVL) integrated flight and propulsion controls design applications. Emphasis is placed on discussion of the approximate characterization of the mixing process central to thrust-augmenting ejector operation. The proposed ejector model suggests transient flow predictions are possible with a model based on steady-flow data. A practical test case is presented to illustrate model calibration.

  10. Detailed source term estimation of atmospheric release during the Fukushima Dai-ichi nuclear power plant accident by coupling atmospheric and oceanic dispersion models

    NASA Astrophysics Data System (ADS)

    Katata, Genki; Chino, Masamichi; Terada, Hiroaki; Kobayashi, Takuya; Ota, Masakazu; Nagai, Haruyasu; Kajino, Mizuo

    2014-05-01

    Temporal variations in the release amounts of radionuclides during the Fukushima Dai-ichi Nuclear Power Plant (FNPP1) accident and their dispersion process are essential to evaluate the environmental impacts and resultant radiological doses to the public. Here, we estimated a detailed time trend of atmospheric releases during the accident by combining environmental monitoring data with coupled atmospheric and oceanic dispersion simulations by WSPEEDI-II (Worldwide version of System for Prediction of Environmental Emergency Dose Information) and SEA-GEARN, developed by the authors. New schemes for wet, dry, and fog depositions of radioactive iodine gas (I2 and CH3I) and other particles (I-131, Te-132, Cs-137, and Cs-134) were incorporated into WSPEEDI-II. The deposition calculated by WSPEEDI-II was used as input data for the ocean dispersion calculations by SEA-GEARN. A reverse estimation method based on simulations by both models assuming a unit release rate (1 Bq h-1) was adopted to estimate the source term at the FNPP1 using air dose rates and air and sea surface concentrations. The results suggested that the major releases of radionuclides from the FNPP1 occurred in the following periods during March 2011: the afternoon of the 12th, when the venting and hydrogen explosion occurred at Unit 1; the morning of the 13th, after the venting event at Unit 3; midnight on the 14th, when several openings of the SRV (steam relief valve) were conducted at Unit 2; the morning and night of the 15th; and the morning of the 16th. The modified WSPEEDI-II using the newly estimated source term well reproduced local and regional patterns of air dose rate and surface deposition of I-131 and Cs-137 obtained by airborne observations. Our dispersion simulations also revealed that the highest radioactive contamination areas around FNPP1 were created from the 15th to 16th of March by complicated interactions among rainfall (wet deposition), plume movements, and phase properties (gas or particle) of I-131 and release rates

  11. The predictive accuracy of intertemporal-choice models.

    PubMed

    Arfer, Kodi B; Luhmann, Christian C

    2015-05-01

    How do people choose between a smaller reward available sooner and a larger reward available later? Past research has evaluated models of intertemporal choice by measuring goodness of fit or identifying which decision-making anomalies they can accommodate. An alternative criterion for model quality, which is partly antithetical to these standard criteria, is predictive accuracy. We used cross-validation to examine how well 10 models of intertemporal choice could predict behaviour in a 100-trial binary-decision task. Many models achieved the apparent ceiling of 85% accuracy, even with smaller training sets. When noise was added to the training set, however, a simple logistic-regression model we call the difference model performed particularly well. In many situations, between-model differences in predictive accuracy may be small, in contrast to the long-standing controversy over model choice in intertemporal-choice research, but the simplicity and robustness of the difference model recommend it for future use.
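A minimal sketch of a logistic-regression choice model in the spirit of the difference model described above: the probability of taking the later reward is a logistic function of a weighted difference in amounts and the delay. The feature set, weights, and synthetic 100-trial task below are illustrative assumptions, not the authors' specification.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic 100-trial task: (smaller reward now) vs (larger reward after delay).
# The simulated chooser weighs the amount difference against the delay
# (parameters invented for illustration).
trials, choices = [], []
for _ in range(100):
    small = random.uniform(1, 10)
    large = small + random.uniform(1, 10)
    delay = random.uniform(1, 30)
    p_later = sigmoid(0.8 * (large - small) - 0.15 * delay)
    trials.append((large - small, delay))
    choices.append(1 if random.random() < p_later else 0)

# Fit the two weights by batch gradient ascent on the Bernoulli log-likelihood.
w = [0.0, 0.0]
for _ in range(2000):
    grad = [0.0, 0.0]
    for (dx, delay), y in zip(trials, choices):
        err = y - sigmoid(w[0] * dx + w[1] * delay)
        grad[0] += err * dx
        grad[1] += err * delay
    w = [w[0] + 0.01 * grad[0] / 100, w[1] + 0.01 * grad[1] / 100]

# In-sample predictive accuracy of the fitted model
acc = sum((sigmoid(w[0] * dx + w[1] * d) > 0.5) == (y == 1)
          for (dx, d), y in zip(trials, choices)) / 100
```

With this generative process the fitted weights recover the expected signs (positive on the amount difference, negative on delay), and accuracy is well above chance; a cross-validated evaluation would instead fit on a training split and score held-out trials.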

  12. LHC diphoton Higgs signal predicted by little Higgs models

    SciTech Connect

    Wang Lei; Yang Jinmin

    2011-10-01

    Little Higgs theory naturally predicts a light Higgs boson whose most important discovery channel at the LHC is the diphoton signal pp → h → γγ. In this work, we perform a comparative study of this signal in some typical little Higgs models, namely, the littlest Higgs model, two littlest Higgs models with T-parity (named LHT-I and LHT-II), and the simplest little Higgs model. We find that, compared with the standard model prediction, the diphoton signal rate is always suppressed, and the extent of the suppression can be quite different for different models. The suppression is mild (≲10%) in the littlest Higgs model but can be quite severe (≈90%) in the other three models. This means that discovering the light Higgs boson predicted by little Higgs theory through the diphoton channel at the LHC will be more difficult than discovering the standard model Higgs boson.

  13. Light-Weight Radioisotope Heater Unit Safety Analysis Report (LWRHU-SAR). Volume II. Accident model document

    SciTech Connect

    Johnson, E.W.

    1985-10-01

    The purposes of this volume, the Accident Model Document (AMD), are to: identify all malfunctions, both singular and multiple, which can occur during the complete mission profile and could lead to release of the radioisotopic material outside the clad; provide estimates of the occurrence probabilities associated with these various accidents; evaluate the response of the LWRHU (or its components) to the resultant accident environments; and relate the potential event history to test data or analysis to determine the potential interaction of the released radionuclides with the biosphere.

  14. Modeling of BWR core meltdown accidents - for application in the MELRPI. MOD2 computer code

    SciTech Connect

    Koh, B R; Kim, S H; Taleyarkhan, R P; Podowski, M Z; Lahey, Jr, R T

    1985-04-01

    This report summarizes improvements and modifications made in the MELRPI computer code. A major difference between this new, updated version of the code, called MELRPI.MOD2, and the one reported previously, concerns the inclusion of a model for the BWR emergency core cooling systems (ECCS). This model and its computer implementation, the ECCRPI subroutine, account for various emergency injection modes, for both intact and rubblized geometries. Other changes to MELRPI deal with an improved model for canister wall oxidation, rubble bed modeling, and numerical integration of system equations. A complete documentation of the entire MELRPI.MOD2 code is also given, including an input guide, list of subroutines, sample input/output and program listing.

  15. Predicting Error Bars for QSAR Models

    SciTech Connect

    Schroeter, Timon; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Mueller, Klaus-Robert

    2007-09-18

    Unfavorable physicochemical properties often cause drug failures. It is therefore important to take lipophilicity and water solubility into account early on in lead discovery. This study presents log D7 models built using Gaussian Process regression, Support Vector Machines, decision trees and ridge regression algorithms based on 14556 drug discovery compounds of Bayer Schering Pharma. A blind test was conducted using 7013 new measurements from the last months. We also present independent evaluations using public data. Apart from accuracy, we discuss the quality of error bars that can be computed by Gaussian Process models, and ensemble and distance based techniques for the other modelling approaches.

  16. Predicting Error Bars for QSAR Models

    NASA Astrophysics Data System (ADS)

    Schroeter, Timon; Schwaighofer, Anton; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert

    2007-09-01

    Unfavorable physicochemical properties often cause drug failures. It is therefore important to take lipophilicity and water solubility into account early on in lead discovery. This study presents log D7 models built using Gaussian Process regression, Support Vector Machines, decision trees and ridge regression algorithms based on 14556 drug discovery compounds of Bayer Schering Pharma. A blind test was conducted using 7013 new measurements from the last months. We also present independent evaluations using public data. Apart from accuracy, we discuss the quality of error bars that can be computed by Gaussian Process models, and ensemble and distance based techniques for the other modelling approaches.

  17. Aggregate driver model to enable predictable behaviour

    NASA Astrophysics Data System (ADS)

    Chowdhury, A.; Chakravarty, T.; Banerjee, T.; Balamuralidhar, P.

    2015-09-01

    The categorization of driving styles, particularly in terms of aggressiveness and skill, is an emerging area of interest under the broader theme of intelligent transportation. There are two possible discriminatory techniques that can be applied for such categorization: a micro-scale (event-based) model and a macro-scale (aggregate) model. It is believed that an aggregate model will reveal many interesting aspects of human-machine interaction; for example, we may be able to understand the propensities of individuals to carry out a given task over longer periods of time. A useful driver model may include the adaptive capability of the human driver, aggregated as the individual propensity to control speed/acceleration. Towards that objective, we carried out experiments by deploying a smartphone-based data-collection application to a group of drivers. Data were primarily collected from GPS measurements, including second-by-second position and speed, for a number of trips over a two-month period. By analysing the data set, aggregate models for individual drivers were created and their natural aggressiveness was deduced. In this paper, we present the initial results for 12 drivers. It is shown that the higher-order moments of the acceleration profile are important parameters and identifiers of journey quality. It is also observed that the kurtosis of the acceleration profile carries major information about driving style. These observations lead to two different ranking systems based on acceleration data. Such driving behaviour models can be integrated with vehicle and road models and used to generate behavioural models for real traffic scenarios.
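The kurtosis-based indicator described above can be sketched as follows: derive per-second accelerations from GPS speeds, then compare excess kurtosis for a smooth and an abrupt profile. The sample speed series are made up for illustration.

```python
import math
import statistics

def excess_kurtosis(xs):
    """E[(x-mu)^4]/sigma^4 - 3 (zero for a Gaussian acceleration profile)."""
    mu = statistics.fmean(xs)
    var = statistics.pvariance(xs)
    m4 = sum((x - mu) ** 4 for x in xs) / len(xs)
    return m4 / var ** 2 - 3

# Speeds sampled once per second (m/s); acceleration = successive difference.
smooth_speed = [10 + math.sin(t / 5) for t in range(60)]            # gentle driver
jerky_speed = [10 + (3 if t % 10 == 0 else 0) for t in range(60)]   # abrupt bursts

smooth_acc = [b - a for a, b in zip(smooth_speed, smooth_speed[1:])]
jerky_acc = [b - a for a, b in zip(jerky_speed, jerky_speed[1:])]

# Mostly-quiet driving punctuated by hard accelerations is heavy-tailed,
# so its kurtosis exceeds that of the smooth profile.
print(excess_kurtosis(jerky_acc) > excess_kurtosis(smooth_acc))
```

A per-driver ranking of the kind the abstract mentions would simply sort drivers by this statistic computed over all their trips.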

  18. Validating predictions from climate envelope models

    USGS Publications Warehouse

    Watling, J.; Bucklin, D.; Speroterra, C.; Brandt, L.; Cabal, C.; Romañach, Stephanie S.; Mazzotti, Frank J.

    2013-01-01

    Climate envelope models are a potentially important conservation tool, but their ability to accurately forecast species’ distributional shifts using independent survey data has not been fully evaluated. We created climate envelope models for 12 species of North American breeding birds previously shown to have experienced poleward range shifts. For each species, we evaluated three different approaches to climate envelope modeling that differed in the way they treated climate-induced range expansion and contraction, using random forests and maximum entropy modeling algorithms. All models were calibrated using occurrence data from 1967–1971 (t1) and evaluated using occurrence data from 1998–2002 (t2). Model sensitivity (the ability to correctly classify species presences) was greater using the maximum entropy algorithm than the random forest algorithm. Although sensitivity did not differ significantly among approaches, for many species, sensitivity was maximized using a hybrid approach that assumed range expansion, but not contraction, in t2. Species for which the hybrid approach resulted in the greatest improvement in sensitivity have been reported from more land cover types than species for which there was little difference in sensitivity between hybrid and dynamic approaches, suggesting that habitat generalists may be buffered somewhat against climate-induced range contractions. Specificity (the ability to correctly classify species absences) was maximized using the random forest algorithm and was lowest using the hybrid approach. Overall, our results suggest cautious optimism for the use of climate envelope models to forecast range shifts, but also underscore the importance of considering non-climate drivers of species range limits. The use of alternative climate envelope models that make different assumptions about range expansion and contraction is a new and potentially useful way to help inform our understanding of climate change effects on species.
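The sensitivity and specificity used to evaluate these models reduce to confusion-matrix counts over survey sites; a minimal sketch with made-up presence/absence data:

```python
def sensitivity_specificity(observed, predicted):
    """observed/predicted are 1 (species present) / 0 (absent) per site."""
    tp = sum(o == 1 and p == 1 for o, p in zip(observed, predicted))
    tn = sum(o == 0 and p == 0 for o, p in zip(observed, predicted))
    fn = sum(o == 1 and p == 0 for o, p in zip(observed, predicted))
    fp = sum(o == 0 and p == 1 for o, p in zip(observed, predicted))
    return tp / (tp + fn), tn / (tn + fp)

# Invented t2 survey outcomes vs model predictions for eight sites
obs  = [1, 1, 1, 1, 0, 0, 0, 0]
pred = [1, 1, 1, 0, 0, 0, 1, 1]
sens, spec = sensitivity_specificity(obs, pred)
print(sens, spec)  # 0.75 0.5
```

The trade-off reported in the abstract — a hybrid expansion-only approach raising sensitivity while lowering specificity — corresponds to predicting presence at more sites, which moves counts from fn to tp but also from tn to fp.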

  19. Validating Predictions from Climate Envelope Models

    PubMed Central

    Watling, James I.; Bucklin, David N.; Speroterra, Carolina; Brandt, Laura A.; Mazzotti, Frank J.; Romañach, Stephanie S.

    2013-01-01

    Climate envelope models are a potentially important conservation tool, but their ability to accurately forecast species’ distributional shifts using independent survey data has not been fully evaluated. We created climate envelope models for 12 species of North American breeding birds previously shown to have experienced poleward range shifts. For each species, we evaluated three different approaches to climate envelope modeling that differed in the way they treated climate-induced range expansion and contraction, using random forests and maximum entropy modeling algorithms. All models were calibrated using occurrence data from 1967–1971 (t1) and evaluated using occurrence data from 1998–2002 (t2). Model sensitivity (the ability to correctly classify species presences) was greater using the maximum entropy algorithm than the random forest algorithm. Although sensitivity did not differ significantly among approaches, for many species, sensitivity was maximized using a hybrid approach that assumed range expansion, but not contraction, in t2. Species for which the hybrid approach resulted in the greatest improvement in sensitivity have been reported from more land cover types than species for which there was little difference in sensitivity between hybrid and dynamic approaches, suggesting that habitat generalists may be buffered somewhat against climate-induced range contractions. Specificity (the ability to correctly classify species absences) was maximized using the random forest algorithm and was lowest using the hybrid approach. Overall, our results suggest cautious optimism for the use of climate envelope models to forecast range shifts, but also underscore the importance of considering non-climate drivers of species range limits. The use of alternative climate envelope models that make different assumptions about range expansion and contraction is a new and potentially useful way to help inform our understanding of climate change effects on species. PMID

  20. Prediction Model for Gastric Cancer Incidence in Korean Population

    PubMed Central

    Kim, Sohee; Shin, Aesun; Yang, Hye-Ryung; Park, Junghyun; Choi, Il Ju; Kim, Young-Woo; Kim, Jeongseon; Nam, Byung-Ho

    2015-01-01

    Background Predicting high-risk groups for gastric cancer and motivating these groups to receive regular checkups is required for the early detection of gastric cancer. The aim of this study was to develop a prediction model for gastric cancer incidence based on a large population-based cohort in Korea. Method Based on the National Health Insurance Corporation data, we analyzed 10 major risk factors for gastric cancer. The Cox proportional hazards model was used to develop gender-specific prediction models for gastric cancer development, and the performance of the developed model in terms of discrimination and calibration was validated using an independent cohort. Discrimination ability was evaluated using Harrell’s C-statistics, and calibration was evaluated using a calibration plot and slope. Results During a median of 11.4 years of follow-up, 19,465 (1.4%) and 5,579 (0.7%) newly developed gastric cancer cases were observed among 1,372,424 men and 804,077 women, respectively. The prediction models included age, BMI, family history, meal regularity, salt preference, alcohol consumption, smoking and physical activity for men, and age, BMI, family history, salt preference, alcohol consumption, and smoking for women. This prediction model showed good accuracy and predictability in both the development and validation cohorts (C-statistics: 0.764 for men, 0.706 for women). Conclusions In this study, a prediction model for gastric cancer incidence was developed that displayed a good performance.
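Harrell's C-statistic used above for discrimination can be sketched as the fraction of comparable subject pairs, under right-censoring, in which the model assigns the higher risk to the subject who experienced the event earlier. This is a simplified version (ties in risk score count one half); the follow-up data below are invented.

```python
def harrells_c(times, events, risks):
    """times: follow-up time; events: 1 if the event occurred, 0 if censored;
    risks: model risk score (higher = earlier predicted event)."""
    concordant = comparable = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # a pair is comparable only if subject i had an observed,
            # earlier event than subject j's follow-up time
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

times  = [2.0, 4.0, 5.0, 8.0, 10.0]   # years of follow-up
events = [1,   1,   0,   1,   0]      # 0 = censored
risks  = [0.9, 0.5, 0.7, 0.6, 0.2]    # e.g. from a Cox linear predictor
print(harrells_c(times, events, risks))
```

A value of 0.5 means no discrimination and 1.0 perfect ranking; the study's 0.764 (men) and 0.706 (women) sit in the usefully-discriminative range.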

  1. Improved analytical model for residual stress prediction in orthogonal cutting

    NASA Astrophysics Data System (ADS)

    Qi, Zhaoxu; Li, Bin; Xiong, Liangshan

    2014-09-01

    The analytical model of residual stress in orthogonal cutting proposed by Jiann is an important tool for residual stress prediction in orthogonal cutting. In application of the model, a problem of low precision of the surface residual stress prediction is found. By theoretical analysis, several shortages of Jiann's model are picked out, including: inappropriate boundary conditions, unreasonable calculation method of thermal stress, ignorance of stress constraint and cyclic loading algorithm. These shortages may directly lead to the low precision of the surface residual stress prediction. To eliminate these shortages and make the prediction more accurate, an improved model is proposed. In this model, a new contact boundary condition between tool and workpiece is used to make it in accord with the real cutting process; an improved calculation method of thermal stress is adopted; a stress constraint is added according to the volumeconstancy of plastic deformation; and the accumulative effect of the stresses during cyclic loading is considered. At last, an experiment for measuring residual stress in cutting AISI 1045 steel is conducted. Also, Jiann's model and the improved model are simulated under the same conditions with cutting experiment. The comparisons show that the surface residual stresses predicted by the improved model is closer to the experimental results than the results predicted by Jiann's model.

  2. Thermal barrier coating life prediction model development

    NASA Technical Reports Server (NTRS)

    Strangman, T. E.; Neumann, J. F.; Liu, A.

    1986-01-01

    Thermal barrier coatings (TBCs) for turbine airfoils in high-performance engines represent an advanced materials technology with both performance and durability benefits. The foremost TBC benefit is the reduction of heat transferred into air-cooled components, which yields performance and durability benefits. This program focuses on predicting the lives of two types of strain-tolerant and oxidation-resistant TBC systems that are produced by commercial coating suppliers to the gas turbine industry. The plasma-sprayed TBC system, composed of a low-pressure plasma-spray (LPPS) or an argon shrouded plasma-spray (ASPS) applied oxidation resistant NiCrAlY (or CoNiCrAlY) bond coating and an air-plasma-sprayed yttria (8 percent) partially stabilized zirconia insulative layer, is applied by Chromalloy, Klock, and Union Carbide. The second type of TBC is applied by the electron beam-physical vapor deposition (EB-PVD) process by Temescal.

  3. Emulation and Sobol' sensitivity analysis of an atmospheric dispersion model applied to the Fukushima nuclear accident

    NASA Astrophysics Data System (ADS)

    Girard, Sylvain; Mallet, Vivien; Korsakissok, Irène; Mathieu, Anne

    2016-04-01

    Simulations of the atmospheric dispersion of radionuclides involve large uncertainties originating from the limited knowledge of meteorological input data, composition, amount and timing of emissions, and some model parameters. The estimation of these uncertainties is an essential complement to modeling for decision making in case of an accidental release. We have studied the relative influence of a set of uncertain inputs on several outputs from the Eulerian model Polyphemus/Polair3D on the Fukushima case. We chose to use the variance-based sensitivity analysis method of Sobol'. This method requires a large number of model evaluations which was not achievable directly due to the high computational cost of Polyphemus/Polair3D. To circumvent this issue, we built a mathematical approximation of the model using Gaussian process emulation. We observed that aggregated outputs are mainly driven by the amount of emitted radionuclides, while local outputs are mostly sensitive to wind perturbations. The release height is notably influential, but only in the vicinity of the source. Finally, averaging either spatially or temporally tends to cancel out interactions between uncertain inputs.
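The Sobol' first-order indices at the heart of this analysis can be sketched with a pick-and-freeze Monte Carlo estimator. The stand-in model below is a cheap linear function with known analytic indices (0.9 and 0.1); the actual study applies the same idea to a Gaussian-process emulator of Polyphemus/Polair3D.

```python
import random

random.seed(1)

def model(x1, x2):
    # stand-in for the dispersion model: input 1 (e.g. emitted amount) dominates
    return 3.0 * x1 + 1.0 * x2

N = 100_000
A = [(random.random(), random.random()) for _ in range(N)]
B = [(random.random(), random.random()) for _ in range(N)]

yA = [model(x1, x2) for x1, x2 in A]
mean = sum(yA) / N
var = sum((y - mean) ** 2 for y in yA) / N

def first_order(i):
    # pick-and-freeze: re-evaluate sample B with coordinate i taken from A;
    # the covariance of the two output sets isolates input i's main effect.
    yAB = [model(*[a[k] if k == i else b[k] for k in range(2)])
           for a, b in zip(A, B)]
    cov = sum(ya * y for ya, y in zip(yA, yAB)) / N - mean * (sum(yAB) / N)
    return cov / var

s1, s2 = first_order(0), first_order(1)
print(round(s1, 1), round(s2, 1))  # analytic values are 0.9 and 0.1
```

The cost is N(d+1) model runs for d inputs, which is exactly why the study substitutes an emulator for the expensive Eulerian model before estimating the indices.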

  4. Predictive error analysis for a water resource management model

    NASA Astrophysics Data System (ADS)

    Gallagher, Mark; Doherty, John

    2007-02-01

    In calibrating a model, a set of parameters is assigned to the model which will be employed for the making of all future predictions. If these parameters are estimated through solution of an inverse problem, formulated to be properly posed through either pre-calibration or mathematical regularisation, then solution of this inverse problem will, of necessity, lead to a simplified parameter set that omits the details of reality, while still fitting historical data acceptably well. Furthermore, estimates of parameters so obtained will be contaminated by measurement noise. Both of these phenomena will lead to errors in predictions made by the model, with the potential for error increasing with the hydraulic property detail on which the prediction depends. Integrity of model usage demands that model predictions be accompanied by some estimate of the possible errors associated with them. The present paper applies theory developed in a previous work to the analysis of predictive error associated with a real world, water resource management model. The analysis offers many challenges, including the fact that the model is a complex one that was partly calibrated by hand. Nevertheless, it is typical of models which are commonly employed as the basis for the making of important decisions, and for which such an analysis must be made. The potential errors associated with point-based and averaged water level and creek inflow predictions are examined, together with the dependence of these errors on the amount of averaging involved. Error variances associated with predictions made by the existing model are compared with "optimized error variances" that could have been obtained had calibration been undertaken in such a way as to minimize predictive error variance. The contributions by different parameter types to the overall error variance of selected predictions are also examined.

  5. Evaluation of wave runup predictions from numerical and parametric models

    USGS Publications Warehouse

    Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.

    2014-01-01

    Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
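The parameterized runup model referred to here is presumably of the form of the widely cited Stockdon et al. (2006) formulation, which combines a setup term and incident-plus-infragravity swash. The sketch below reproduces that published formula as a generic illustration; it is not this paper's fitted or modified version.

```python
import math

def runup_2pct(H0, T, beta, g=9.81):
    """2% exceedance runup (m) from deep-water wave height H0 (m),
    peak period T (s), and foreshore beach slope beta (dimensionless),
    following the Stockdon et al. (2006) parameterization."""
    L0 = g * T ** 2 / (2 * math.pi)                 # deep-water wavelength
    setup = 0.35 * beta * math.sqrt(H0 * L0)        # wave-induced setup
    # swash: incident (slope-dependent) plus infragravity contributions
    swash = math.sqrt(H0 * L0 * (0.563 * beta ** 2 + 0.004))
    return 1.1 * (setup + swash / 2)

# Moderate storm: 2 m offshore waves, 10 s period, beach slope 0.1
print(round(runup_2pct(2.0, 10.0, 0.1), 2))
```

The abstract's finding that the infragravity swash term is well predicted while extreme-condition setup may need modification maps onto the separate setup and swash terms above.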

  6. Predictive model for ozone concentration during MIG welding

    SciTech Connect

    Blehm, K.D.

    1982-01-01

    Particular metal-inert gas (MIG) welding operations have been shown to produce ozone concentrations from 0.2 to 14.5 parts per million (ppm) near the arc, in a region that may include the welder's breathing zone. Exposure to such concentrations may induce acute or chronic deleterious health effects in the exposed population. Previously reported differences and difficulties with the measurement of ozone have produced widely divergent health hazard assessments for ozone exposure during similar welding operations. Therefore, it was desirable to attempt to develop a predictive model for ozone exposure that could be used independently of, or concomitantly with, other measurement and analytical methods. A select MIG welding operation (mild steel) was simulated in field trials in which all parameters that could affect ozone concentrations were held constant except welding amperage and ventilation rate. Techniques of multiple regression analysis were employed to develop a predictive model for ozone concentration based upon factors of amperage and ventilation. The model developed was then evaluated to determine whether reasonable, accurate predictions of ozone concentrations could be made. It was determined that the predictive model did not yield accurate predictions of ozone concentrations due to uncontrollable variability in the welding process. A very good prediction correlation (r = 0.8387) was, however, established by using amperage and ventilation to predict concentrations of ozone. The potential utility of the model in field situations is discussed, and future research to improve the prediction accuracy is suggested.
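The multiple-regression approach described — ozone concentration as a function of amperage and ventilation — can be sketched by solving the normal equations for a two-predictor linear model. The field-trial numbers below are hypothetical, not the study's data.

```python
# Hypothetical trials: ozone (ppm) at combinations of welding amperage (A)
# and ventilation rate (m^3/min) -- invented, not the study's measurements.
amps = [150, 150, 200, 200, 250, 250, 300, 300]
vent = [10, 30, 10, 30, 10, 30, 10, 30]
ozone = [0.42, 0.20, 0.55, 0.28, 0.71, 0.35, 0.88, 0.44]

# Fit ozone ~ b0 + b1*amps + b2*vent by solving (X^T X) b = X^T y
X = [[1.0, a, v] for a, v in zip(amps, vent)]
XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
Xty = [sum(r[i] * y for r, y in zip(X, ozone)) for i in range(3)]

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, 3):
            f = M[r][c] / M[c][c]
            M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    x = [0.0] * 3
    for c in (2, 1, 0):
        x[c] = (M[c][3] - sum(M[c][k] * x[k] for k in range(c + 1, 3))) / M[c][c]
    return x

b0, b1, b2 = solve3(XtX, Xty)
print(b1 > 0, b2 < 0)  # more amperage -> more ozone; more ventilation -> less
```

The signs of the fitted coefficients encode the physical expectation the abstract relies on: ozone generation rises with arc current and falls with dilution ventilation.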

  7. Prediction of Warfarin Dose in Pediatric Patients: An Evaluation of the Predictive Performance of Several Models

    PubMed Central

    Marek, Elizabeth; Momper, Jeremiah D.; Hines, Ronald N.; Takao, Cheryl M.; Gill, Joan C.; Pravica, Vera; Gaedigk, Andrea; Neville, Kathleen A.

    2016-01-01

    OBJECTIVES: The objective of this study was to evaluate the performance of pediatric pharmacogenetic-based dose prediction models by using an independent cohort of pediatric patients from a multicenter trial. METHODS: Clinical and genetic data (CYP2C9 [cytochrome P450 2C9] and VKORC1 [vitamin K epoxide reductase]) were collected from pediatric patients aged 3 months to 17 years who were receiving warfarin as part of standard care at 3 separate clinical sites. The accuracy of 8 previously published pediatric pharmacogenetic-based dose models was evaluated in the validation cohort by comparing predicted maintenance doses to actual stable warfarin doses. The predictive ability was assessed by using the proportion of variance (R2), mean prediction error (MPE), and the percentage of predictions that fell within 20% of the actual maintenance dose. RESULTS: Thirty-two children reached a stable international normalized ratio and were included in the validation cohort. The pharmacogenetic-based warfarin dose models showed a proportion of variance ranging from 35% to 78% and an MPE ranging from −2.67 to 0.85 mg/day in the validation cohort. Overall, the model developed by Hamberg et al showed the best performance in the validation cohort (R2 = 78%; MPE = 0.15 mg/day) with 38% of the predictions falling within 20% of observed doses. CONCLUSIONS: Pharmacogenetic-based algorithms provide better predictions than a fixed-dose approach, although an optimal dose algorithm has not yet been developed. PMID:27453700
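The evaluation metrics used above — mean prediction error, proportion of variance, and the percentage of predictions within 20% of the actual stable dose — can be sketched directly. The doses below are invented; note that "proportion of variance" is computed here with one common R² convention for validation data, which may differ from a given paper's definition.

```python
def dose_metrics(predicted, actual):
    """MPE (mg/day), R^2 vs the mean of the actual doses, and the
    percentage of predictions within 20% of the actual dose."""
    n = len(actual)
    mpe = sum(p - a for p, a in zip(predicted, actual)) / n
    mean_a = sum(actual) / n
    ss_res = sum((p - a) ** 2 for p, a in zip(predicted, actual))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    r2 = 1 - ss_res / ss_tot
    within20 = 100.0 * sum(abs(p - a) <= 0.2 * a
                           for p, a in zip(predicted, actual)) / n
    return mpe, r2, within20

actual    = [1.0, 2.0, 3.0, 4.0, 5.0]   # stable warfarin doses, mg/day (invented)
predicted = [1.2, 1.8, 3.3, 4.4, 3.9]   # model outputs (invented)
mpe, r2, within20 = dose_metrics(predicted, actual)
print(round(mpe, 2), round(r2, 2), within20)
```

A near-zero MPE with a low within-20% percentage is possible (errors cancel), which is why the study reports all three measures together.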

  8. MJO prediction skill, predictability, and teleconnection impacts in the Beijing Climate Center Atmospheric General Circulation Model

    NASA Astrophysics Data System (ADS)

    Wu, Jie; Ren, Hong-Li; Zuo, Jinqing; Zhao, Chongbo; Chen, Lijuan; Li, Qiaoping

    2016-09-01

    This study evaluates performance of Madden-Julian oscillation (MJO) prediction in the Beijing Climate Center Atmospheric General Circulation Model (BCC_AGCM2.2). By using the real-time multivariate MJO (RMM) indices, it is shown that the MJO prediction skill of BCC_AGCM2.2 extends to about 16-17 days before the bivariate anomaly correlation coefficient drops to 0.5 and the root-mean-square error increases to the level of the climatological prediction. The prediction skill showed a seasonal dependence, with the highest skill occurring in boreal autumn, and a phase dependence with higher skill for predictions initiated from phases 2-4. The results of the MJO predictability analysis showed that the upper bounds of the prediction skill can be extended to 26 days by using a single-member estimate, and to 42 days by using the ensemble-mean estimate, which also exhibited an initial amplitude and phase dependence. The observed relationship between the MJO and the North Atlantic Oscillation was accurately reproduced by BCC_AGCM2.2 for most initial phases of the MJO, accompanied with the Rossby wave trains in the Northern Hemisphere extratropics driven by MJO convection forcing. Overall, BCC_AGCM2.2 displayed a significant ability to predict the MJO and its teleconnections without interacting with the ocean, which provided a useful tool for fully extracting the predictability source of subseasonal prediction.
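The bivariate anomaly correlation coefficient for the RMM indices is conventionally defined (e.g., in Lin et al., 2008) as a single correlation pooled over both components; a sketch under that assumed definition:

```python
import math

def bivariate_acc(obs, fcst):
    """obs, fcst: sequences of (RMM1, RMM2) pairs over verification days."""
    num = sum(o1 * f1 + o2 * f2 for (o1, o2), (f1, f2) in zip(obs, fcst))
    den = (math.sqrt(sum(o1 ** 2 + o2 ** 2 for o1, o2 in obs))
           * math.sqrt(sum(f1 ** 2 + f2 ** 2 for f1, f2 in fcst)))
    return num / den

# Perfect phase agreement gives 1; a uniform 90-degree phase error gives 0
obs = [(1.0, 0.0), (0.7, 0.7), (0.0, 1.0)]
rot = [(-y, x) for x, y in obs]   # rotate each MJO phase vector by 90 degrees
print(round(bivariate_acc(obs, obs), 2), round(bivariate_acc(obs, rot), 2))
```

Skill horizons like the 16-17 days quoted above are read off as the lead time at which this coefficient, computed across many hindcast start dates, drops to 0.5.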

  9. Evaluation of battery models for prediction of electric vehicle range

    NASA Technical Reports Server (NTRS)

    Frank, H. A.; Phillips, A. M.

    1977-01-01

    Three analytical models for predicting electric vehicle battery output and the corresponding electric vehicle range for various driving cycles were evaluated. The models were used to predict output and range, and then compared with experimentally determined values determined by laboratory tests on batteries using discharge cycles identical to those encountered by an actual electric vehicle while on SAE cycles. Results indicate that the modified Hoxie model gave the best predictions with an accuracy of about 97 to 98% in the best cases and 86% in the worst case. A computer program was written to perform the lengthy iterative calculations required. The program and hardware used to automatically discharge the battery are described.

  10. A color prediction model for imagery analysis

    NASA Technical Reports Server (NTRS)

    Skaley, J. E.; Fisher, J. R.; Hardy, E. E.

    1977-01-01

    A simple model has been devised to selectively construct several points within a scene using multispectral imagery. The model correlates black-and-white density values to color components of diazo film so as to maximize the color contrast of two or three points per composite. The CIE (Commission Internationale de l'Eclairage) color coordinate system is used as a quantitative reference to locate these points in color space. Superimposed on this quantitative reference is a perceptional framework which functionally contrasts color values in a psychophysical sense. This methodology permits a more quantitative approach to the manual interpretation of multispectral imagery while resulting in improved accuracy and lower costs.

  11. Predictive modeling of pedestal structure in KSTAR using EPED model

    SciTech Connect

    Han, Hyunsun; Kim, J. Y.; Kwon, Ohjin

    2013-10-15

    A predictive calculation is given for the structure of edge pedestal in the H-mode plasma of the KSTAR (Korea Superconducting Tokamak Advanced Research) device using the EPED model. Particularly, the dependence of pedestal width and height on various plasma parameters is studied in detail. The two codes, ELITE and HELENA, are utilized for the stability analysis of the peeling-ballooning and kinetic ballooning modes, respectively. Summarizing the main results, the pedestal slope and height have a strong dependence on plasma current, rapidly increasing with it, while the pedestal width is almost independent of it. The plasma density or collisionality gives initially a mild stabilization, increasing the pedestal slope and height, but above some threshold value its effect turns to a destabilization, reducing the pedestal width and height. Among several plasma shape parameters, the triangularity gives the most dominant effect, rapidly increasing the pedestal width and height, while the effect of elongation and squareness appears to be relatively weak. Implication of these edge results, particularly in relation to the global plasma performance, is discussed.

  12. A Predictive Model of High Shear Thrombus Growth.

    PubMed

    Mehrabadi, Marmar; Casa, Lauren D C; Aidun, Cyrus K; Ku, David N

    2016-08-01

    The ability to predict the timescale of thrombotic occlusion in stenotic vessels may improve patient risk assessment for thrombotic events. In blood contacting devices, thrombosis predictions can lead to improved designs to minimize thrombotic risks. We have developed and validated a model of high shear thrombosis based on empirical correlations between thrombus growth and shear rate. A mathematical model was developed to predict the growth of thrombus based on the hemodynamic shear rate. The model predicts thrombus deposition based on initial geometric and fluid mechanic conditions, which are updated throughout the simulation to reflect the changing lumen dimensions. The model was validated by comparing predictions against actual thrombus growth in six separate in vitro experiments: stenotic glass capillary tubes (diameter = 345 µm) at three shear rates, the PFA-100(®) system, two microfluidic channel dimensions (heights = 300 and 82 µm), and a stenotic aortic graft (diameter = 5.5 mm). Comparison of the predicted occlusion times to experimental results shows excellent agreement. The model is also applied to a clinical angiography image to illustrate the time course of thrombosis in a stenotic carotid artery after plaque cap rupture. Our model can accurately predict thrombotic occlusion time over a wide range of hemodynamic conditions.
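The update loop described — deposition driven by wall shear, with the lumen shrinking until occlusion — can be sketched as a toy simulation. The growth-rate law and constants below are invented stand-ins for the paper's empirical shear correlation, so only the qualitative behaviour is meaningful.

```python
import math

def occlusion_time(r0_um, Q_uL_s, k=1e-4, dt=1.0):
    """Toy thrombus-growth loop. Wall shear rate for Poiseuille flow is
    4Q/(pi r^3); deposition speed is assumed proportional to shear
    (k um per unit shear rate -- an invented constant, not the paper's
    empirical correlation). Returns seconds until the lumen closes."""
    r = r0_um
    Q = Q_uL_s * 1e9            # uL/s -> um^3/s
    t = 0.0
    while r > 0.05 * r0_um:     # treat 95% lumen loss as occlusion
        shear = 4.0 * Q / (math.pi * r ** 3)   # wall shear rate, 1/s
        r -= k * shear * dt                    # thrombus grows inward
        t += dt
        if t > 1e7:
            return float("inf")                # no occlusion on this timescale
    return t

t_high = occlusion_time(170.0, 0.2)    # ~345-um lumen, higher flow
t_low = occlusion_time(170.0, 0.05)    # lower flow -> lower shear -> slower
print(t_high < t_low)
```

The loop reproduces the model's essential feedback: as thrombus narrows the lumen, shear rises, growth accelerates, and occlusion time is set largely by the initial geometry and flow.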

  13. Ensemble Learning of QTL Models Improves Prediction of Complex Traits

    PubMed Central

    Bian, Yang; Holland, James B.

    2015-01-01

    Quantitative trait locus (QTL) models can provide useful insights into trait genetic architecture because of their straightforward interpretability but are less useful for genetic prediction because of the difficulty in including the effects of numerous small effect loci without overfitting. Tight linkage between markers introduces near collinearity among marker genotypes, complicating the detection of QTL and estimation of QTL effects in linkage mapping, and this problem is exacerbated by very high density linkage maps. Here we developed a thinning and aggregating (TAGGING) method as a new ensemble learning approach to QTL mapping. TAGGING reduces collinearity problems by thinning dense linkage maps, maintains aspects of marker selection that characterize standard QTL mapping, and by ensembling, incorporates information from many more marker-trait associations than traditional QTL mapping. The objective of TAGGING was to improve prediction power compared with QTL mapping while also providing more specific insights into genetic architecture than genome-wide prediction models. TAGGING was compared with standard QTL mapping using cross-validation of empirical data from the maize (Zea mays L.) nested association mapping population. TAGGING-assisted QTL mapping substantially improved prediction ability for both biparental and multifamily populations by reducing both the variance and bias in prediction. Furthermore, an ensemble model combining predictions from TAGGING-assisted QTL and infinitesimal models improved prediction abilities over the component models, indicating some complementarity between model assumptions and suggesting that some trait genetic architectures involve a mixture of a few major QTL and polygenic effects. PMID:26276383
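The thinning-and-aggregating idea can be illustrated on simulated data. The sketch below (synthetic markers and a crude correlation-based selection rule, not the authors' NAM analysis) thins a dense map at several offsets, fits a small marker model on each thinned map, and averages the predictions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated dense marker data: n individuals, p markers, three true QTL.
# Purely illustrative; not the maize NAM data set.
n, p = 200, 120
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[[10, 50, 90]] = [1.0, 0.8, 0.6]
y = X @ beta + rng.standard_normal(n)

X_tr, X_te, y_tr, y_te = X[:150], X[150:], y[:150], y[150:]

def thinned_fit_predict(offset, step=10, top=5):
    """Fit on one thinned marker map (every `step`-th marker, shifted by
    `offset`), keeping the `top` markers most correlated with the trait
    -- a crude stand-in for single-map QTL selection."""
    idx = np.arange(offset, p, step)
    corr = np.abs([np.corrcoef(X_tr[:, j], y_tr)[0, 1] for j in idx])
    keep = idx[np.argsort(corr)[-top:]]
    coef, *_ = np.linalg.lstsq(X_tr[:, keep], y_tr, rcond=None)
    return X_te[:, keep] @ coef

# Aggregating step: average predictions over all thinned maps, so the
# ensemble draws on many more marker-trait associations than any one map.
preds = np.mean([thinned_fit_predict(o) for o in range(10)], axis=0)
```

The ensemble keeps each component model sparse and interpretable (a handful of selected markers per map) while the averaging reduces prediction variance, which is the trade-off the paper exploits.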

  14. Ensemble Learning of QTL Models Improves Prediction of Complex Traits.

    PubMed

    Bian, Yang; Holland, James B

    2015-10-01

    Quantitative trait locus (QTL) models can provide useful insights into trait genetic architecture because of their straightforward interpretability but are less useful for genetic prediction because of the difficulty in including the effects of numerous small effect loci without overfitting. Tight linkage between markers introduces near collinearity among marker genotypes, complicating the detection of QTL and estimation of QTL effects in linkage mapping, and this problem is exacerbated by very high density linkage maps. Here we developed a thinning and aggregating (TAGGING) method as a new ensemble learning approach to QTL mapping. TAGGING reduces collinearity problems by thinning dense linkage maps, maintains aspects of marker selection that characterize standard QTL mapping, and by ensembling, incorporates information from many more marker-trait associations than traditional QTL mapping. The objective of TAGGING was to improve prediction power compared with QTL mapping while also providing more specific insights into genetic architecture than genome-wide prediction models. TAGGING was compared with standard QTL mapping using cross-validation of empirical data from the maize (Zea mays L.) nested association mapping population. TAGGING-assisted QTL mapping substantially improved prediction ability for both biparental and multifamily populations by reducing both the variance and bias in prediction. Furthermore, an ensemble model combining predictions from TAGGING-assisted QTL and infinitesimal models improved prediction abilities over the component models, indicating some complementarity between model assumptions and suggesting that some trait genetic architectures involve a mixture of a few major QTL and polygenic effects. PMID:26276383

  15. A multicentre Spanish study for multivariate prediction of perioperative in-hospital cerebrovascular accident after coronary bypass surgery: the PACK2 score†

    PubMed Central

    Hornero, Fernando; Martín, Elio; Rodríguez, Rafael; Castellà, Manel; Porras, Carlos; Romero, Bernat; Maroto, Luis; Pérez De La Sota, Enrique

    2013-01-01

    OBJECTIVES To develop a multivariate predictive risk score of perioperative in-hospital stroke after coronary artery bypass grafting (CABG) surgery. METHOD A total of 26 347 patients were enrolled from 21 Spanish hospital databases. Logistic regression analysis was used to predict the risk of perioperative stroke (ictus or transient ischaemic attack). The predictive scale was developed from a training set of data and validated by an independent test set, both selected randomly. The assessment of the accuracy of prediction was related to the area under the ROC curve. The variables considered were: preoperative (age, gender, diabetes mellitus, arterial hypertension, previous stroke, cardiac failure and/or left ventricular ejection fraction <40%, non-elective priority of surgery, extracardiac arteriopathy, chronic kidney failure and/or creatininemia ≥2 mg/dl and atrial fibrillation) and intraoperative (on/off-pump). RESULTS Global perioperative stroke incidence was 1.38%. Non-elective priority of surgery (priority; OR = 2.32), vascular disease (arteriopathy; OR = 1.37), cardiac failure (cardiac; OR = 3.64) and chronic kidney failure (kidney; OR = 6.78) were found to be independent risk factors for perioperative stroke in uni- and multivariate models in the training set of data; P < 0.0001; AUC = 0.77, 95% CI 0.73–0.82. The PACK2 stroke CABG score was established with 1 point for each item, except for chronic kidney failure with 2 points (range 0–5 points); AUC = 0.76, 95% CI 0.72–0.80. In patients with PACK2 score ≥2 points, off-pump reduced perioperative stroke incidence by 2.3% when compared with on-pump CABG. CONCLUSIONS The PACK2 risk scale shows good predictive accuracy in the data analysed and could be useful in clinical practice for decision making and patient selection. PMID:23628652
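The scoring rule itself is simple enough to state as code; the sketch below implements the point assignment exactly as the abstract describes it (one point each for non-elective priority, arteriopathy and cardiac failure, two points for chronic kidney failure):

```python
def pack2_score(priority, arteriopathy, cardiac, kidney):
    """PACK2 score as described in the abstract: 1 point each for
    non-elective priority of surgery, extracardiac arteriopathy and
    cardiac failure, and 2 points for chronic kidney failure.
    Inputs are booleans; the result ranges from 0 to 5."""
    return int(priority) + int(arteriopathy) + int(cardiac) + 2 * int(kidney)

# A patient with urgent surgery and chronic kidney failure scores 3,
# placing them in the >=2 group where off-pump CABG reduced stroke risk.
score = pack2_score(priority=True, arteriopathy=False,
                    cardiac=False, kidney=True)
```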

  16. Test Data for USEPR Severe Accident Code Validation

    SciTech Connect

    J. L. Rempe

    2007-05-01

    This document identifies data that can be used for assessing various models embodied in severe accident analysis codes. Phenomena considered in this document, which were limited to those anticipated to be of interest in assessing severe accidents in the USEPR developed by AREVA, include:
    • Fuel Heatup and Melt Progression
    • Reactor Coolant System (RCS) Thermal Hydraulics
    • In-Vessel Molten Pool Formation and Heat Transfer
    • Fuel/Coolant Interactions during Relocation
    • Debris Heat Loads to the Vessel
    • Vessel Failure
    • Molten Core Concrete Interaction (MCCI) and Reactor Cavity Plug Failure
    • Melt Spreading and Coolability
    • Hydrogen Control
    Each section of this report discusses one phenomenon of interest to the USEPR. Within each section, an effort is made to describe the phenomenon and to identify what data are available for modeling it. As noted in this document, models in US accident analysis codes (MAAP, MELCOR, and SCDAP/RELAP5) differ. Where possible, this report identifies previous assessments that illustrate the impact of modeling differences on predicting various phenomena. Finally, recommendations regarding the status of data available for modeling USEPR severe accident phenomena are summarized.

  17. A general approach to critical infrastructure accident consequences analysis

    NASA Astrophysics Data System (ADS)

    Bogalecka, Magda; Kołowrocki, Krzysztof; Soszyńska-Budny, Joanna

    2016-06-01

    A general probabilistic model of critical infrastructure accident consequences is presented, comprising models of the process of initiating events generated by the infrastructure's accident, the process of environment threats, and the process of environment degradation.

  18. Predicting Market Impact Costs Using Nonparametric Machine Learning Models.

    PubMed

    Park, Saerom; Lee, Jaewook; Son, Youngdoo

    2016-01-01

    Market impact cost is the most significant portion of implicit transaction costs; reducing it can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural networks, Gaussian processes, and support vector regression, to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data for the US stock market from the Bloomberg Terminal and generated three independent input variables. As a result, most of the nonparametric machine learning models outperformed a state-of-the-art parametric benchmark, the I-star model, on four error measures. Although these models have certain difficulties in directly separating the permanent and temporary cost components, nonparametric machine learning models can be good alternatives for reducing transaction costs through considerably improved prediction performance.

  19. Predicting Market Impact Costs Using Nonparametric Machine Learning Models.

    PubMed

    Park, Saerom; Lee, Jaewook; Son, Youngdoo

    2016-01-01

    Market impact cost is the most significant portion of implicit transaction costs; reducing it can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural networks, Gaussian processes, and support vector regression, to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data for the US stock market from the Bloomberg Terminal and generated three independent input variables. As a result, most of the nonparametric machine learning models outperformed a state-of-the-art parametric benchmark, the I-star model, on four error measures. Although these models have certain difficulties in directly separating the permanent and temporary cost components, nonparametric machine learning models can be good alternatives for reducing transaction costs through considerably improved prediction performance. PMID:26926235
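The parametric-versus-nonparametric comparison can be sketched on synthetic data. The sketch below uses Nadaraya-Watson kernel regression as a generic stand-in for the paper's GP/SVR/neural-network regressors, a straight-line fit as a crude stand-in for the I-star benchmark, and a stylized square-root impact curve rather than the Bloomberg data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for market impact data: a nonlinear (square-root)
# relation between normalized trade size x and impact cost y.
x = rng.uniform(0.01, 1.0, 300)
y = 0.5 * np.sqrt(x) + 0.02 * rng.standard_normal(300)
x_tr, x_te, y_tr, y_te = x[:200], x[200:], y[:200], y[200:]

def kernel_predict(x_new, h=0.05):
    """Nadaraya-Watson kernel regression: a basic nonparametric model
    with no assumed functional form for the impact curve."""
    w = np.exp(-0.5 * ((x_new[:, None] - x_tr[None, :]) / h) ** 2)
    return (w @ y_tr) / w.sum(axis=1)

# Parametric benchmark: a fixed functional form (here, a line).
a, b = np.polyfit(x_tr, y_tr, 1)

def mse(pred):
    return float(np.mean((pred - y_te) ** 2))

mse_np = mse(kernel_predict(x_te))   # nonparametric model
mse_lin = mse(a * x_te + b)          # parametric benchmark
```

The point of the comparison is structural: the nonparametric model adapts its shape to the data, while the parametric benchmark is committed to its functional form.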

  20. Predicting Market Impact Costs Using Nonparametric Machine Learning Models

    PubMed Central

    Park, Saerom; Lee, Jaewook; Son, Youngdoo

    2016-01-01

    Market impact cost is the most significant portion of implicit transaction costs; reducing it can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural networks, Gaussian processes, and support vector regression, to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data for the US stock market from the Bloomberg Terminal and generated three independent input variables. As a result, most of the nonparametric machine learning models outperformed a state-of-the-art parametric benchmark, the I-star model, on four error measures. Although these models have certain difficulties in directly separating the permanent and temporary cost components, nonparametric machine learning models can be good alternatives for reducing transaction costs through considerably improved prediction performance. PMID:26926235

  1. A burnout prediction model based around char morphology

    SciTech Connect

    Tao Wu; Edward Lester; Michael Cloke

    2006-05-15

    Several combustion models have been developed that can make predictions about coal burnout and burnout potential. Most of these kinetic models require standard parameters such as volatile content and particle size to make a burnout prediction. This article presents a new model called the char burnout (ChB) model, which also uses detailed information about char morphology in its prediction. The input data to the model is based on information derived from two different image analysis techniques. One technique generates characterization data from real char samples, and the other predicts char types based on characterization data from image analysis of coal particles. The pyrolyzed chars in this study were created in a drop tube furnace operating at 1300°C, 200 ms, and 1% oxygen. Modeling results were compared with a different carbon burnout kinetic model as well as the actual burnout data from refiring the same chars in a drop tube furnace operating at 1300°C, 5% oxygen, and residence times of 200, 400, and 600 ms. The good agreement between the ChB model and experimental data indicates that the inclusion of char morphology in combustion models could well improve model predictions. 38 refs., 5 figs., 6 tabs.

  2. Evaluation of prediction intervals for expressing uncertainties in groundwater flow model predictions

    USGS Publications Warehouse

    Christensen, S.; Cooley, R.L.

    1999-01-01

    We tested the accuracy of 95% individual prediction intervals for hydraulic heads, streamflow gains, and effective transmissivities computed by groundwater models of two Danish aquifers. To compute the intervals, we assumed that each predicted value can be written as the sum of a computed dependent variable and a random error. Testing was accomplished by using a cross-validation method and by using new field measurements of hydraulic heads and transmissivities that were not used to develop or calibrate the models. The tested null hypotheses are that the coverage probability of the prediction intervals is not significantly smaller than the assumed probability (95%) and that each tail probability is not significantly different from the assumed probability (2.5%). In all cases tested, these hypotheses were accepted at the 5% level of significance. We therefore conclude that for the groundwater models of two real aquifers the individual prediction intervals appear to be accurate.
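The interval construction and the coverage check can be illustrated with a small Monte-Carlo sketch. A generic linear model with a normal-quantile interval is assumed here purely for illustration; the paper's groundwater models are far more involved:

```python
import numpy as np

rng = np.random.default_rng(2)

def coverage(n=50, trials=2000, sigma=1.0):
    """Empirical coverage of 95% individual prediction intervals for a
    toy linear model y = 2 + 3x + e. Each predicted value is treated as
    a fitted mean plus a random error, so the interval half-width adds
    the noise variance to the variance of the fitted mean (a prediction
    interval, not a confidence interval)."""
    x = np.linspace(0.0, 1.0, n)
    X = np.column_stack([np.ones(n), x])
    xi = np.array([1.0, 0.5])                  # new observation at x = 0.5
    xtx_inv = np.linalg.inv(X.T @ X)
    hits = 0
    for _ in range(trials):
        y = 2 + 3 * x + sigma * rng.standard_normal(n)
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        resid = y - X @ beta
        s2 = resid @ resid / (n - 2)           # unbiased noise variance
        half = 1.96 * np.sqrt(s2 * (1 + xi @ xtx_inv @ xi))
        y_new = 2 + 3 * 0.5 + sigma * rng.standard_normal()
        hits += abs(xi @ beta - y_new) <= half
    return hits / trials

cov = coverage()   # should land near the nominal 0.95
```

Comparing the empirical coverage against the nominal 95% is exactly the kind of hypothesis test the abstract describes, here in simulation rather than against field measurements.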

  3. Coupled Air-Sea Observations and Modeling for Better Understanding Tropical Cyclone Prediction and Predictability

    NASA Astrophysics Data System (ADS)

    Chen, S. S.

    2014-12-01

    A systematic observational and modeling study is conducted to better understand the physical processes controlling air-sea interaction and their impact on tropical cyclone (TC) prediction and predictability using a fully coupled atmosphere-wave-ocean modeling system developed at the University of Miami and observations from field campaigns. We have developed a unified air-sea interface module that couples multiple atmosphere, wave, and ocean models using the Earth System Modeling Framework (ESMF). It is a physically based and computationally efficient coupling system that is flexible to use in a multi-model system and portable for transition to the next generation research and operational coupled atmosphere-wave-ocean-land models. This standardized coupling framework allows researchers to develop and test air-sea coupling parameterizations and coupled data assimilation, and to better facilitate research-to-operation activities. It also allows for ensemble forecasts that can be used for coupled atmosphere-ocean data assimilation and assessment of uncertainties in coupled model predictions. The coupled modeling system has been evaluated using the coupled air-sea observations (e.g., GPS dropsondes and AXBTs, ocean drifters and floats) collected in recent field campaigns in the Gulf of Mexico and TCs in the Atlantic and Pacific basins. This talk will provide 1) an overview of the unified air-sea interface model, 2) fully coupled atmosphere-wave-ocean model predictions of TCs and evaluation with coupled air-sea observations, and 3) results from high-resolution (1.3 km grid resolution) ensemble experiments using a stochastic kinetic energy backscatter (SKEB) perturbation method to assess the predictability and uncertainty in TC predictions.

  4. A cross-scale numerical modeling system for management support of oil spill accidents.

    PubMed

    Azevedo, Alberto; Oliveira, Anabela; Fortunato, André B; Zhang, Joseph; Baptista, António M

    2014-03-15

    A flexible 2D/3D oil spill modeling system addressing the distinct nature of the surface and water column fluids, major oil weathering and improved retention/reposition processes in coastal zones is presented. The system integrates hydrodynamic, transport and oil weathering modules, which can be combined to offer different-complexity descriptions as required by applications across the river-to-ocean continuum. Features include accounting for different composition and rheology in the surface and water column mixtures, as well as spreading, evaporation, water-in-oil emulsification, shoreline retention, dispersion and dissolution. The use of unstructured grids provides flexibility and efficiency in handling spills in complex geometries and across scales. The use of high-order Eulerian-Lagrangian methods allows for computational efficiency and for handling key processes in ways consistent with their distinct mathematical nature and time scales. The modeling system is tested through a suite of synthetic, laboratory and realistic-domain benchmarks, which demonstrate robust handling of key processes and of 2D/3D couplings. The application of the modeling system to a spill scenario at the entrance of a port in a coastal lagoon illustrates the power of the approach to represent spills that occur in coastal regions with complex boundaries and bathymetry.

  5. Computer Model for Prediction of PCB Dechlorination and Biodegradation Endpoints

    SciTech Connect

    Just, E.M.; Klasson, T.

    1999-04-19

    Mathematical modeling of polychlorinated biphenyl (PCB) transformation served as a means of predicting possible endpoints of bioremediation, thus allowing evaluation of several of the most common transformation patterns. Correlation between laboratory-observed and predicted endpoint data was, in some cases, as good as 0.98 (perfect correlation = 1.0).

  6. A Model for Prediction of Heat Stability of Photosynthetic Membranes

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A previous study has revealed a positive correlation between heat-induced damage to photosynthetic membranes (thylakoid membranes) and chlorophyll loss. In this study, we exploited this correlation and developed a model for prediction of thermal damage to thylakoids. Prediction is based on estimat...

  7. Reconnection in NIMROD: Model, Predictions, Remedies

    SciTech Connect

    Fowler, T K; Bulmer, R H; Cohen, B I; Hau, D D

    2003-06-25

    It is shown that in NIMROD the formation of closed current configurations, occurring only after the voltage is turned off, is due to the faster resistive decay of nonsymmetric modes compared to the symmetric projection of the 3D steady state achieved by gun injection. Implementing Spitzer resistivity is required to make a definitive comparison with experiment, using two experimental signatures of the model discussed in the paper. If there are serious disagreements, it is suggested that a phenomenological hyper-resistivity be added to the n = 0 component of Ohm's law, similar to hyper-resistive Corsica models that appear to fit experiments. Hyper-resistivity might capture physics at small scale missed by NIMROD. Encouraging results would motivate coupling NIMROD to SPICE with edge physics inspired by UEDGE, as a tool for experimental data analysis.

  8. Droplet-model predictions of charge moments

    SciTech Connect

    Myers, W.D.

    1982-04-01

    The Droplet Model expressions for calculating various moments of the nuclear charge distribution are given. There are contributions to the moments from the size and shape of the system, from the internal redistribution induced by the Coulomb repulsion, and from the diffuseness of the surface. A case is made for the use of diffuse charge distributions generated by convolution as an alternative to Fermi-functions.

  9. Drinking, Driving, and Crashing: A Traffic-Flow Model of Alcohol-Related Motor Vehicle Accidents*

    PubMed Central

    Gruenewald, Paul J.; Johnson, Fred W.

    2010-01-01

    Objective: This study examined the influence of on-premise alcohol-outlet densities and of drinking-driver densities on rates of alcohol-related motor vehicle crashes. A traffic-flow model is developed to represent geographic relationships between residential locations of drinking drivers, alcohol outlets, and alcohol-related motor vehicle crashes. Method: Cross-sectional and time-series cross-sectional spatial analyses were performed using data collected from 144 geographic units over 4 years. Data were obtained from archival and survey sources in six communities. Archival data were obtained within community areas and measured activities of either the resident population or persons visiting these communities. These data included local and highway traffic flow, locations of alcohol outlets, population density, network density of the local roadway system, and single-vehicle nighttime (SVN) crashes. Telephone-survey data obtained from residents of the communities were used to estimate the size of the resident drinking and driving population. Results: Cross-sectional analyses showed that effects relating on-premise densities to alcohol-related crashes were moderated by highway traffic flow. Depending on levels of highway traffic flow, 10% greater densities were related to 0% to 150% greater rates of SVN crashes. Time-series cross-sectional analyses showed that changes in the population pool of drinking drivers and on-premise densities interacted to increase SVN crash rates. Conclusions: A simple traffic-flow model can assess the effects of on-premise alcohol-outlet densities and of drinking-driver densities as they vary across communities to produce alcohol-related crashes. Analyses based on these models can usefully guide policy decisions on the siting of on-premise alcohol outlets. PMID:20230721

  10. Testing the Predictions of the Central Capacity Sharing Model

    ERIC Educational Resources Information Center

    Tombu, Michael; Jolicoeur, Pierre

    2005-01-01

    The divergent predictions of 2 models of dual-task performance are investigated. The central bottleneck and central capacity sharing models argue that a central stage of information processing is capacity limited, whereas stages before and after are capacity free. The models disagree about the nature of this central capacity limitation. The…

  11. Systematic strategies for the third industrial accident prevention plan in Korea.

    PubMed

    Kang, Young-sig; Yang, Sung-hwan; Kim, Tae-gu; Kim, Day-sung

    2012-01-01

    To minimize industrial accidents, it is critical to evaluate a firm's priorities among prevention factors and strategies, since such evaluation provides decisive information for preventing industrial accidents and maintaining safety management. Therefore, this paper proposes the evaluation of priorities through statistical testing of prevention factors with a cause analysis in a cause-and-effect model. A priority matrix criterion is proposed to apply the ranking and to lend objectivity to the questionnaire results. This paper used the regression method (RA), exponential smoothing method (ESM), double exponential smoothing method (DESM), autoregressive integrated moving average (ARIMA) model, and a proposed analytical function method (PAFM) to analyze trends in accident data, leading to accurate predictions. This paper standardized the questionnaire results of workers and managers in manufacturing and construction companies with fewer than 300 employees, located in the central Korean metropolitan areas where fatal accidents have occurred. Finally, a strategy is provided to construct safety management for the third industrial accident prevention plan, together with a forecasting method for occupational accident rates and fatality rates for occupational accidents per 10,000 people.

  12. Systematic strategies for the third industrial accident prevention plan in Korea.

    PubMed

    Kang, Young-sig; Yang, Sung-hwan; Kim, Tae-gu; Kim, Day-sung

    2012-01-01

    To minimize industrial accidents, it is critical to evaluate a firm's priorities among prevention factors and strategies, since such evaluation provides decisive information for preventing industrial accidents and maintaining safety management. Therefore, this paper proposes the evaluation of priorities through statistical testing of prevention factors with a cause analysis in a cause-and-effect model. A priority matrix criterion is proposed to apply the ranking and to lend objectivity to the questionnaire results. This paper used the regression method (RA), exponential smoothing method (ESM), double exponential smoothing method (DESM), autoregressive integrated moving average (ARIMA) model, and a proposed analytical function method (PAFM) to analyze trends in accident data, leading to accurate predictions. This paper standardized the questionnaire results of workers and managers in manufacturing and construction companies with fewer than 300 employees, located in the central Korean metropolitan areas where fatal accidents have occurred. Finally, a strategy is provided to construct safety management for the third industrial accident prevention plan, together with a forecasting method for occupational accident rates and fatality rates for occupational accidents per 10,000 people. PMID:23047079
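Of the trend methods the abstract lists, double exponential smoothing (DESM) is the easiest to sketch. The smoothing parameters and the accident-rate series below are illustrative, not values fitted to the Korean data:

```python
def double_exponential_smoothing(series, alpha=0.5, beta=0.3, horizon=3):
    """Holt's double exponential smoothing: maintain a level and a
    trend component, then extrapolate a straight line `horizon` steps
    ahead. A stand-in for the DESM of the abstract."""
    level = series[0]
    trend = series[1] - series[0]
    for x in series[1:]:
        prev = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev) + (1 - beta) * trend
    return [level + (h + 1) * trend for h in range(horizon)]

# A declining accident-rate series: forecasts continue the downtrend.
rates = [1.10, 1.02, 0.95, 0.90, 0.84, 0.80]
fc = double_exponential_smoothing(rates)
```

Because DESM carries an explicit trend term, it extrapolates a sustained decline, which simple exponential smoothing (ESM, level only) would not.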

  13. Downscaling surface wind predictions from numerical weather prediction models in complex terrain with WindNinja

    NASA Astrophysics Data System (ADS)

    Wagenbrenner, Natalie S.; Forthofer, Jason M.; Lamb, Brian K.; Shannon, Kyle S.; Butler, Bret W.

    2016-04-01

    Wind predictions in complex terrain are important for a number of applications. Dynamic downscaling of numerical weather prediction (NWP) model winds with a high-resolution wind model is one way to obtain a wind forecast that accounts for local terrain effects, such as wind speed-up over ridges, flow channeling in valleys, flow separation around terrain obstacles, and flows induced by local surface heating and cooling. In this paper we investigate the ability of a mass-consistent wind model to downscale near-surface wind predictions from four NWP models in complex terrain. Model predictions are compared with surface observations from a tall, isolated mountain. Downscaling improved near-surface wind forecasts under high-wind (near-neutral atmospheric stability) conditions. Results were mixed during upslope and downslope (non-neutral atmospheric stability) flow periods, although wind direction predictions generally improved with downscaling. This work constitutes an evaluation of a diagnostic wind model at unprecedentedly high spatial resolution, in terrain with topographical ruggedness approaching that of typical landscapes in the western US susceptible to wildland fire.

  14. Using Pareto points for model identification in predictive toxicology

    PubMed Central

    2013-01-01

    Predictive toxicology is concerned with the development of models that are able to predict the toxicity of chemicals. A reliable prediction of toxic effects of chemicals in living systems is highly desirable in cosmetics, drug design or food protection to speed up the process of chemical compound discovery while reducing the need for lab tests. There is an extensive literature associated with the best practice of model generation and data integration but management and automated identification of relevant models from available collections of models is still an open problem. Currently, the decision on which model should be used for a new chemical compound is left to users. This paper intends to initiate the discussion on automated model identification. We present an algorithm, based on Pareto optimality, which mines model collections and identifies a model that offers a reliable prediction for a new chemical compound. The performance of this new approach is verified for two endpoints: IGC50 and LogP. The results show a great potential for automated model identification methods in predictive toxicology. PMID:23517649
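The Pareto-optimality step can be sketched minimally. Two illustrative objectives are assumed here (prediction error and some distance-to-training-data measure, both minimized); the abstract does not spell out the algorithm's actual criteria:

```python
def pareto_front(models):
    """Return names of models that are not dominated on
    (error, distance) -- lower is better in both. A model is dominated
    if another model is at least as good in both objectives and
    strictly better in one."""
    front = []
    for name, e, d in models:
        dominated = any(e2 <= e and d2 <= d and (e2 < e or d2 < d)
                        for _, e2, d2 in models)
        if not dominated:
            front.append(name)
    return front

# Hypothetical model collection: (name, prediction error, distance
# of the new compound to the model's training domain).
candidates = [("m1", 0.10, 0.8), ("m2", 0.20, 0.2),
              ("m3", 0.15, 0.9), ("m4", 0.25, 0.3)]
best = pareto_front(candidates)   # m3 and m4 are dominated
```

Mining a collection then reduces to computing this front and picking among the non-dominated models, rather than leaving the choice entirely to the user.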

  15. A Multistep Chaotic Model for Municipal Solid Waste Generation Prediction.

    PubMed

    Song, Jingwei; He, Jiaying

    2014-08-01

    In this study, a univariate local chaotic model is proposed to make one-step and multistep forecasts for daily municipal solid waste (MSW) generation in Seattle, Washington. For MSW generation prediction with long history data, this forecasting model was created based on a nonlinear dynamic method called phase-space reconstruction. Compared with other nonlinear predictive models, such as artificial neural network (ANN) and partial least square-support vector machine (PLS-SVM), and a commonly used linear seasonal autoregressive integrated moving average (sARIMA) model, this method demonstrated better prediction accuracy from 1-step to 14-step-ahead predictions, as assessed by both mean absolute percentage error (MAPE) and root mean square error (RMSE). Maximum error, MAPE, and RMSE show that the chaotic model was more reliable than the other three models. As chaotic models involve no stochastic component, their performance does not vary across trials, whereas ANN and PLS-SVM make different forecasts in each trial. Moreover, the chaotic model was less time consuming than the ANN and PLS-SVM models.
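The core of such a local chaotic model — phase-space reconstruction followed by nearest-neighbour forecasting — can be sketched as follows. The embedding dimension, delay, and the logistic-map stand-in for the MSW series are illustrative choices, not the paper's settings:

```python
import numpy as np

def embed(series, dim=3, tau=1):
    """Time-delay (phase-space) embedding: row t is the state vector
    [x_t, x_{t+tau}, ..., x_{t+(dim-1)*tau}]."""
    n = len(series) - (dim - 1) * tau
    return np.array([series[t:t + (dim - 1) * tau + 1:tau]
                     for t in range(n)])

def local_predict(series, steps=3, dim=3, tau=1):
    """Iterated one-step forecasts: locate the nearest neighbour of the
    current state in the reconstructed phase space and follow its
    successor. A bare-bones local predictor, far simpler than the
    paper's model."""
    s = list(series)
    for _ in range(steps):
        vecs = embed(np.array(s), dim, tau)
        dists = np.linalg.norm(vecs[:-1] - vecs[-1], axis=1)
        nn = int(np.argmin(dists))
        s.append(s[nn + (dim - 1) * tau + 1])  # value after the neighbour
    return s[len(series):]

# Deterministic toy series (logistic map) standing in for daily MSW data.
x = [0.4]
for _ in range(199):
    x.append(3.9 * x[-1] * (1 - x[-1]))
forecast = local_predict(x, steps=3)
```

Because every step of this procedure is deterministic, repeated runs give identical forecasts, which is the reproducibility property the abstract contrasts with ANN and PLS-SVM.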

  16. Using Pareto points for model identification in predictive toxicology.

    PubMed

    Palczewska, Anna; Neagu, Daniel; Ridley, Mick

    2013-03-22

    Predictive toxicology is concerned with the development of models that are able to predict the toxicity of chemicals. A reliable prediction of toxic effects of chemicals in living systems is highly desirable in cosmetics, drug design or food protection to speed up the process of chemical compound discovery while reducing the need for lab tests. There is an extensive literature associated with the best practice of model generation and data integration but management and automated identification of relevant models from available collections of models is still an open problem. Currently, the decision on which model should be used for a new chemical compound is left to users. This paper intends to initiate the discussion on automated model identification. We present an algorithm, based on Pareto optimality, which mines model collections and identifies a model that offers a reliable prediction for a new chemical compound. The performance of this new approach is verified for two endpoints: IGC50 and LogP. The results show a great potential for automated model identification methods in predictive toxicology.

  17. New Model Predicts Fire Activity in South America

    NASA Video Gallery

    UC Irvine scientist Jim Randerson discusses a new model that is able to predict fire activity in South America using sea surface temperature observations of the Pacific and Atlantic Ocean. The find...

  18. Prospective evaluation of a Bayesian model to predict organizational change.

    PubMed

    Molfenter, Todd; Gustafson, Dave; Kilo, Chuck; Bhattacharya, Abhik; Olsson, Jesper

    2005-01-01

    This research examines a subjective Bayesian model's ability to predict organizational change outcomes and sustainability of those outcomes for project teams participating in a multi-organizational improvement collaborative. PMID:16093893

  19. Submission Form for Peer-Reviewed Cancer Risk Prediction Models

    Cancer.gov

    If you have information about a peer-reviewed cancer risk prediction model that you would like to be considered for inclusion on this list, submit as much information as possible through the form on this page.

  20. Predicting recurrence after unprovoked venous thromboembolism: prospective validation of the updated Vienna Prediction Model.

    PubMed

    Tritschler, Tobias; Méan, Marie; Limacher, Andreas; Rodondi, Nicolas; Aujesky, Drahomir

    2015-10-15

    The updated Vienna Prediction Model for estimating recurrence risk after an unprovoked venous thromboembolism (VTE) has been developed to identify individuals at low risk for VTE recurrence in whom anticoagulation (AC) therapy may be stopped after 3 months. We externally validated the accuracy of the model to predict recurrent VTE in a prospective multicenter cohort of 156 patients aged ≥65 years with acute symptomatic unprovoked VTE who had received 3 to 12 months of AC. Patients with a predicted 12-month risk within the lowest quartile based on the updated Vienna Prediction Model were classified as low risk. The risk of recurrent VTE did not differ between low- vs higher-risk patients at 12 months (13% vs 10%; P = .77) and 24 months (15% vs 17%; P = 1.0). The area under the receiver operating characteristic curve for predicting VTE recurrence was 0.39 (95% confidence interval [CI], 0.25-0.52) at 12 months and 0.43 (95% CI, 0.31-0.54) at 24 months. In conclusion, in elderly patients with unprovoked VTE who have stopped AC, the updated Vienna Prediction Model does not discriminate between patients who develop recurrent VTE and those who do not. This study was registered at www.clinicaltrials.gov as #NCT00973596.
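The discrimination statistic reported above (an AUC below 0.5 indicates worse-than-chance ranking) can in principle be computed with the rank-sum formulation of the area under the ROC curve. The scores below are made-up illustrations, not patient data:

```python
def roc_auc(scores_events, scores_nonevents):
    """Area under the ROC curve via the Mann-Whitney statistic: the probability
    that a randomly chosen recurrence case is ranked above a randomly chosen
    non-recurrence case (0.5 = no discrimination; ties count half)."""
    pairs = len(scores_events) * len(scores_nonevents)
    wins = sum(1.0 if e > n else 0.5 if e == n else 0.0
               for e in scores_events for n in scores_nonevents)
    return wins / pairs
```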

  1. Application of chaotic prediction model based on wavelet transform on water quality prediction

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Zou, Z. H.; Zhao, Y. F.

    2016-08-01

    Dissolved oxygen (DO) is closely related to water self-purification capacity. In order to better forecast its concentration, the chaotic prediction model, based on the wavelet transform, is proposed and applied to a certain monitoring section of the Mentougou area of the Haihe River Basin. The result is compared with the simple application of the chaotic prediction model. The study indicates that the new model aligns better with the real data and has a higher accuracy. Therefore, it will provide significant decision support for water protection and water environment treatment.

  2. Updated verification of the Space Weather Prediction Center's solar energetic particle prediction model

    NASA Astrophysics Data System (ADS)

    Balch, Christopher C.

    2008-01-01

    This paper evaluates the performance of an operational proton prediction model currently used at NOAA's Space Weather Prediction Center. The evaluation is based on proton events that occurred between 1986 and 2004. Parameters for the associated solar events determine a set of necessary conditions, which are used to construct a set of control events. Model output is calculated for these events, and performance is evaluated using standard verification measures. For probability forecasts we evaluate accuracy, reliability, and resolution, and display these results in a standard attributes diagram. We identify conditions for which the model is systematically inaccurate. The probability forecasts are also evaluated with categorical forecast performance measures: we identify an optimal probability threshold and calculate the false alarm rate and probability of detection at that threshold. We also show results for peak flux and rise time predictions. These findings provide an objective basis for measuring future improvements.
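The verification measures named above have standard forms. A minimal sketch, using the false-alarm-ratio convention and an illustrative contingency table (the counts are not from the study):

```python
def categorical_scores(hits, misses, false_alarms):
    """Probability of detection and false alarm ratio from a 2x2 contingency
    table of event/no-event forecasts against observations."""
    pod = hits / (hits + misses)                 # observed events that were forecast
    far = false_alarms / (hits + false_alarms)   # "yes" forecasts that did not verify
    return pod, far

def brier_score(forecast_probs, outcomes):
    """Accuracy of probability forecasts: mean squared difference between the
    forecast probability and the 0/1 outcome (lower is better)."""
    return sum((p - o) ** 2
               for p, o in zip(forecast_probs, outcomes)) / len(forecast_probs)
```

Sweeping the probability threshold and recomputing POD and FAR at each value is one way to locate the "optimal probability" mentioned in the abstract.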

  3. Developing a predictive tropospheric ozone model for Tabriz

    NASA Astrophysics Data System (ADS)

    Khatibi, Rahman; Naghipour, Leila; Ghorbani, Mohammad A.; Smith, Michael S.; Karimi, Vahid; Farhoudi, Reza; Delafrouz, Hadi; Arvanaghi, Hadi

    2013-04-01

    Predictive ozone models are becoming indispensable tools by providing a capability for pollution alerts to serve people who are vulnerable to the risks. We have developed a tropospheric ozone prediction capability for Tabriz, Iran, using five modeling strategies: three regression-type methods: Multiple Linear Regression (MLR), Artificial Neural Networks (ANNs), and Gene Expression Programming (GEP); and two auto-regression-type models: Nonlinear Local Prediction (NLP), which implements chaos theory, and Auto-Regressive Integrated Moving Average (ARIMA) models. The regression-type strategies explain the data in terms of temperature, solar radiation, dew point temperature, and wind speed; the auto-regression-type models regress present ozone values on their past values. The ozone time series are available at various time intervals, including hourly intervals, from August 2010 to March 2011. The results for the MLR, ANN, and GEP models are not especially good, but those produced by NLP and ARIMA are promising for establishing a forecasting capability.

  4. Prediction of cloud droplet number in a general circulation model

    SciTech Connect

    Ghan, S.J.; Leung, L.R.

    1996-04-01

    We have applied the Colorado State University Regional Atmospheric Modeling System (RAMS) bulk cloud microphysics parameterization to the treatment of stratiform clouds in the National Center for Atmospheric Research Community Climate Model (CCM2). The RAMS scheme predicts mass concentrations of cloud water, cloud ice, rain, and snow, and the number concentration of ice. We have introduced the droplet number conservation equation to predict droplet number and its dependence on aerosols.

  5. Launch ascent guidance by discrete multi-model predictive control

    NASA Astrophysics Data System (ADS)

    Vachon, Alexandre; Desbiens, André; Gagnon, Eric; Bérard, Caroline

    2014-02-01

    This paper studies the application of discrete multi-model predictive control as a trajectory tracking guidance law for a space launcher. Two different algorithms are developed, each based on a different representation of launcher translation dynamics. These representations are based on an interpolation of the linear approximation of nonlinear pseudo-five-degrees-of-freedom equations of translation around an elliptical Earth. The interpolation gives a linear time-varying representation and a linear-fractional representation, which are used as the predictive models of the multi-model predictive controllers. The controlled variables are the orbital parameters, and constraints on a terminal region for the minimal accepted precision are also included. Using orbital parameters as the controlled variables allows a partial definition of the trajectory. Input shaping constraints can also be included in multi-model predictive control to reduce the number of unknowns of the problem. The guidance algorithms are tested in nominal conditions and in off-nominal conditions with uncertainties on the thrust. The results are compared to those of a similar formulation with a nonlinear model predictive controller and to a guidance method based on solving a simplified version of the two-point boundary value problem. In nominal conditions, the model predictive controllers are more precise and produce a more nearly optimal trajectory, but take longer to compute than the two-point boundary solution. Moreover, in the presence of uncertainties, the developed algorithms exhibit poor robustness: the multi-model predictive control algorithms do not reach the desired orbit, while the nonlinear model predictive control algorithm still converges but produces larger maneuvers than the other method.

  6. Toward a predictive model for elastomer seals

    NASA Astrophysics Data System (ADS)

    Molinari, Nicola; Khawaja, Musab; Sutton, Adrian; Mostofi, Arash

    Nitrile butadiene rubber (NBR) and hydrogenated-NBR (HNBR) are widely used elastomers, especially as seals in oil and gas applications. During exposure to well-hole conditions, ingress of gases causes degradation of performance, including mechanical failure. We use computer simulations to investigate this problem at two different length and time-scales. First, we study the solubility of gases in the elastomer using a chemically-inspired description of HNBR based on the OPLS all-atom force-field. Starting with a model of NBR, C=C double bonds are saturated with either hydrogen or intramolecular cross-links, mimicking the hydrogenation of NBR to form HNBR. We validate against trends for the mass density and glass transition temperature for HNBR as a function of cross-link density, and for NBR as a function of the fraction of acrylonitrile in the copolymer. Second, we study mechanical behaviour using a coarse-grained model that overcomes some of the length and time-scale limitations of an all-atom approach. Nanoparticle fillers added to the elastomer matrix to enhance mechanical response are also included. Our initial focus is on understanding the mechanical properties at the elevated temperatures and pressures experienced in well-hole conditions.

  7. Empirical Model for Predicting Rockfall Trajectory Direction

    NASA Astrophysics Data System (ADS)

    Asteriou, Pavlos; Tsiambaos, George

    2016-03-01

    A methodology for the experimental investigation of rockfall in three-dimensional space is presented in this paper, aiming to assist on-going research of the complexity of a block's response to impact during a rockfall. An extended laboratory investigation was conducted, consisting of 590 tests with cubical and spherical blocks made of an artificial material. The effects of shape, slope angle and the deviation of the post-impact trajectory are examined as a function of the pre-impact trajectory direction. Additionally, an empirical model is proposed that estimates the deviation of the post-impact trajectory as a function of the pre-impact trajectory with respect to the slope surface and the slope angle. This empirical model is validated by 192 small-scale field tests, which are also presented in this paper. Some important aspects of the three-dimensional nature of rockfall phenomena are highlighted that have been hitherto neglected. The 3D space data provided in this study are suitable for the calibration and verification of rockfall analysis software that has become increasingly popular in design practice.

  8. Health effects models for nuclear power plant accident consequence analysis: Low LET radiation: Part 2, Scientific bases for health effects models

    SciTech Connect

    Abrahamson, S.; Bender, M.; Book, S.; Buncher, C.; Denniston, C.; Gilbert, E.; Hahn, F.; Hertzberg, V.; Maxon, H.; Scott, B.

    1989-05-01

    This report provides dose-response models intended to be used in estimating the radiological health effects of nuclear power plant accidents. Models of early and continuing effects, cancers and thyroid nodules, and genetic effects are provided. Two-parameter Weibull hazard functions are recommended for estimating the risks of early and continuing health effects. Three potentially lethal early effects -- the hematopoietic, pulmonary and gastrointestinal syndromes -- are considered. Linear and linear-quadratic models are recommended for estimating cancer risks. Parameters are given for analyzing the risks of seven types of cancer in adults -- leukemia, bone, lung, breast, gastrointestinal, thyroid and "other". The category, "other" cancers, is intended to reflect the combined risks of multiple myeloma, lymphoma, and cancers of the bladder, kidney, brain, ovary, uterus and cervix. Models of childhood cancers due to in utero exposure are also provided. For most cancers, both incidence and mortality are addressed. Linear and linear-quadratic models are also recommended for assessing genetic risks. Five classes of genetic disease -- dominant, x-linked, aneuploidy, unbalanced translocation and multifactorial diseases -- are considered. In addition, the impact of radiation-induced genetic damage on the incidence of peri-implantation embryo losses is discussed. The uncertainty in modeling radiological health risks is addressed by providing central, upper, and lower estimates of all model parameters. Data are provided which should enable analysts to consider the timing and severity of each type of health risk. 22 refs., 14 figs., 51 tabs.
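A two-parameter Weibull hazard of the kind recommended for early effects is commonly written in terms of a median effective dose D50 and a shape parameter; a sketch under that assumption (the functional form is a common convention, and any parameter values used with it here are illustrative, not the report's):

```python
import math

def weibull_early_effect_risk(dose, d50, shape):
    """Two-parameter Weibull hazard function for an early health effect:
    risk = 1 - exp(-ln(2) * (dose/d50)**shape), so the risk is exactly 50%
    at dose == d50 and the curve steepens as the shape parameter grows."""
    if dose <= 0:
        return 0.0
    return 1.0 - math.exp(-math.log(2.0) * (dose / d50) ** shape)
```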

  9. Modelling of Ceres: Predictions for Dawn

    NASA Astrophysics Data System (ADS)

    Neumann, Wladimir; Breuer, Doris; Spohn, Tilman

    2014-05-01

    Introduction: The asteroid Ceres is the largest body in the asteroid belt. It can be seen as one of the remaining examples of the intermediate stages of planetary accretion, which additionally is substantially different from most asteroids. Studies of such protoplanetary objects like Ceres and Vesta provide insight into the history of the formation of Earth and other rocky planets. One of Ceres' remarkable properties is the relatively low average bulk density of 2077±36 kg m-3 (see [1]). Assuming a nearly chondritic composition, this low value can be explained either by a relatively high average porosity[2], or by the presence of a low density phase[3,4]. Based on numerical modelling[3,4], it has been proposed that this low density phase, which may have been represented initially by water ice or by hydrated silicates, differentiated from the silicates forming an icy mantle overlying a rocky core. However, the shape and the moment of inertia of Ceres seem to be consistent with both a porous and a differentiated structure. In the first case Ceres would be just a large version of a common asteroid. In the second case, however, this body could exhibit properties characteristic for a planet rather than an asteroid: presence of a core, mantle and crust, as well as a liquid ocean in the past and maybe still a thin basal ocean today. This issue is still under debate, but will be resolved (at least partially), once Dawn orbits Ceres. We study the thermal evolution of a Ceres-like body via numerical modeling in order to draw conclusions about the thermal metamorphism of the interior and its present-day state. In particular, we investigate the evolution of the interior assuming an initially porous structure. We adopted the numerical code from [5], which computes the thermal and structural evolution of planetesimals, including compaction of the initially porous primordial material, which is modeled using a creep law. 
Our model is suited to prioritise between the two possible internal structures.

  10. Radiation protection issues on preparedness and response for a severe nuclear accident: experiences of the Fukushima accident.

    PubMed

    Homma, T; Takahara, S; Kimura, M; Kinase, S

    2015-06-01

    Radiation protection issues on preparedness and response for a severe nuclear accident are discussed in this paper based on the experiences following the accident at Fukushima Daiichi nuclear power plant. The criteria for use in nuclear emergencies in the Japanese emergency preparedness guide were based on the recommendations of International Commission of Radiological Protection (ICRP) Publications 60 and 63. Although the decision-making process for implementing protective actions relied heavily on computer-based predictive models prior to the accident, urgent protective actions, such as evacuation and sheltering, were implemented effectively based on the plant conditions. As there were no recommendations and criteria for long-term protective actions in the emergency preparedness guide, the recommendations of ICRP Publications 103, 109, and 111 were taken into consideration in determining the temporary relocation of inhabitants of heavily contaminated areas. These recommendations were very useful in deciding the emergency protective actions to take in the early stages of the Fukushima accident. However, some suggestions have been made for improving emergency preparedness and response in the early stages of a severe nuclear accident. PMID:25915551

  12. Criteria for deviation from predictions by the concentration addition model.

    PubMed

    Takeshita, Jun-Ichi; Seki, Masanori; Kamo, Masashi

    2016-07-01

    Loewe's additivity (concentration addition) is a well-known model for predicting the toxic effects of chemical mixtures under the additivity assumption of toxicity. However, from the perspective of chemical risk assessment and/or management, it is important to identify chemicals whose toxicities are additive when present concurrently, that is, it should be established whether there are chemical mixtures to which the concentration addition predictive model can be applied. The objective of the present study was to develop criteria for judging test results that deviate from the predictions of the concentration addition mixture model. These criteria were based on the confidence interval of the concentration addition model's prediction and on estimation of the errors of the predicted concentration-effect curves from single-chemical toxicity tests. A log-logit model with 2 parameters was assumed for the concentration-effect curve of each individual chemical. These parameters were determined by the maximum-likelihood method, and the criteria were defined using the variances and the covariance of the parameters. In addition, the criteria were applied to a toxicity test of a binary mixture of p-n-nonylphenol and p-n-octylphenol using the Japanese killifish, medaka (Oryzias latipes). Consequently, the concentration addition model, used with its confidence interval, was capable of predicting the test results at every level, and no reason to reject concentration addition was found. Environ Toxicol Chem 2016;35:1806-1814. © 2015 SETAC. PMID:26660330
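The concentration addition prediction can be sketched numerically: with a 2-parameter log-logit curve per chemical, the predicted mixture effect is the level at which the toxic units sum to one. The parameters and the bisection solver below are illustrative assumptions, not the study's maximum-likelihood estimates:

```python
import math

def ec(effect, a, b):
    """Inverse of a 2-parameter log-logit concentration-effect curve
    effect(c) = 1 / (1 + exp(-(a + b*log10(c)))): the concentration
    producing a given effect fraction (0 < effect < 1, b > 0)."""
    return 10.0 ** ((math.log(effect / (1.0 - effect)) - a) / b)

def ca_mixture_effect(concs, params, lo=1e-6, hi=1.0 - 1e-6):
    """Concentration addition: the predicted mixture effect is the level x at
    which the toxic units sum to one, sum_i c_i / EC_x,i = 1. Solved here by
    bisection on x, since the toxic-unit sum decreases as x increases."""
    toxic_units = lambda x: sum(c / ec(x, a, b)
                                for c, (a, b) in zip(concs, params))
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if toxic_units(mid) > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The study's deviation criteria then amount to asking whether an observed mixture effect falls outside the confidence band around this prediction.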

  13. Models for predicting recreational water quality at Lake Erie beaches

    USGS Publications Warehouse

    Francy, Donna S.; Darner, Robert A.; Bertke, Erin E.

    2006-01-01

    Data collected from four Lake Erie beaches during the recreational seasons of 2004-05 and from one Lake Erie beach during 2000-2005 were used to develop predictive models for recreational water quality by means of multiple linear regression. The best model for each beach was based on a unique combination of environmental and water-quality explanatory variables including turbidity, rainfall, wave height, water temperature, day of the year, wind direction, and lake level. Two types of outputs were produced from the models: the predicted Escherichia coli concentration and the probability that the bathing-water standard will be exceeded. The model for one of the beaches, Huntington Reservation (Huntington), was validated in 2005. For 2005, the Huntington model yielded more correct responses and better predicted exceedance of the standard than did current methods for assessing recreational water quality, which are based on the previous day's E. coli concentration. Predictions based on the Huntington model have been available to the public through an Internet-based 'nowcasting' system since May 30, 2006. The other beach models are being validated for the first time in 2006. The methods used in this study to develop and test predictive models can be applied at other similar coastal beaches.
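A minimal sketch of the two outputs described: ordinary least squares on explanatory variables for the (typically log10-transformed) E. coli concentration, plus an exceedance probability obtained by assuming normally distributed log10 residuals. The residual assumption and variable layout are illustrative, not the published models:

```python
import math

def fit_mlr(X, y):
    """Ordinary least squares via the normal equations (X'X) b = X'y, solved
    by Gaussian elimination with partial pivoting. Each row of X carries a
    leading 1 for the intercept."""
    n = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    v = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(n)]
    for i in range(n):                        # forward elimination
        p = max(range(i, n), key=lambda k: abs(A[k][i]))
        A[i], A[p] = A[p], A[i]
        v[i], v[p] = v[p], v[i]
        for k in range(i + 1, n):
            f = A[k][i] / A[i][i]
            A[k] = [a - f * b for a, b in zip(A[k], A[i])]
            v[k] -= f * v[i]
    b = [0.0] * n
    for i in reversed(range(n)):              # back substitution
        b[i] = (v[i] - sum(A[i][j] * b[j] for j in range(i + 1, n))) / A[i][i]
    return b

def exceedance_probability(pred_log10, resid_std_log10, log10_standard):
    """P(concentration > standard), assuming normal log10 residuals."""
    z = (log10_standard - pred_log10) / resid_std_log10
    return 0.5 * math.erfc(z / math.sqrt(2.0))
```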

  14. Using a Prediction Model to Manage Cyber Security Threats.

    PubMed

    Jaganathan, Venkatesh; Cherurveettil, Priyesh; Muthu Sivashanmugam, Premapriya

    2015-01-01

    Cyber-attacks are an important issue faced by all organizations. Securing information systems is critical. Organizations should be able to understand the ecosystem and predict attacks. Predicting attacks quantitatively should be part of risk management. The cost impact due to worms, viruses, or other malicious software is significant. This paper proposes a mathematical model to predict the impact of an attack based on significant factors that influence cyber security. This model also considers the environmental information required. It is generalized and can be customized to the needs of the individual organization. PMID:26065024

  17. Predictive modeling of neuroanatomic structures for brain atrophy detection

    NASA Astrophysics Data System (ADS)

    Hu, Xintao; Guo, Lei; Nie, Jingxin; Li, Kaiming; Liu, Tianming

    2010-03-01

    In this paper, we present an approach of predictive modeling of neuroanatomic structures for the detection of brain atrophy based on cross-sectional MRI images. The underlying premise of applying predictive modeling for atrophy detection is that brain atrophy is defined as significant deviation of part of the anatomy from what the remaining normal anatomy predicts for that part. The steps of predictive modeling are as follows. The central cortical surface under consideration is reconstructed from the brain tissue map, and Regions of Interest (ROI) on it are predicted from other reliable anatomies. The distance between the predicted and true positions of a vertex is expected to be larger in abnormal regions than in normal brain regions. The change of the white matter/gray matter ratio within a spherical region is used to identify the direction of vertex displacement. In this way, the severity of brain atrophy can be quantified by the displacements of those vertices. The proposed predictive modeling method has been evaluated using both simulated atrophies and MRI images of Alzheimer's disease.

  18. Development and application of chronic disease risk prediction models.

    PubMed

    Oh, Sun Min; Stefani, Katherine M; Kim, Hyeon Chang

    2014-07-01

    Currently, non-communicable chronic diseases are a major cause of morbidity and mortality worldwide, and a large proportion of chronic diseases are preventable through risk factor management. However, the prevention efficacy at the individual level is not yet satisfactory. Chronic disease prediction models have been developed to assist physicians and individuals in clinical decision-making. A chronic disease prediction model assesses multiple risk factors together and estimates an absolute disease risk for the individual. Accurate prediction of an individual's future risk for a certain disease enables the comparison of benefits and risks of treatment, the costs of alternative prevention strategies, and selection of the most efficient strategy for the individual. A large number of chronic disease prediction models, especially targeting cardiovascular diseases and cancers, have been suggested, and some of them have been adopted in the clinical practice guidelines and recommendations of many countries. Although a few chronic disease prediction tools have been suggested for the Korean population, their clinical utility is not as high as expected. This article reviews methodologies commonly used for developing and evaluating chronic disease prediction models and discusses the current status of chronic disease prediction in Korea.

  19. Development of Interpretable Predictive Models for BPH and Prostate Cancer

    PubMed Central

    Bermejo, Pablo; Vivo, Alicia; Tárraga, Pedro J; Rodríguez-Montes, JA

    2015-01-01

    BACKGROUND Traditional methods for deciding whether to recommend a patient for a prostate biopsy are based on cut-off levels of stand-alone markers such as prostate-specific antigen (PSA) or any of its derivatives. However, in the last decade we have seen the increasing use of predictive models that combine, in a non-linear manner, several predictors that are better able to predict prostate cancer (PC), but these fail to help the clinician distinguish between PC and benign prostate hyperplasia (BPH) patients. We construct two new models that are capable of predicting both PC and BPH. METHODS An observational study was performed on 150 patients with PSA ≥3 ng/mL and age >50 years. We built a decision tree and a logistic regression model, validated with the leave-one-out methodology, in order to predict PC or BPH, or reject both. RESULTS Statistical dependence with PC and BPH was found for prostate volume (P-value < 0.001), PSA (P-value < 0.001), international prostate symptom score (IPSS; P-value < 0.001), digital rectal examination (DRE; P-value < 0.001), age (P-value < 0.002), antecedents (P-value < 0.006), and meat consumption (P-value < 0.08). The two predictive models that were constructed selected a subset of these, namely, volume, PSA, DRE, and IPSS, obtaining an area under the ROC curve (AUC) between 72% and 80% for both PC and BPH prediction. CONCLUSION PSA and volume together help to build predictive models that accurately distinguish among PC, BPH, and patients without any of these pathologies. Our decision tree and logistic regression models outperform the AUC obtained in the compared studies. Using these models as decision support, the number of unnecessary biopsies might be significantly reduced. PMID:25780348
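The leave-one-out methodology used above can be sketched generically; the 1-nearest-neighbour stand-in classifier and the toy data are assumptions for illustration, not the paper's decision tree or logistic regression:

```python
import math

def leave_one_out_accuracy(X, y, train_and_predict):
    """Leave-one-out validation: for each sample, train on all the others,
    predict the held-out one, and report the fraction predicted correctly."""
    correct = 0
    for i in range(len(X)):
        X_train = X[:i] + X[i + 1:]
        y_train = y[:i] + y[i + 1:]
        if train_and_predict(X_train, y_train, X[i]) == y[i]:
            correct += 1
    return correct / len(X)

def nearest_neighbour(X_train, y_train, x):
    """Hypothetical stand-in model (1-nearest-neighbour on Euclidean
    distance), used only to exercise the validation loop."""
    j = min(range(len(X_train)), key=lambda k: math.dist(X_train[k], x))
    return y_train[j]
```

With only 150 patients, leave-one-out makes full use of the data while keeping each evaluation strictly out-of-sample, which is why small clinical studies often prefer it over a single train/test split.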

  20. Model predictive torque control with an extended prediction horizon for electrical drive systems

    NASA Astrophysics Data System (ADS)

    Wang, Fengxiang; Zhang, Zhenbin; Kennel, Ralph; Rodríguez, José

    2015-07-01

    This paper presents a model predictive torque control method for electrical drive systems. A two-step prediction horizon is achieved by considering the reduction of the torque ripples. The errors between the predicted and reference values of electromagnetic torque and stator flux, together with an over-current protection term, are considered in the cost function design. The best voltage vector is selected by minimising the value of the cost function, which aims to achieve a low torque ripple over two intervals. The study is carried out experimentally. The results show that the proposed method achieves good performance in both steady and transient states.
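Predictive torque control of this finite-control-set kind reduces, per control step, to enumerating the inverter's candidate voltage vectors, predicting their effect, and minimising a cost. The weights, current limit, and one-step prediction function below are illustrative assumptions, not the paper's two-step design:

```python
def select_voltage_vector(candidates, predict, t_ref, f_ref,
                          wt=1.0, wf=1.0, i_max=10.0, penalty=1e6):
    """Finite-control-set MPC step: evaluate each candidate voltage vector
    with a one-step prediction and pick the one minimising
    g = wt*|T_ref - T| + wf*|psi_ref - psi| + (penalty if overcurrent)."""
    best, best_cost = None, float("inf")
    for v in candidates:
        torque, flux, current = predict(v)
        cost = wt * abs(t_ref - torque) + wf * abs(f_ref - flux)
        if current > i_max:                    # over-current protection term
            cost += penalty
        if cost < best_cost:
            best, best_cost = v, cost
    return best, best_cost
```

Extending the horizon to two steps, as in the paper, means evaluating pairs of vectors and summing the stage costs before choosing the first vector of the best pair.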

  1. Impact of modellers' decisions on hydrological a priori predictions

    NASA Astrophysics Data System (ADS)

    Holländer, H. M.; Bormann, H.; Blume, T.; Buytaert, W.; Chirico, G. B.; Exbrayat, J.-F.; Gustafsson, D.; Hölzel, H.; Krauße, T.; Kraft, P.; Stoll, S.; Blöschl, G.; Flühler, H.

    2013-07-01

    The purpose of this paper is to stimulate a re-thinking of how we, the catchment hydrologists, could become reliable forecasters. A group of catchment modellers predicted the hydrological response of a man-made 6 ha catchment in its initial phase (Chicken Creek) without having access to the observed records. They used conceptually different model families. Their modelling experience differed largely. The prediction exercise was organized in three steps: (1) for the 1st prediction modellers received a basic data set describing the internal structure of the catchment (somewhat more complete than usually available to a priori predictions in ungauged catchments). They did not obtain time series of stream flow, soil moisture or groundwater response. (2) Before the 2nd improved prediction they inspected the catchment on-site and attended a workshop where the modellers presented and discussed their first attempts. (3) For their improved 3rd prediction they were offered additional data by charging them pro forma with the costs for obtaining this additional information. Holländer et al. (2009) discussed the range of predictions obtained in step 1. Here, we detail the modeller's decisions in accounting for the various processes based on what they learned during the field visit (step 2) and add the final outcome of step 3 when the modellers made use of additional data. We document the prediction progress as well as the learning process resulting from the availability of added information. For the 2nd and 3rd step, the progress in prediction quality could be evaluated in relation to individual modelling experience and costs of added information. 
We learned (i) that soft information such as the modeller's system understanding is as important as the model itself (hard information), and (ii) that the sequence of modelling steps matters (field visit, interactions between differently experienced experts, choice of model, selection of available data, and methods for parameter guessing).

  2. A highly predictable animal model of retinoblastoma.

    PubMed

    Kobayashi, M; Mukai, N; Solish, S P; Pomeroy, M E

    1982-01-01

    A new animal model of retinoblastoma was developed in newborn inbred CDF rats by intravitreous inoculation of retinal tumor cells (5 X 10(4)/5 microliter) derived from the cultured tumor cell line EXP-5. The retinal tumor from which the cell line originated was induced by a single intravitreous inoculation of human adenovirus serotype 12 (5 microliter of 10(8) TCID 50/0.1 ml) in syngeneic rats. Within 1 month after intravitreous inoculation of EXP-5 cells, a clinically recognizable ocular tumor was obtained in all 39 rats. Ad 12-specific T-antigens were demonstrated in the cultured tumor cells using immunofluorescent techniques. Morphologically these tumor cells closely resembled retinoblastoma, with poorly differentiated intracytoplasmic organelles, solitary cilia with a 9 + 0 tubule pattern, and abnormal nuclear membrane associated with a set of basal bodies. The significance of this highly manipulable retinal tumor cell line is the capability of providing a full-fledged intravitreous tumor in 1-month-old CDF rats, whose actual life span is known to be 42 months. Transplantable retinal tumors described to date are reviewed briefly and compared with the presently reported cell line.

  3. Global and local cancer risks after the Fukushima Nuclear Power Plant accident as seen from Chernobyl: a modeling study for radiocaesium ((134)Cs & (137)Cs).

    PubMed

    Evangeliou, Nikolaos; Balkanski, Yves; Cozic, Anne; Møller, Anders Pape

    2014-03-01

    The accident at the Fukushima Daiichi Nuclear Power Plant (NPP) in Japan resulted in the release of a large number of fission products that were transported worldwide. We study the effects of two of the most dangerous radionuclides emitted, (137)Cs (half-life: 30.2 years) and (134)Cs (half-life: 2.06 years), which were transported across the world and constituted the global fallout (together with iodine isotopes and noble gases) after the nuclear releases. The main purpose is to provide preliminary cancer risk estimates after the Fukushima NPP accident, in terms of excess lifetime incidence and death risks, prior to epidemiology, and to compare them with those that occurred after the Chernobyl accident. Moreover, cancer risks are presented for the local population in the form of high-resolution risk maps for three population classes and for both sexes. The atmospheric transport model LMDZORINCA was used to simulate the global dispersion of radiocaesium after the accident. Air and ground activity concentrations have been combined with monitoring data as input to the LNT (Linear No-Threshold) model frequently used in risk assessments of all solid cancers. Cancer risks were estimated to be small for the global population in regions outside Japan. Women are more sensitive to radiation than men, although the largest risks were recorded for infants; at that age-at-exposure the risk does not depend on sex. Radiation risks from Fukushima were greatest near the plant, and the evacuation measures were crucial in reducing them. According to our estimates, 730-1700 excess cancer incidents are expected, of which around 65% may be fatal; these figures are very close to what has already been published (see references therein). Finally, we applied the same calculations using the DDREF (Dose and Dose Rate Effectiveness Factor), which is recommended by the ICRP, UNSCEAR and EPA as an alternative reduction factor instead of using a threshold value (which is still unknown). 
Excess lifetime cancer
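
    The excess-case arithmetic behind an LNT assessment can be sketched in a few lines. This is an illustration only: it uses the ICRP's nominal detriment-adjusted risk coefficient of roughly 5.5% per Sv and an invented collective dose, not the paper's age-, sex- and location-specific inputs.

```python
# Linear no-threshold (LNT) sketch: excess cancer cases scale linearly with
# collective dose. All numbers are illustrative, not the paper's inputs.
RISK_PER_SV = 0.055   # ICRP nominal risk coefficient, per person-Sv (approximate)
DDREF = 2.0           # dose and dose-rate effectiveness factor

def excess_cases(collective_dose_person_sv, apply_ddref=False):
    """Expected excess cancer cases for a given collective dose."""
    risk = RISK_PER_SV / (DDREF if apply_ddref else 1.0)
    return collective_dose_person_sv * risk

dose = 20000.0        # hypothetical collective dose, person-Sv
print(excess_cases(dose))         # → 1100.0 excess cases
print(excess_cases(dose, True))   # → 550.0 with the DDREF applied
```

Applying the DDREF simply halves the slope of the dose-response line, which is why the two estimates differ by exactly a factor of two.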

  5. Groundwater Level Prediction using M5 Model Trees

    NASA Astrophysics Data System (ADS)

    Nalarajan, Nitha Ayinippully; Mohandas, C.

    2015-01-01

    Groundwater is an important resource, readily available and having high economic value and social benefit, and it has long been considered a dependable source of uncontaminated water. During the past two decades, increased extraction rates and other unsustainable human actions have resulted in a groundwater crisis, both qualitative and quantitative. Under the prevailing circumstances, reliable predictions of groundwater levels are an important aid in planning the use of this valuable resource, and data-driven prediction models are widely used for this purpose. The M5 model tree (MT) is a popular soft-computing method emerging as a promising technique for numeric prediction that produces understandable models. The present study discusses groundwater level prediction using MT, employing only the historical groundwater levels from a groundwater monitoring well. The results showed that MT can be successfully used for forecasting groundwater levels.
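
    The forecasting setup described (predict the next level from past levels of the same well) can be sketched as follows. scikit-learn has no M5 implementation, so an ordinary CART regression tree stands in for the M5 tree (M5 additionally fits linear models in its leaves); the well record below is synthetic.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Synthetic monthly groundwater levels (hypothetical stand-in for a
# monitoring-well record): seasonal cycle plus a slow declining trend.
rng = np.random.default_rng(0)
t = np.arange(240)
levels = 10 - 0.01 * t + 1.5 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.1, t.size)

# Lag features: predict this month's level from the previous 3 months.
lags = 3
X = np.column_stack([levels[i:len(levels) - lags + i] for i in range(lags)])
y = levels[lags:]

# Train on the first 200 samples, forecast the remainder.
split = 200
tree = DecisionTreeRegressor(max_depth=5, min_samples_leaf=5).fit(X[:split], y[:split])
pred = tree.predict(X[split:])
rmse = float(np.sqrt(np.mean((pred - y[split:]) ** 2)))
print(f"holdout RMSE: {rmse:.3f} m")
```

Because tree models predict a constant per leaf, they cannot extrapolate a long-term trend beyond the training range — one reason M5's linear leaf models are attractive for level forecasting.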

  6. A comparison of arcjet plume properties to model predictions

    NASA Technical Reports Server (NTRS)

    Cappelli, M. A.; Liebeskind, J. G.; Hanson, R. K.; Butler, G. W.; King, D. Q.

    1993-01-01

    This paper describes an experimental study of the plasma plume properties of a 1 kW class hydrogen arcjet thruster and the comparison of the measured temperature and velocity fields to model predictions. The experiments are based on laser-induced fluorescence excitation of the Balmer-alpha transition. The model is based on a single-fluid magnetohydrodynamic description of the flow originally developed to predict arcjet thruster performance. Excellent agreement between predicted and measured velocities is found, despite the complex nature of the flow. Measured and predicted exit-plane temperatures, however, disagree by as much as 2000 K over a range of operating conditions. Possible sources of this discrepancy are discussed.

  7. Modelling proteins' hidden conformations to predict antibiotic resistance

    NASA Astrophysics Data System (ADS)

    Hart, Kathryn M.; Ho, Chris M. W.; Dutta, Supratik; Gross, Michael L.; Bowman, Gregory R.

    2016-10-01

    TEM β-lactamase confers bacteria with resistance to many antibiotics and rapidly evolves activity against new drugs. However, functional changes are not easily explained by differences in crystal structures. We employ Markov state models to identify hidden conformations and explore their role in determining TEM's specificity. We integrate these models with existing drug-design tools to create a new technique, called Boltzmann docking, which better predicts TEM specificity by accounting for conformational heterogeneity. Using our MSMs, we identify hidden states whose populations correlate with activity against cefotaxime. To experimentally detect our predicted hidden states, we use rapid mass spectrometric footprinting and confirm our models' prediction that increased cefotaxime activity correlates with reduced Ω-loop flexibility. Finally, we design novel variants to stabilize the hidden cefotaximase states, and find their populations predict activity against cefotaxime in vitro and in vivo. Therefore, we expect this framework to have numerous applications in drug and protein design.
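
    The "populations of hidden states" that a Markov state model provides are, mechanically, the stationary distribution of its transition matrix. The sketch below is not the authors' MSM, just the underlying linear algebra on an invented 3-state matrix (ground, intermediate, and a "hidden" excited conformation).

```python
import numpy as np

# Toy 3-state MSM. Row-stochastic transition matrix: T[i, j] is the
# probability of moving from state i to state j per lag time.
T = np.array([
    [0.90, 0.08, 0.02],
    [0.20, 0.70, 0.10],
    [0.05, 0.15, 0.80],
])

# Equilibrium populations: the left eigenvector of T with eigenvalue 1,
# normalized to sum to 1 (the Perron eigenvector, all of one sign).
w, v = np.linalg.eig(T.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()
print("stationary populations:", np.round(pi, 3))
assert np.allclose(pi @ T, pi)  # invariance check: pi is unchanged by T
```

In an MSM-based design loop, one compares how mutations shift these populations (here, the last entry) against measured activity.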

  8. Predicting waste stabilization pond performance using an ecological simulation model

    SciTech Connect

    New, G.R.

    1987-01-01

    Waste stabilization ponds (lagoons) are often favored in small communities because of their low cost and ease of operation. Most models currently used to predict performance are empirical or fail to address the primary lagoon cell. Empirical methods for predicting lagoon performance have been found to be in error by as much as 248 percent when used on a system other than the one they were developed for. In addition, the present models developed for the primary cell lack the ability to predict parameters other than biochemical oxygen demand (BOD) and nitrogen; oxygen consumption is usually estimated from BOD utilization. LAGOON is a Fortran program which models the biogeochemical processes characteristic of the primary cell of facultative lagoons. Model parameters can be measured from lagoons in the vicinity of a proposed lagoon or estimated from laboratory studies. The model was calibrated using one subset of the Corinne, Utah, lagoon data and then validated using another subset of the same data.
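
    For contrast with the biogeochemical approach, the simplest textbook idealization of a primary cell is first-order BOD decay in a completely mixed reactor. This is a didactic sketch, not the LAGOON program's model, and the parameter values are illustrative round numbers.

```python
# Steady-state BOD leaving a completely mixed primary cell with
# first-order decay: C = C_in / (1 + k * HRT).

def effluent_bod(c_in, k, hrt_days):
    """Effluent BOD (mg/L) for influent c_in, decay rate k, residence time HRT."""
    return c_in / (1.0 + k * hrt_days)

c_in = 200.0   # influent BOD, mg/L
k = 0.25       # first-order decay rate, 1/day (temperature dependent in practice)
hrt = 20.0     # hydraulic residence time, days (cell volume / flow)
print(f"effluent BOD: {effluent_bod(c_in, k, hrt):.1f} mg/L")  # → 33.3 mg/L
```

The abstract's point is precisely that such one-parameter formulas transfer poorly between systems, which motivates mechanistic models with locally measured parameters.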

  9. Predicting adverse drug reactions in older adults; a systematic review of the risk prediction models

    PubMed Central

    Stevenson, Jennifer M; Williams, Josceline L; Burnham, Thomas G; Prevost, A Toby; Schiff, Rebekah; Erskine, S David; Davies, J Graham

    2014-01-01

    Adverse drug reaction (ADR) risk-prediction models for use in older adults have been developed, but it is not clear if they are suitable for use in clinical practice. This systematic review aimed to identify and investigate the quality of validated ADR risk-prediction models for use in older adults. Standard computerized databases, the gray literature, bibliographies, and citations were searched (2012) to identify relevant peer-reviewed studies. Studies that developed and validated an ADR prediction model for use in patients over 65 years old, using a multivariable approach in the design and analysis, were included. Data were extracted and their quality assessed by independent reviewers using a standard approach. Of the 13,423 titles identified, only 549 were associated with adverse outcomes of medicines use. Four met the inclusion criteria. All were conducted in inpatient cohorts in Western Europe. None of the models satisfied the four key stages in the creation of a quality risk prediction model; development and validation were completed, but impact and implementation were not assessed. Model performance was modest; the area under the receiver operating characteristic curve ranged from 0.623 to 0.73. Study quality was difficult to assess due to poor reporting, but inappropriate methods were apparent. Further work needs to be conducted concerning the existing models to enable the development of a robust ADR risk-prediction model that is externally validated, with practical design and good performance. Only then can implementation and impact be assessed with the aim of generating a model of high enough quality to be considered for use in clinical care to prioritize older people at high risk of suffering an ADR. PMID:25278750
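
    The AUC figures quoted (0.623-0.73) measure how well a risk model discriminates patients who experience an ADR from those who do not. A sketch of how such a number is produced, on a purely synthetic cohort (no real ADR data, and invented risk factors):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic cohort: two hypothetical risk factors and a binary ADR
# outcome drawn from a known logistic model.
rng = np.random.default_rng(1)
n = 1000
X = rng.normal(size=(n, 2))                    # e.g. number of drugs, renal function
logit = -1.0 + 1.2 * X[:, 0] - 0.8 * X[:, 1]
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # ADR yes/no

# Develop on half the cohort, validate on the other half.
model = LogisticRegression().fit(X[:500], y[:500])
risk = model.predict_proba(X[500:])[:, 1]
auc = roc_auc_score(y[500:], risk)
print(f"validation AUC: {auc:.3f}")
```

An AUC of 0.5 is chance-level discrimination; the review's point is that 0.62-0.73 on internal validation, with no impact assessment, is not yet enough for clinical use.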

  12. Predicting lettuce canopy photosynthesis with statistical and neural network models.

    PubMed

    Frick, J; Precetti, C; Mitchell, C A

    1998-11-01

    An artificial neural network (NN) and a statistical regression model were developed to predict canopy photosynthetic rates (Pn) for 'Waldman's Green' leaf lettuce (Lactuca sativa L.). All data used to develop and test the models were collected for crop stands grown hydroponically under controlled-environment conditions. In the NN and regression models, canopy Pn was predicted as a function of three independent variables: shoot-zone CO2 concentration (600 to 1500 µmol mol^-1), photosynthetic photon flux (PPF) (600 to 1100 µmol m^-2 s^-1), and canopy age (10 to 20 days after planting). The models were used to determine the combinations of CO2 and PPF setpoints required each day to maintain maximum canopy Pn. The statistical model (a third-order polynomial) predicted Pn more accurately than the simple NN (a three-layer, fully connected net). Over an 11-day validation period, the average percent difference between predicted and actual Pn was 12.3% and 24.6% for the statistical and NN models, respectively. Both models lost considerable accuracy when used for relatively long-range Pn predictions (≥6 days into the future).
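
    A third-order polynomial response surface of this kind can be fitted by ordinary least squares on expanded features. The data below are synthetic (the abstract does not publish the lettuce measurements), so the fitted coefficients mean nothing; only the mechanics are shown.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Synthetic stand-in for the lettuce data: canopy Pn as a smooth
# function of CO2, PPF and canopy age, plus measurement noise.
rng = np.random.default_rng(2)
co2 = rng.uniform(600, 1500, 300)   # shoot-zone CO2, micromol mol^-1
ppf = rng.uniform(600, 1100, 300)   # photosynthetic photon flux, micromol m^-2 s^-1
age = rng.uniform(10, 20, 300)      # canopy age, days after planting
pn = 5 + 0.004 * co2 + 0.006 * ppf - 0.01 * (age - 15) ** 2 + rng.normal(0, 0.2, 300)

# Degree-3 polynomial surface; inputs rescaled to O(1) for conditioning.
poly = PolynomialFeatures(degree=3)
Xp = poly.fit_transform(np.column_stack([co2 / 1000, ppf / 1000, age / 10]))
model = LinearRegression().fit(Xp[:250], pn[:250])
pred = model.predict(Xp[250:])
mape = float(np.mean(np.abs(pred - pn[250:]) / pn[250:]) * 100)
print(f"mean absolute percent error on held-out stands: {mape:.1f}%")
```

With three inputs, a degree-3 expansion has only 20 terms, which is one reason such a surface can outperform a small neural network on limited greenhouse data.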

  15. Comparative study of turbulence models in predicting hypersonic inlet flows

    NASA Technical Reports Server (NTRS)

    Kapoor, Kamlesh; Anderson, Bernhard H.; Shaw, Robert J.

    1992-01-01

    A numerical study was conducted to analyze the performance of different turbulence models when applied to the hypersonic NASA P8 inlet. Computational results from the PARC2D code, which solves the full two-dimensional Reynolds-averaged Navier-Stokes equations, were compared with experimental data. The zero-equation models considered for the study were the Baldwin-Lomax model, the Thomas model, and a combination of the Baldwin-Lomax and Thomas models; the two-equation models considered were the Chien model, the Speziale model (both low Reynolds number), and the Launder and Spalding model (high Reynolds number). The Thomas model performed best among the zero-equation models, and predicted good pressure distributions. The Chien and Speziale models compared very well with the experimental data, and performed better than the Thomas model near the walls.

  17. Embryo quality predictive models based on cumulus cells gene expression

    PubMed Central

    Burnik Papler, T; Verdenik, I; Fon Tacer, K; Vrtačnik Bokal, E

    2016-01-01

    Abstract Since the introduction of in vitro fertilization (IVF) into the clinical practice of infertility treatment, indicators of high-quality embryos have been investigated. Cumulus cells (CC) have a specific gene expression profile that reflects the developmental potential of the oocyte they surround, and therefore specific gene expression could be used as a biomarker. The aim of our study was to combine more than one biomarker and observe whether the predictive value for embryo development improves. In this study, 58 CC samples from 17 IVF patients were analyzed. This study was approved by the Republic of Slovenia National Medical Ethics Committee. Gene expression analysis [quantitative real-time polymerase chain reaction (qPCR)] for five genes, analyzed according to embryo quality level, was performed. Two prediction models were tested for embryo quality prediction: a binary logistic model and a decision tree model. As the main outcome, gene expression levels for the five genes were taken and the area under the curve (AUC) was calculated for the two prediction models. Among the tested genes, AMHR2 and LIF showed significant expression differences between high-quality and low-quality embryos. These two genes were used for the construction of the two prediction models: the binary logistic model yielded an AUC of 0.72 ± 0.08 and the decision tree model an AUC of 0.73 ± 0.03. The two prediction models thus yielded similar power to differentiate high- and low-quality embryos. In terms of eventual clinical decision making, the decision tree model resulted in easy-to-interpret rules that are highly applicable in clinical practice. PMID:27785402

  18. A Simple Model Predicting Individual Weight Change in Humans

    PubMed Central

    Thomas, Diana M.; Martin, Corby K.; Heymsfield, Steven; Redman, Leanne M.; Schoeller, Dale A.; Levine, James A.

    2010-01-01

    Excessive weight in adults is a national concern, with over 2/3 of the US population deemed overweight. Because being overweight has been correlated with numerous diseases such as heart disease and type 2 diabetes, there is a need to understand mechanisms and predict outcomes of weight change and weight maintenance. A simple mathematical model that accurately predicts individual weight change offers opportunities to understand how individuals lose and gain weight and can be used to foster patient adherence to diets in clinical settings. For this purpose, we developed a one-dimensional differential equation model of weight change based on the energy balance equation, paired with an algebraic relationship between fat-free mass and fat mass derived from a large nationally representative sample of recently released data collected by the Centers for Disease Control. We validate the model's ability to predict individual participants' weight change by comparing model estimates of final weight with data from two recent underfeeding studies and one overfeeding study. The mean absolute error and standard deviation between model predictions and observed measurements of final weights are less than 1.8 ± 1.3 kg for the underfeeding studies and 2.5 ± 1.6 kg for the overfeeding study. Comparison of the model predictions to other one-dimensional models of weight change shows improvement in mean absolute error, standard deviation of mean absolute error, and group mean predictions. The maximum absolute individual error decreased by approximately 60%, substantiating the reliability of individual weight change predictions. The model provides a viable method for estimating individual weight change as a result of changes in intake and for determining individual dietary adherence during weight change studies. PMID:24707319
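
    The general shape of a one-dimensional energy-balance model can be sketched with Euler integration. This is a didactic simplification, not the authors' fitted model: expenditure is taken as linear in body weight, and the two constants are round textbook numbers, not estimated parameters.

```python
# Minimal one-compartment energy-balance model: weight changes at a rate
# proportional to the gap between intake and expenditure.
#   dW/dt = (intake - K * W) / RHO
RHO = 7700.0   # kcal stored per kg of body-weight change (approximate)
K = 31.0       # maintenance expenditure, kcal per kg per day (approximate)

def simulate_weight(w0_kg, intake_kcal, days, dt=1.0):
    """Euler-integrate daily weight change for a constant intake."""
    w = w0_kg
    for _ in range(int(days / dt)):
        dw_dt = (intake_kcal - K * w) / RHO
        w += dw_dt * dt
    return w

w_final = simulate_weight(w0_kg=90.0, intake_kcal=2300.0, days=180)
print(f"weight after 180 days: {w_final:.1f} kg")
```

The model relaxes exponentially toward the equilibrium weight intake/K (here about 74 kg) with a time constant RHO/K of roughly 250 days, which is why simulated weight loss flattens out rather than continuing linearly.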

  19. Comparison of Predictive Modeling Methods of Aircraft Landing Speed

    NASA Technical Reports Server (NTRS)

    Diallo, Ousmane H.

    2012-01-01

    Expected increases in air traffic demand have stimulated the development of air traffic control tools intended to assist the air traffic controller in accurately and precisely spacing aircraft landing at congested airports. Such tools will require an accurate landing-speed prediction to increase throughput while decreasing the controller interventions necessary to avoid separation violations. There are many practical challenges to developing an accurate landing-speed model with acceptable prediction errors. This paper discusses the development of a near-term implementation, using readily available information, to estimate/model final approach speed from the top of the descent phase of flight to the landing runway. As a first approach, all variables found to contribute directly to the landing-speed prediction are used to build a multi-regression model of the response surface equation (RSE) type. Data obtained from a major airline's operations of a passenger transport aircraft type at Dallas/Fort Worth International Airport are used to predict the landing speed. The approach was promising because it decreased the standard deviation of the landing-speed prediction error by at least 18% from the standard deviation of the baseline error, depending on the gust condition at the airport. However, when the number of variables is reduced to those most likely obtainable at other major airports, the RSE model shows little improvement over existing methods. Consequently, a neural network that relies on a nonlinear regression technique is utilized as an alternative modeling approach. For the reduced-variable cases, the standard deviation of the neural network models' errors represents more than a 5% reduction compared to the RSE model errors, and at least a 10% reduction over the baseline predicted landing-speed error standard deviation. Overall, the constructed models predict the landing speed more accurately and precisely than the current state of the art.

  20. In silico Prediction of Aqueous Solubility: a Comparative Study of Local and Global Predictive Models.

    PubMed

    Raevsky, Oleg A; Polianczyk, Daniel E; Grigorev, Veniamin Yu; Raevskaja, Olga E; Dearden, John C

    2015-06-01

    Thirty-two Quantitative Structure-Property Relationship (QSPR) models were constructed for the prediction of the aqueous intrinsic solubility of liquid and crystalline chemicals. The data sets contained 1022 liquid and 2615 crystalline compounds. Multiple Linear Regression (MLR), Support Vector Machine (SVM) and Random Forest (RF) methods were used to construct global models, and k-nearest neighbour (kNN), Arithmetic Mean Property (AMP) and Local Regression Property (LoReP) methods were used to construct local models. A set of best QSPR models was obtained: for liquid chemicals, with an RMSE (root mean square error) of prediction in the range 0.50-0.60 log units; for crystalline chemicals, 0.80-0.90 log units. In the case of the global models, the large number of descriptors makes mechanistic interpretation difficult. The local models use only one or two descriptors, so a medicinal chemist working with sets of structurally related chemicals can readily estimate their solubility. However, construction of stable local models requires the presence of closely related neighbours for each chemical considered. A consensus of global and local QSPR models is probably the optimal approach for constructing stable predictive QSPR models with mechanistic interpretation.
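
    The "local" idea can be sketched in a few lines: predict a chemical's log-solubility as the arithmetic mean over its k nearest neighbours in descriptor space, roughly the AMP approach. Descriptors and solubilities below are synthetic, not from the paper's data sets.

```python
import numpy as np

# Synthetic descriptor matrix (4 hypothetical descriptors for 200
# "compounds") and log-solubilities generated from a known linear rule.
rng = np.random.default_rng(3)
desc = rng.normal(size=(200, 4))
logS = desc @ np.array([0.8, -0.5, 0.3, 0.1]) + rng.normal(0, 0.2, 200)

def knn_predict(x, k=3):
    """Mean log-solubility of the k nearest training compounds to x."""
    d = np.linalg.norm(desc - x, axis=1)   # Euclidean distance in descriptor space
    return logS[np.argsort(d)[:k]].mean()

# Query: a close structural analogue of compound 0.
query = desc[0] + rng.normal(0, 0.05, 4)
print(f"predicted logS: {knn_predict(query):.2f}  (compound 0: {logS[0]:.2f})")
```

The failure mode the abstract names is visible here: if a query has no close neighbours, the k nearest compounds are dissimilar and their mean property is uninformative.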

  1. An Overview of the NASA Aviation Safety Program (AVSP) Systemwide Accident Prevention (SWAP) Human Performance Modeling (HPM) Element

    NASA Technical Reports Server (NTRS)

    Foyle, David C.; Goodman, Allen; Hooley, Becky L.

    2003-01-01

    An overview is provided of the Human Performance Modeling (HPM) element within the NASA Aviation Safety Program (AvSP). Two separate model development tracks for performance modeling of real-world aviation environments are described: the first focuses on the advancement of cognitive modeling tools for system design, while the second centers on a prescriptive engineering model of activity tracking for error detection and analysis. A progressive implementation strategy for both tracks is discussed in which increasingly more complex, safety-relevant applications are undertaken to extend the state-of-the-art, as well as to reveal potential human-system vulnerabilities in the aviation domain. Of particular interest is the ability to predict the precursors to error and to assess potential mitigation strategies associated with the operational use of future flight deck technologies.

  2. Prediction of resource volumes at untested locations using simple local prediction models

    USGS Publications Warehouse

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2006-01-01

    This paper shows how local spatial nonparametric prediction models can be applied to estimate volumes of recoverable gas resources at individual undrilled sites, at multiple sites on a regional scale, and to compute confidence bounds for regional volumes based on the distribution of those estimates. An approach that combines cross-validation, the jackknife, and bootstrap procedures is used to accomplish this task. Simulation experiments show that cross-validation can be applied beneficially to select an appropriate prediction model. The cross-validation procedure worked well for a wide range of different states of nature and levels of information. Jackknife procedures are used to compute individual prediction estimation errors at undrilled locations. The jackknife replicates also are used with a bootstrap resampling procedure to compute confidence bounds for the total volume. The method was applied to data (partitioned into a training set and target set) from the Devonian Antrim Shale continuous-type gas play in the Michigan Basin in Otsego County, Michigan. The analysis showed that the model estimate of total recoverable volumes at prediction sites is within 4 percent of the total observed volume. The model predictions also provide frequency distributions of the cell volumes at the production unit scale. Such distributions are the basis for subsequent economic analyses. © Springer Science+Business Media, LLC 2007.
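
    The resampling idea can be sketched as follows. This is not the USGS implementation: the "local predictor" is reduced to a plain mean, the drilled-cell volumes are synthetic, and only the bootstrap half of the jackknife-plus-bootstrap procedure is shown.

```python
import numpy as np

# Synthetic observed cell volumes at drilled sites (lognormal, as
# resource volumes often are).
rng = np.random.default_rng(4)
drilled = rng.lognormal(mean=1.0, sigma=0.5, size=80)

# Naive point predictor for each undrilled site: mean of drilled cells.
n_undrilled = 40
total_estimate = n_undrilled * drilled.mean()

# Bootstrap the drilled sample to put confidence bounds on the regional total.
totals = np.array([
    n_undrilled * rng.choice(drilled, size=drilled.size, replace=True).mean()
    for _ in range(2000)
])
lo, hi = np.percentile(totals, [5, 95])
print(f"total: {total_estimate:.1f}  (90% bounds: {lo:.1f} - {hi:.1f})")
```

In the paper's setting the per-site predictor is a local spatial model and jackknife replicates supply the per-site errors, but the bounds on the regional total come from the same resample-and-aggregate logic.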

  3. Predictive Modeling With Big Data: Is Bigger Really Better?

    PubMed

    Junqué de Fortuny, Enric; Martens, David; Provost, Foster

    2013-12-01

    With the increasingly widespread collection and processing of "big data," there is natural interest in using these data assets to improve decision making. One of the best understood ways to use data to improve decision making is via predictive analytics. An important, open question is: to what extent do larger data actually lead to better predictive models? In this article we empirically demonstrate that when predictive models are built from sparse, fine-grained data-such as data on low-level human behavior-we continue to see marginal increases in predictive performance even to very large scale. The empirical results are based on data drawn from nine different predictive modeling applications, from book reviews to banking transactions. This study provides a clear illustration that larger data indeed can be more valuable assets for predictive analytics. This implies that institutions with larger data assets-plus the skill to take advantage of them-potentially can obtain substantial competitive advantage over institutions without such access or skill. Moreover, the results suggest that it is worthwhile for companies with access to such fine-grained data, in the context of a key predictive task, to gather both more data instances and more possible data features. As an additional contribution, we introduce an implementation of the multivariate Bernoulli Naïve Bayes algorithm that can scale to massive, sparse data. PMID:27447254
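
    The setting the article describes (many sparse, fine-grained binary features) is exactly where a Bernoulli Naive Bayes classifier over a sparse matrix pays off. The sketch below uses scikit-learn's stock BernoulliNB rather than the authors' scalable implementation, and the data are random and purely illustrative.

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.naive_bayes import BernoulliNB

# Synthetic sparse binary data: 5000 rarely-firing behavioral features.
rng = np.random.default_rng(5)
n, d = 2000, 5000
y = (rng.random(n) < 0.3).astype(int)

X = rng.random((n, d)) < 0.002                              # background sparsity
X[:, :100] |= (rng.random((n, 100)) < 0.04) & (y[:, None] == 1)  # class signal
X = csr_matrix(X, dtype=np.float64)   # store sparsely, as one would at real scale

clf = BernoulliNB().fit(X[:1500], y[:1500])
acc = clf.score(X[1500:], y[1500:])
print(f"holdout accuracy: {acc:.3f}")
```

Because fitting only requires per-class feature counts, a single pass over the nonzero entries suffices, which is what makes the model attractive for the massive, sparse data the article studies.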

  5. [Predicting suicide or predicting the unpredictable in an uncertain world: Reinforcement Learning Model-Based analysis].

    PubMed

    Desseilles, Martin

    2012-01-01

    In general, it appears that the suicidal act is highly unpredictable with the scientific means currently available. In this article, the author submits the hypothesis that predicting suicide is complex because it amounts to predicting a choice, which is itself unpredictable. The article proposes a reinforcement learning model-based analysis. In this model, we integrate, on the one hand, four ascending modulatory neurotransmitter systems (acetylcholine, noradrenaline, serotonin, and dopamine) with their respective projection and afferent regions and, on the other hand, various brain imaging observations identified to date in the suicidal process.

  6. Predictions of Geospace Drivers By the Probability Distribution Function Model

    NASA Astrophysics Data System (ADS)

    Bussy-Virat, C.; Ridley, A. J.

    2014-12-01

    Geospace drivers like the solar wind speed, interplanetary magnetic field (IMF), and solar irradiance have a strong influence on the density of the thermosphere and the near-Earth space environment. This has important consequences on the drag on satellites that are in low orbit and therefore on their position. One of the basic problems with space weather prediction is that these drivers can only be measured about one hour before they affect the environment. In order to allow for adequate planning for some members of the commercial, military, or civilian communities, reliable long-term space weather forecasts are needed. The study presents a model for predicting geospace drivers up to five days in advance. This model uses the same general technique to predict the solar wind speed, the three components of the IMF, and the solar irradiance F10.7. For instance, it uses probability distribution functions (PDFs) to relate the current solar wind speed and slope to the future solar wind speed, as well as the solar wind speed to the solar wind speed one solar rotation in the future. The PDF Model has been compared to other models for predictions of the speed. It has been found that it is better than using the current solar wind speed (i.e., persistence), and better than the Wang-Sheeley-Arge Model for prediction horizons of 24 hours. Once the drivers are predicted, and the uncertainty on the drivers is specified, the density in the thermosphere can be derived using various models of the thermosphere, such as the Global Ionosphere Thermosphere Model. In addition, uncertainties on the densities can be estimated, based on ensembles of simulations. From the density and uncertainty predictions, satellite positions, as well as the uncertainty in those positions, can be estimated. These can assist operators in determining the probability of collisions between objects in low Earth orbit.
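
    The conditional-PDF idea can be sketched roughly as follows: bin the current solar wind speed, collect the empirical distribution of speeds observed some lag later, and predict from that conditional distribution. The speeds below are invented, and the real model's binning, lags, and slope conditioning are considerably more elaborate.

```python
def build_conditional_pdf(series, lag, bin_width=50.0):
    """Map each current-speed bin to the list of speeds seen `lag` steps later."""
    table = {}
    for t in range(len(series) - lag):
        b = int(series[t] // bin_width)
        table.setdefault(b, []).append(series[t + lag])
    return table

def predict_median(table, current, bin_width=50.0):
    """Predict the median of the conditional distribution for `current`."""
    b = int(current // bin_width)
    outcomes = sorted(table.get(b, []))
    if not outcomes:
        return current  # no history for this bin: fall back to persistence
    return outcomes[len(outcomes) // 2]

speeds = [400, 420, 450, 500, 520, 480, 440, 430, 410, 400, 390, 420]  # km/s, toy
pdf = build_conditional_pdf(speeds, lag=1)
forecast = predict_median(pdf, 405.0)
```

    The persistence fallback mirrors the baseline the paper compares against: with no conditional information, the best guess is the current speed itself.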

  7. An empirical model for probabilistic decadal prediction: A global analysis

    NASA Astrophysics Data System (ADS)

    Suckling, Emma; Hawkins, Ed; Eden, Jonathan; van Oldenborgh, Geert Jan

    2016-04-01

    Empirical models, designed to predict land-based surface variables over seasons to decades ahead, provide useful benchmarks for comparison against the performance of dynamical forecast systems; they may also be employable as predictive tools for use by climate services in their own right. A new global empirical decadal prediction system is presented, based on a multiple linear regression approach designed to produce probabilistic output for comparison against dynamical models. Its performance is evaluated for surface air temperature over a set of historical hindcast experiments under a series of different prediction 'modes'. The modes include a real-time setting, a scenario in which future volcanic forcings are prescribed during the hindcasts, and an approach which exploits knowledge of the forced trend. A two-tier prediction system, which uses knowledge of future sea surface temperatures in the Pacific and Atlantic Oceans, is also tested, but within a perfect knowledge framework. Each mode is designed to identify sources of predictability and uncertainty, as well as investigate different approaches to the design of decadal prediction systems for operational use. It is found that the empirical model shows skill above that of persistence hindcasts for annual means at lead times of up to ten years ahead in all of the prediction modes investigated. Small improvements in skill are found at all lead times when including future volcanic forcings in the hindcasts. It is also suggested that hindcasts which exploit full knowledge of the forced trend due to increasing greenhouse gases throughout the hindcast period can provide more robust estimates of model bias for the calibration of the empirical model in an operational setting. The two-tier system shows potential for improved real-time prediction, given the assumption that skilful predictions of large-scale modes of variability are available. The empirical model framework has been designed with enough flexibility to
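
    A minimal sketch of a regression-based probabilistic hindcast in this spirit: regress a temperature anomaly on a trend covariate, then use the residual spread to turn the point forecast into an interval. This is our own construction with invented anomalies, not the paper's multi-predictor system.

```python
from math import sqrt

def fit_linear(x, y):
    """Ordinary least squares for one predictor, plus residual std. dev."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    intercept = my - slope * mx
    resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    sigma = sqrt(sum(r * r for r in resid) / (n - 2))
    return intercept, slope, sigma

years = [0, 1, 2, 3, 4, 5, 6, 7]
temps = [0.1, 0.15, 0.12, 0.2, 0.25, 0.22, 0.3, 0.33]   # toy anomalies (K)
a, b, sigma = fit_linear(years, temps)
central = a + b * 9                                      # hindcast beyond training
interval = (central - 1.64 * sigma, central + 1.64 * sigma)  # ~90% band
```

    Calibrating sigma against hindcast errors, rather than in-sample residuals, is essentially what the paper's bias-estimation modes address.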

  8. Predictive modeling of coral disease distribution within a reef system.

    PubMed

    Williams, Gareth J; Aeby, Greta S; Cowie, Rebecca O M; Davy, Simon K

    2010-01-01

    Diseases often display complex and distinct associations with their environment due to differences in etiology, modes of transmission between hosts, and the shifting balance between pathogen virulence and host resistance. Statistical modeling has been underutilized in coral disease research to explore the spatial patterns that result from this triad of interactions. We tested the hypotheses that: 1) coral diseases show distinct associations with multiple environmental factors, 2) incorporating interactions (synergistic collinearities) among environmental variables is important when predicting coral disease spatial patterns, and 3) modeling overall coral disease prevalence (the prevalence of multiple diseases as a single proportion value) will increase predictive error relative to modeling the same diseases independently. Four coral diseases: Porites growth anomalies (PorGA), Porites tissue loss (PorTL), Porites trematodiasis (PorTrem), and Montipora white syndrome (MWS), and their interactions with 17 predictor variables were modeled using boosted regression trees (BRT) within a reef system in Hawaii. Each disease showed distinct associations with the predictors. Environmental predictors showing the strongest overall associations with the coral diseases were both biotic and abiotic. PorGA was optimally predicted by a negative association with turbidity, PorTL and MWS by declines in butterflyfish and juvenile parrotfish abundance respectively, and PorTrem by a modal relationship with Porites host cover. Incorporating interactions among predictor variables contributed to the predictive power of our models, particularly for PorTrem. Combining diseases (using overall disease prevalence as the model response) led to an average six-fold increase in cross-validation predictive deviance over modeling the diseases individually. We therefore recommend coral diseases to be modeled separately, unless known to have etiologies that respond in a similar manner to particular
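
    Boosted regression trees fit shallow trees sequentially to the residuals of the current model. A miniature depth-1 ("stump") version on invented 1-D data illustrates the mechanics; it is a sketch only, not the BRT models used in the study.

```python
def fit_stump(x, y):
    """Find the single split that minimizes squared error on (x, y)."""
    best = None
    for split in sorted(set(x)):
        left = [yi for xi, yi in zip(x, y) if xi <= split]
        right = [yi for xi, yi in zip(x, y) if xi > split]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = sum((yi - (lm if xi <= split else rm)) ** 2
                  for xi, yi in zip(x, y))
        if best is None or sse < best[0]:
            best = (sse, split, lm, rm)
    return best[1:]

def boost(x, y, rounds=20, lr=0.3):
    """Gradient boosting with stumps: repeatedly fit the residuals."""
    pred = [sum(y) / len(y)] * len(x)
    for _ in range(rounds):
        resid = [yi - pi for yi, pi in zip(y, pred)]
        split, lm, rm = fit_stump(x, resid)
        pred = [pi + lr * (lm if xi <= split else rm)
                for xi, pi in zip(x, pred)]
    return pred

x = [0, 1, 2, 3, 4, 5, 6, 7]
y = [0.1, 0.0, 0.2, 0.9, 1.1, 1.0, 1.8, 2.0]   # toy disease prevalence
fitted = boost(x, y)
```

    Real BRT implementations add deeper trees (which is how the predictor interactions discussed above enter), stochastic subsampling, and cross-validated stopping.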

  10. The Use of Behavior Models for Predicting Complex Operations

    NASA Technical Reports Server (NTRS)

    Gore, Brian F.

    2010-01-01

    Modeling and simulation (M&S) plays an important role when complex human-system notions are being proposed, developed and tested within the system design process. As an agency, the National Aeronautics and Space Administration (NASA) uses many different types of M&S approaches for predicting human-system interactions, especially early in the development phase of a conceptual design. NASA Ames Research Center possesses a number of M&S capabilities, ranging from airflow, flight-path, aircraft, and scheduling models to human performance models (HPMs) and bioinformatics models, among a host of other M&S capabilities used for predicting whether proposed designs will meet specific mission criteria. The Man-Machine Integration Design and Analysis System (MIDAS) is a NASA ARC HPM software tool that integrates many models of human behavior with environment models, equipment models, and procedural/task models. The challenge of model comprehensibility grows as the number of integrated models and the requisite fidelity of the procedural sets increase. Model transparency is needed for some of the more complex HPMs to maintain comprehensibility of the integrated model performance. This is exemplified in a recent MIDAS v5 application model, and plans for future model refinements are presented.

  11. COMPASS: A Framework for Automated Performance Modeling and Prediction

    SciTech Connect

    Lee, Seyong; Meredith, Jeremy S; Vetter, Jeffrey S

    2015-01-01

    Flexible, accurate performance predictions offer numerous benefits such as gaining insight into and optimizing applications and architectures. However, the development and evaluation of such performance predictions has been a major research challenge, due to architectural complexity. To address this challenge, we have designed and implemented a prototype system, named COMPASS, for automated performance model generation and prediction. COMPASS generates a structured performance model from the target application's source code using automated static analysis, and then evaluates this model using various performance prediction techniques. As we demonstrate on several applications, the results of these predictions can be used for a variety of purposes, such as design space exploration, identifying performance tradeoffs for applications, and understanding sensitivities of important parameters. COMPASS can generate these predictions across several types of applications, from traditional, sequential CPU applications to GPU-based, heterogeneous, parallel applications. Our empirical evaluation demonstrates a maximum overhead of 4%, the flexibility to generate models for 9 applications, speed and ease of model creation, and very low relative errors across a diverse set of architectures.

  12. Risk prediction models for hepatocellular carcinoma in different populations

    PubMed Central

    Ma, Xiao; Yang, Yang; Tu, Hong; Gao, Jing; Tan, Yu-Ting; Zheng, Jia-Li; Bray, Freddie; Xiang, Yong-Bing

    2016-01-01

    Hepatocellular carcinoma (HCC) is a malignant disease with limited therapeutic options due to its aggressive progression. Treating HCC patients places a heavy burden on most low- and middle-income countries. Accurate HCC risk predictions can now help in making decisions on the need for HCC surveillance and antiviral therapy. HCC risk prediction models based on major risk factors of HCC are useful in providing adequate surveillance strategies to individuals with different risk levels. Several risk prediction models for estimating HCC incidence among cohorts of different populations have been presented recently, using simple, efficient, and ready-to-use parameters. Moreover, using predictive scoring systems to assess HCC development can provide suggestions to improve clinical and public health approaches, making them more cost- and effort-effective, by introducing personalized surveillance programs according to risk stratification. In this review, the features of risk prediction models of HCC across different populations are summarized, and the perspectives of HCC risk prediction models are discussed as well. PMID:27199512

  13. COMPARISONS OF SPATIAL PATTERNS OF WET DEPOSITION TO MODEL PREDICTIONS

    EPA Science Inventory

    The Community Multiscale Air Quality (CMAQ) model is a "one-atmosphere" model, in that it uses a consistent set of chemical reactions and physical principles to predict concentrations of primary pollutants, photochemical smog, and fine aerosols, as well as wet and dry deposition...

  14. Katz model prediction of Caenorhabditis elegans mutagenesis on STS-42

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Wilson, John W.; Katz, Robert; Badhwar, Gautam D.

    1992-01-01

    Response parameters that describe the production of recessive lethal mutations in C. elegans from ionizing radiation are obtained with the Katz track structure model. The authors used models of the space radiation environment and radiation transport to predict and discuss mutation rates for C. elegans on the IML-1 experiment aboard STS-42.

  15. Relating Data and Models to Characterize Parameter and Prediction Uncertainty

    EPA Science Inventory

    Applying PBPK models in risk analysis requires that we realistically assess the uncertainty of relevant model predictions in as quantitative a way as possible. The reality of human variability may add a confusing feature to the overall uncertainty assessment, as uncertainty and variability...

  16. Prediction horizon effects on stochastic modelling hints for neural networks

    SciTech Connect

    Drossu, R.; Obradovic, Z.

    1995-12-31

    The objective of this paper is to investigate the relationship between stochastic models and neural network (NN) approaches to time series modelling. Experiments on a complex real-life prediction problem (entertainment video traffic) indicate that prior knowledge can be obtained through stochastic analysis, both with respect to an appropriate NN architecture and to an appropriate sampling rate, in the case of a prediction horizon larger than one. An improvement to the obtained NN predictor is also proposed through bias-removal post-processing, resulting in much better performance than the best stochastic model.
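
    Bias-removal post-processing of the kind mentioned above amounts to estimating a predictor's mean error on held-out data and subtracting it from subsequent predictions. A toy sketch with invented numbers (not the video-traffic data):

```python
def mean_bias(predictions, observations):
    """Average signed error of the predictor on a validation set."""
    errors = [p - o for p, o in zip(predictions, observations)]
    return sum(errors) / len(errors)

def debias(predictions, bias):
    """Subtract the estimated systematic bias from new predictions."""
    return [p - bias for p in predictions]

val_pred = [10.2, 11.1, 9.8, 10.5]   # predictor output on validation data
val_obs = [10.0, 10.8, 9.5, 10.2]    # corresponding observations
bias = mean_bias(val_pred, val_obs)  # systematic over-prediction
test_pred = debias([10.4, 9.9], bias)
```

    Removing a constant bias cannot hurt the mean squared error in expectation, which is why it is a cheap, standard post-processing step.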

  17. Numerical Modelling and Prediction of Erosion Induced by Hydrodynamic Cavitation

    NASA Astrophysics Data System (ADS)

    Peters, A.; Lantermann, U.; el Moctar, O.

    2015-12-01

    The present work aims to predict cavitation erosion using a numerical flow solver together with a newly developed erosion model. The erosion model is based on the hypothesis that collapses of single cavitation bubbles near solid boundaries form high-velocity microjets, which cause sonic impacts with high pressure amplitudes damaging the surface. The erosion model uses information from a numerical Euler-Euler flow simulation to predict erosion-sensitive areas and assess the erosion aggressiveness of the flow. The obtained numerical results were compared to experimental results from tests of an axisymmetric nozzle.

  18. Thermal barrier coating life-prediction model development

    NASA Technical Reports Server (NTRS)

    Strangman, T. E.; Neumann, J. F.; Liu, A.

    1987-01-01

    The primary objective of this program was to develop an operative thermal barrier coating (TBC) design model for life prediction. The objective was successfully accomplished with the development, calibration, and demonstration of a mechanistic thermochemical model which rapidly predicts TBC life as a function of engine, mission, and materials system parameters. This thermochemical design model accounts for the three operative TBC damage modes (bond coating oxidation, zirconia toughness reduction, and molten salt film damage), which all contribute to spalling of the insulating zirconia layer.

  19. Methods for evaluating the predictive accuracy of structural dynamic models

    NASA Technical Reports Server (NTRS)

    Hasselman, Timothy K.; Chrostowski, Jon D.

    1991-01-01

    Modeling uncertainty is defined in terms of the difference between predicted and measured eigenvalues and eigenvectors. Data compiled from 22 sets of analysis/test results were used to create statistical databases for large truss-type space structures and both pretest and posttest models of conventional satellite-type space structures. Modeling uncertainty is propagated through the model to produce intervals of uncertainty on frequency response functions, both amplitude and phase. This methodology was used successfully to evaluate the predictive accuracy of several structures, including the NASA CSI Evolutionary Structure tested at Langley Research Center. Test measurements for this structure were within ± one-sigma intervals of predicted accuracy for the most part, demonstrating the validity of the methodology and computer code.

  20. Comparison of tropospheric scintillation prediction models of the Indonesian climate

    NASA Astrophysics Data System (ADS)

    Chen, Cheng Yee; Singh, Mandeep Jit

    2014-12-01

    Tropospheric scintillation is a phenomenon that will cause signal degradation in satellite communication with low fade margin. Few studies of scintillation have been conducted in tropical regions. To analyze tropospheric scintillation, we obtain data from a satellite link installed at Bandung, Indonesia, at an elevation angle of 64.7° and a frequency of 12.247 GHz from 1999 to 2000. The data are processed and compared with the predictions of several well-known scintillation prediction models. From the analysis, we found that the ITU-R model gives the lowest error rate when predicting the scintillation intensity for fade at 4.68%. However, the model should be further tested using data from higher-frequency bands, such as the K and Ka bands, to verify the accuracy of the model.

  1. Three-model ensemble wind prediction in southern Italy

    NASA Astrophysics Data System (ADS)

    Torcasio, Rosa Claudia; Federico, Stefano; Calidonna, Claudia Roberta; Avolio, Elenio; Drofa, Oxana; Landi, Tony Christian; Malguzzi, Piero; Buzzi, Andrea; Bonasoni, Paolo

    2016-03-01

    Quality of wind prediction is of great importance since a good wind forecast allows the prediction of available wind power, improving the penetration of renewable energies into the energy market. Here, a 1-year (1 December 2012 to 30 November 2013) three-model ensemble (TME) experiment for wind prediction is considered. The models employed, run operationally at National Research Council - Institute of Atmospheric Sciences and Climate (CNR-ISAC), are RAMS (Regional Atmospheric Modelling System), BOLAM (BOlogna Limited Area Model), and MOLOCH (MOdello LOCale in H coordinates). The area considered for the study is southern Italy and the measurements used for the forecast verification are those of the GTS (Global Telecommunication System). Comparison with observations is made every 3 h up to 48 h of forecast lead time. Results show that the three-model ensemble outperforms the forecast of each individual model. The RMSE improvement compared to the best model is between 22 and 30%, depending on the season. It is also shown that the three-model ensemble outperforms the IFS (Integrated Forecasting System) of the ECMWF (European Centre for Medium-Range Weather Forecasts) for the surface wind forecasts. Notably, the three-model ensemble forecast performs better than each unbiased model, showing the added value of the ensemble technique. Finally, the sensitivity of the three-model ensemble RMSE to the length of the training period is analysed.
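
    The added value of a multi-model ensemble can be illustrated with a toy RMSE comparison: averaging three imperfect forecasts lets partially independent errors cancel. All numbers below are invented, not the TME data.

```python
from math import sqrt

def rmse(pred, obs):
    """Root mean square error between a forecast and observations."""
    return sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

obs = [5.0, 6.2, 4.8, 7.1, 5.5]        # observed wind speeds (toy)
model_a = [5.4, 6.8, 4.2, 7.6, 5.0]    # three member forecasts with
model_b = [4.5, 5.9, 5.3, 6.6, 6.0]    # partially independent errors
model_c = [5.2, 6.5, 4.6, 7.4, 5.3]
ensemble = [(a + b + c) / 3 for a, b, c in zip(model_a, model_b, model_c)]

scores = {name: rmse(m, obs) for name, m in
          [("A", model_a), ("B", model_b), ("C", model_c), ("ENS", ensemble)]}
```

    The cancellation only works when member errors are not perfectly correlated, which is why combining three independently developed models (RAMS, BOLAM, MOLOCH) pays off.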

  2. An inverse modeling method to assess the source term of the Fukushima nuclear power plant accident using gamma dose rate observations

    NASA Astrophysics Data System (ADS)

    Saunier, O.; Mathieu, A.; Didier, D.; Tombette, M.; Quélo, D.; Winiarek, V.; Bocquet, M.

    2013-06-01

    The Chernobyl nuclear accident and more recently the Fukushima accident highlighted that the largest source of error in consequence assessment is the source term, including the time evolution of the release rate and its distribution between radioisotopes. Inverse modeling methods, which combine environmental measurements and atmospheric dispersion models, have proven efficient in assessing the source term of an accidental release (Gudiksen, 1989; Krysta and Bocquet, 2007; Stohl et al., 2012a; Winiarek et al., 2012). Most existing approaches are designed to use air sampling measurements (Winiarek et al., 2012) and some of them also use deposition measurements (Stohl et al., 2012a; Winiarek et al., 2013), but none of them uses dose rate measurements. However, dose rate is the most widespread measurement system, and in the event of a nuclear accident these data constitute the main source of measurements of the plume and radioactive fallout during releases. This paper proposes a method to use dose rate measurements as part of an inverse modeling approach to assess source terms. The method proves efficient and reliable when applied to the accident at the Fukushima Daiichi nuclear power plant (FD-NPP). The emissions for the eight main isotopes 133Xe, 134Cs, 136Cs, 137Cs, 137mBa, 131I, 132I and 132Te have been assessed. Accordingly, 103 PBq of 131I, 35.5 PBq of 132I, 15.5 PBq of 137Cs and 12 100 PBq of noble gases were released. The events at FD-NPP (such as venting, explosions, etc.) known to have caused atmospheric releases are well identified in the retrieved source term. The estimated source term is validated by comparing simulations of atmospheric dispersion and deposition with environmental observations: across all monitoring locations, 80% of simulated dose rates are within a factor of 2 of the observed values. Changes in dose rates over time have been properly reconstructed overall, especially
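
    The factor-of-2 agreement score used for validation above is a standard model-evaluation metric; a small sketch of it with invented dose rates:

```python
def fraction_within_factor(simulated, observed, factor=2.0):
    """Fraction of (simulated, observed) pairs agreeing within `factor`."""
    ok = 0
    for s, o in zip(simulated, observed):
        if o > 0 and o / factor <= s <= o * factor:
            ok += 1
    return ok / len(observed)

sim = [1.2, 0.4, 3.0, 8.0, 0.9]   # simulated dose rates (arbitrary units, toy)
obs = [1.0, 1.0, 2.5, 2.0, 1.1]   # observed dose rates at the same stations
score = fraction_within_factor(sim, obs)
```

    A score of 0.8 would correspond to the 80% agreement reported for the retrieved Fukushima source term.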

  3. World commercial aircraft accidents

    SciTech Connect

    Kimura, C.Y.

    1993-01-01

    This report is a compilation of all accidents world-wide involving aircraft in commercial service which resulted in the loss of the airframe or one or more fatalities, or both. This information has been gathered in order to present a complete inventory of commercial aircraft accidents. Events involving military action, sabotage, terrorist bombings, hijackings, suicides, and industrial ground accidents are included within this list. Included are: accidents involving world commercial jet aircraft, world commercial turboprop aircraft, world commercial pistonprop aircraft with four or more engines and world commercial pistonprop aircraft with two or three engines from 1946 to 1992. Each accident is presented with information in the following categories: date of the accident, airline and its flight numbers, type of flight, type of aircraft, aircraft registration number, construction number/manufacturer's serial number, aircraft damage, accident flight phase, accident location, number of fatalities, number of occupants, cause, remarks, or description (brief) of the accident, and finally references used. The sixth chapter presents a summary of the world commercial aircraft accidents by major aircraft class (e.g. jet, turboprop, and pistonprop) and by flight phase. The seventh chapter presents several special studies including a list of world commercial aircraft accidents for all aircraft types with 100 or more fatalities in order of decreasing number of fatalities, a list of collision accidents involving commercial aircraft, and a list of world commercial aircraft accidents for all aircraft types involving military action, sabotage, terrorist bombings, and hijackings.

  4. Economic decision making and the application of nonparametric prediction models

    USGS Publications Warehouse

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2007-01-01

    Sustained increases in energy prices have focused attention on gas resources in low permeability shale or in coals that were previously considered economically marginal. Daily well deliverability is often relatively small, although the estimates of the total volumes of recoverable resources in these settings are large. Planning and development decisions for extraction of such resources must be area-wide because profitable extraction requires optimization of scale economies to minimize costs and reduce risk. For an individual firm the decision to enter such plays depends on reconnaissance level estimates of regional recoverable resources and on cost estimates to develop untested areas. This paper shows how simple nonparametric local regression models, used to predict technically recoverable resources at untested sites, can be combined with economic models to compute regional scale cost functions. The context of the worked example is the Devonian Antrim shale gas play, Michigan Basin. One finding relates to selection of the resource prediction model to be used with economic models. Models which can best predict aggregate volume over larger areas (many hundreds of sites) may lose granularity in the distribution of predicted volumes at individual sites. This loss of detail affects the representation of economic cost functions and may affect economic decisions. Second, because some analysts consider unconventional resources to be ubiquitous, the selection and order of specific drilling sites may, in practice, be determined by extraneous factors. The paper also shows that when these simple prediction models are used to strategically order drilling prospects, the gain in gas volume over volumes associated with simple random site selection amounts to 15 to 20 percent. It also discusses why the observed benefit of updating predictions from results of new drilling, as opposed to following static predictions, is somewhat smaller. Copyright 2007, Society of Petroleum Engineers.
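
    Ordering drilling prospects by a nonparametric predictor can be sketched with k-nearest-neighbour averaging standing in for the paper's local regression models. All coordinates and volumes below are invented.

```python
def knn_predict(sites, volumes, target, k=3):
    """Predict volume at `target` as the mean of its k nearest drilled sites."""
    order = sorted(range(len(sites)),
                   key=lambda i: (sites[i][0] - target[0]) ** 2 +
                                 (sites[i][1] - target[1]) ** 2)
    nearest = order[:k]
    return sum(volumes[i] for i in nearest) / k

drilled = [(0, 0), (1, 0), (0, 1), (5, 5), (6, 5), (5, 6)]
vols = [1.0, 1.2, 1.1, 4.0, 4.2, 3.9]       # high-volume cluster near (5, 5)
prospects = [(0.5, 0.5), (5.5, 5.5), (3.0, 3.0)]

# Strategic ordering: drill the highest predicted volumes first.
ranked = sorted(prospects,
                key=lambda p: knn_predict(drilled, vols, p), reverse=True)
```

    Drilling in this predicted order, rather than randomly, is the mechanism behind the 15 to 20 percent volume gain discussed above.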

  5. Efficient Modelling and Prediction of Meshing Noise from Chain Drives

    NASA Astrophysics Data System (ADS)

    ZHENG, H.; WANG, Y. Y.; LIU, G. R.; LAM, K. Y.; QUEK, K. P.; ITO, T.; NOGUCHI, Y.

    2001-08-01

    This paper presents a practical approach for predicting the meshing noise due to the impact of chain rollers against the sprocket of chain drives. An acoustical model relating dynamic response of rollers and its induced sound pressure is developed based on the fact that the acoustic field is mainly created by oscillating rigid cylindrical rollers. Finite element techniques and numerical software codes are employed to model and simulate the acceleration response of each chain roller which is necessary for noise level prediction of a chain drive under varying operation conditions and different sprocket configurations. The predicted acoustic pressure levels of meshing noise are compared with the available experimental measurements. It is shown that the predictions are in reasonable agreement with the experiments and the approach enables designers to obtain required information on the noise level of a selected chain drive in a time- and cost-efficient manner.

  6. Modeling system for predicting enterococci levels at Holly Beach.

    PubMed

    Zhang, Zaihong; Deng, Zhiqiang; Rusch, Kelly A; Walker, Nan D

    2015-08-01

    This paper presents a new modeling system for nowcasting and forecasting enterococci levels in coastal recreation waters at any time during the day. The modeling system consists of (1) an artificial neural network (ANN) model for predicting the enterococci level at sunrise time, (2) a clear-sky solar radiation and turbidity correction to the ANN model, (3) remote sensing algorithms for turbidity, and (4) nowcasting/forecasting data. The first three components are also unique features of the new modeling system. While component (1) is useful to beach monitoring programs requiring enterococci levels in the early morning, component (2) in combination with component (1) makes it possible to predict the bacterial level in beach waters at any time during the day if the data from components (3) and (4) are available. Therefore, predictions from component (2) are of primary interest to beachgoers. The modeling system was developed using three years of swimming season data and validated using additional four years of independent data. Testing results showed that (1) the sunrise-time model correctly reproduced 82.63% of the advisories issued in seven years with a false positive rate of 2.65% and a false negative rate of 14.72%, and (2) the new modeling system was capable of predicting the temporal variability in enterococci levels in beach waters, ranging from hourly changes to daily cycles. The results demonstrate the efficacy of the new modeling system in predicting enterococci levels in coastal beach waters. Applications of the modeling system will improve the management of recreational beaches and protection of public health.
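
    Advisory hit and false positive/negative rates of the kind quoted above can be computed from predicted versus observed threshold exceedances. This is a toy sketch with invented counts; the 104 CFU/100 mL value is a commonly used marine-water single-sample advisory level, assumed here for illustration rather than taken from the study.

```python
THRESHOLD = 104  # CFU/100 mL; assumed advisory level for this sketch

def advisory_rates(predicted, observed, threshold=THRESHOLD):
    """Confusion-matrix rates for advisory (exceedance) decisions."""
    tp = fp = fn = tn = 0
    for p, o in zip(predicted, observed):
        pred_adv, obs_adv = p > threshold, o > threshold
        if pred_adv and obs_adv:
            tp += 1
        elif pred_adv:
            fp += 1
        elif obs_adv:
            fn += 1
        else:
            tn += 1
    return {"hit": tp / (tp + fn),        # advisories correctly reproduced
            "false_pos": fp / (fp + tn),  # advisories issued needlessly
            "false_neg": fn / (tp + fn)}  # exceedances missed

pred = [50, 120, 200, 80, 150, 90]   # predicted enterococci levels (toy)
obs = [60, 130, 90, 70, 160, 110]    # observed levels on the same days
rates = advisory_rates(pred, obs)
```

    For beach management, the false negative rate (missed exceedances) is usually the critical figure, since it measures unprotected public exposure.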

  7. The determinants of fishing vessel accident severity.

    PubMed

    Jin, Di

    2014-05-01

    The study examines the determinants of fishing vessel accident severity in the Northeastern United States using vessel accident data from the U.S. Coast Guard for 2001-2008. Vessel damage and crew injury severity equations were estimated separately utilizing the ordered probit model. The results suggest that fishing vessel accident severity is significantly affected by several types of accidents. Vessel damage severity is positively associated with loss of stability, sinking, daytime wind speed, vessel age, and distance to shore. Vessel damage severity is negatively associated with vessel size and daytime sea level pressure. Crew injury severity is also positively related to the loss of vessel stability and sinking. PMID:24473412
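The ordered probit model used here assigns each ordered severity category a probability given by differences of normal CDFs evaluated at estimated cutpoints. A minimal sketch of its negative log-likelihood, with hypothetical coefficients and cutpoints (not the paper's estimates), follows:

```python
import math

def norm_cdf(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ordered_probit_nll(beta, cuts, X, y) -> float:
    """Negative log-likelihood of an ordered probit with K categories.

    P(y = k | x) = Phi(c_k - x.b) - Phi(c_{k-1} - x.b),
    with c_{-1} = -inf and c_{K-1} = +inf; cuts holds the K-1 finite cutpoints.
    """
    nll = 0.0
    for xi, yi in zip(X, y):
        xb = sum(b * x for b, x in zip(beta, xi))
        hi = norm_cdf(cuts[yi] - xb) if yi < len(cuts) else 1.0
        lo = norm_cdf(cuts[yi - 1] - xb) if yi > 0 else 0.0
        nll -= math.log(hi - lo)
    return nll

# Hypothetical one-covariate example with 3 severity categories (cuts at 0 and 1).
nll = ordered_probit_nll([1.0], [0.0, 1.0], [[0.5], [-0.5]], [1, 0])
```

In practice the coefficients and cutpoints are found by minimizing this function; libraries such as statsmodels provide this directly.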

  9. Testable polarization predictions for models of CMB isotropy anomalies

    SciTech Connect

    Dvorkin, Cora; Peiris, Hiranya V.; Hu, Wayne

    2008-03-15

    Anomalies in the large-scale cosmic microwave background (CMB) temperature sky measured by the Wilkinson Microwave Anisotropy Probe have been suggested as possible evidence for a violation of statistical isotropy on large scales. In any physical model for broken isotropy, there are testable consequences for the CMB polarization field. We develop simulation tools for predicting the polarization field in models that break statistical isotropy locally through a modulation field. We study two different models: dipolar modulation, invoked to explain the asymmetry in power between northern and southern ecliptic hemispheres, and quadrupolar modulation, posited to explain the alignments between the quadrupole and octopole. For the dipolar case, we show that predictions for the correlation between the first 10 multipoles of the temperature and polarization fields can typically be tested at better than the 98% CL. For the quadrupolar case, we show that the polarization quadrupole and octopole should be moderately aligned. Such an alignment is a generic prediction of explanations which involve the temperature field at recombination and thus discriminate against explanations involving foregrounds or local secondary anisotropy. Predicted correlations between temperature and polarization multipoles out to l=5 provide tests at the ~99% CL or stronger for quadrupolar models that make the temperature alignment more than a few percent likely. As predictions of anomaly models, polarization statistics move beyond the a posteriori inferences that currently dominate the field.
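A dipolar modulation of the kind invoked for the hemispherical power asymmetry multiplies an otherwise isotropic field by (1 + A n·p), where A is the modulation amplitude and p the preferred direction. The sketch below applies such a modulation to random samples on the sphere; the amplitude, direction, and pixelization are illustrative, not fits to WMAP data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random pixel directions on the sphere and an isotropic Gaussian "temperature"
# field sampled at them (illustrative stand-ins for a real CMB map).
n_hat = rng.normal(size=(1000, 3))
n_hat /= np.linalg.norm(n_hat, axis=1, keepdims=True)
T_iso = rng.normal(size=1000)

# Dipolar modulation: T_mod(n) = T_iso(n) * (1 + A * n.p).  The modulation
# boosts fluctuation power in the hemisphere along p and suppresses it opposite.
A = 0.1
p = np.array([0.0, 0.0, 1.0])
T_mod = T_iso * (1.0 + A * (n_hat @ p))
```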

  10. Transport and deposition of radionuclides after the Fukushima nuclear accident: international model inter-comparison in the framework of a WMO Task Team

    NASA Astrophysics Data System (ADS)

    Wotawa, Gerhard; Draxler, Roland; Arnold, Delia; Galmarini, Stefano; Hort, Matthew; Jones, Andrew; Leadbetter, Susan; Malo, Alain; Maurer, Christian; Rolph, Glenn; Saito, Kazuo; Servranckx, Rene; Shimbori, Toshiki; Solazzo, Efisio

    2013-04-01

    In the framework of a WMO-sponsored Task Team set up after the Fukushima accident, the atmospheric transport and deposition models (ATDMs) FLEXPART (Austria), HYSPLIT (U.S.), MLDP0 (Canada), NAME (UK) and RATM (Japan) were inter-compared. These models are well known and widely used for emergency response activities. As alternative model input data, JMA made available a meso-analysis with 5 km / 3-hour resolution and a radar/rain gauge precipitation analysis with 1 km / 30-minute resolution. To allow maximum flexibility regarding the release rates of key nuclides, the computations were based on the concept of source-receptor matrices, in this context also called transfer coefficient matrices (TCMs). The matrices were calculated every 3 hours after 11 March 2011 00 UTC, based on unit emissions, and can thus be overlaid with any present or future release scenario that becomes established. As computational species, the models considered tracers, depositing gases and depositing aerosols, accounting for the range of substances emitted during a nuclear accident. The model comparison was based on observed deposition patterns of Cesium-137 in the Fukushima prefecture, as collected by MEXT/USDOE shortly after the accident, and on the few available in situ stations measuring radioactive isotopes. For the statistical comparison, established parameters such as the correlation coefficient (r), fractional bias (FB) and figure of merit in space (FMS) were used. A further ensemble analysis was performed to determine which subset of the available model results would provide non-redundant information and thus best describe the transport and deposition during the accident. The investigation showed (i) that a TCM-based calculation approach has considerable merit due to its flexibility, (ii) that models tended to perform better if they were run at improved resolution or directly with the Japanese meso-analysis, and (iii) that the model results depend on the selection of
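The TCM idea can be sketched in a few lines: the transport model is run once per emission window with unit emissions, and the response to any release scenario is then a matrix-vector product, with no further transport runs. All values below are illustrative placeholders, not real Fukushima data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Transfer coefficient matrix: response at each receptor (rows) to a unit
# release in each 3-hour emission window (columns).  Values are illustrative.
n_receptors, n_windows = 4, 8
tcm = rng.random((n_receptors, n_windows)) * 1e-9   # e.g. (Bq/m^3) per (Bq/s)

# Any release scenario (Bq/s per window) can be overlaid after the fact,
# without re-running the transport model.
release = np.zeros(n_windows)
release[2:5] = 1e10                                  # hypothetical 9-hour release

concentration = tcm @ release                        # Bq/m^3 at each receptor
```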

  11. Towards a unified modeling system of predicting the transport of radionuclides in coastal sea regions

    NASA Astrophysics Data System (ADS)

    Jung, Kyung Tae; Brovchenko, Igor; Maderich, Vladimir; Kim, Kyeong Ok; Qiao, Fangli

    2016-04-01

    We present in this talk recent progress in developing a unified modeling system for predicting the three-dimensional transport of radionuclides, coupled with multiple-scale circulation, wave and suspended sediment modules, with a view to application in coastal sea regions with non-uniform distributions of suspended and bed sediments of both cohesive and non-cohesive types. The model calculates the concentration fields of dissolved and particulate radionuclides in the bottom sediment as well as in the water column. The transfer of radioactivity between the water column and the pore water in the upper layer of the bottom sediment is governed by diffusion processes. The phase change between dissolved and particulate radionuclides is written in terms of absorption/desorption rates and distribution coefficients, where the distribution coefficients are inversely proportional to the sediment particle size. The hydrodynamic numerical model SELFE, which solves equations for multiple-scale circulation, wave action and sand transport on unstructured grids, has been used as the base model. We have extended the non-cohesive sediment module of SELFE to handle mixtures of cohesive and non-cohesive sedimentary regimes by implementing an extended form of the erosion rate and a flocculation model for determining the settling velocity of cohesive flocs. Issues related to the calibration of the sediment transport model in the Yellow Sea are described. A radionuclide transport model with one-step transfer kinetics and a single bed layer was initially developed and then applied to the Fukushima Daiichi nuclear accident; in this study the model has been verified through comparison with measurements of 137Cs concentration in bed sediments. A preliminary application to the Yellow and East China Seas with a hypothetical release scenario is described, along with on-going development of the radionuclide transport model using two-step transfer kinetics and multiple bed layers.
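The one-step transfer kinetics mentioned above amounts to first-order exchange between the dissolved and particulate phases. A forward-Euler sketch with illustrative rate constants (not values from the model) shows that total activity is conserved and that the phases relax to the equilibrium ratio set by the rates:

```python
# Forward-Euler sketch of one-step first-order exchange between dissolved (cd)
# and particulate (cp) radionuclide phases; rate constants are illustrative.
def exchange(cd, cp, k_ads, k_des, dt, steps):
    for _ in range(steps):
        flux = k_ads * cd - k_des * cp      # net adsorption per unit time
        cd -= flux * dt
        cp += flux * dt
    return cd, cp

cd, cp = exchange(cd=1.0, cp=0.0, k_ads=0.2, k_des=0.1, dt=0.01, steps=5000)
# Total activity is conserved, and cp/cd relaxes to k_ads/k_des = 2.
print(round(cd + cp, 6), round(cp / cd, 2))  # prints: 1.0 2.0
```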

  12. Short communication: Accounting for new mutations in genomic prediction models.

    PubMed

    Casellas, Joaquim; Esquivelzeta, Cecilia; Legarra, Andrés

    2013-08-01

    Genomic evaluation models so far do not account for newly generated genetic variation due to mutation. The main target of this research was to extend current genomic BLUP models with mutational relationships (model AM), and compare them against standard genomic BLUP models (model A) by analyzing simulated data. Model performance and precision of the predicted breeding values were evaluated under different population structures and heritabilities. The deviance information criterion (DIC) clearly favored the mutational relationship model under large heritabilities or populations with moderate-to-deep pedigrees contributing phenotypic data (i.e., differences equal to or larger than 10 DIC units); this model provided slightly higher correlation coefficients between simulated and predicted genomic breeding values. On the other hand, null DIC differences, or even relevant advantages for the standard genomic BLUP model, were reported under small heritabilities and shallow pedigrees, although precision of the genomic breeding values did not differ across models at a significant level. This method allows for slightly more accurate genomic predictions and handling of newly created variation; moreover, this approach does not require additional genotyping or phenotyping efforts, but a more accurate handling of available data. PMID:23746579
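Model A and model AM differ only in the relationship matrix entering the genomic BLUP mixed-model equations. The toy sketch below solves those equations with an identity matrix standing in for the genomic relationship matrix G and an assumed-known variance ratio; swapping in a mutation-augmented relationship matrix would leave the algebra unchanged:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy GBLUP: y = Z u + e, with u ~ N(0, G * sigma_u^2).  Solving the
# mixed-model equations (Z'Z + lambda * G^{-1}) u_hat = Z'y gives the
# predicted genomic breeding values u_hat.
n, m = 50, 10
Z = rng.standard_normal((n, m))     # illustrative genotype/incidence matrix
G = np.eye(m)                       # identity stands in for a genomic G matrix
y = Z @ rng.standard_normal(m) + 0.3 * rng.standard_normal(n)

lam = 1.0                           # sigma_e^2 / sigma_u^2, assumed known here
u_hat = np.linalg.solve(Z.T @ Z + lam * np.linalg.inv(G), Z.T @ y)
```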

  13. Short communication: Accounting for new mutations in genomic prediction models.

    PubMed

    Casellas, Joaquim; Esquivelzeta, Cecilia; Legarra, Andrés

    2013-08-01

    Genomic evaluation models so far do not allow for accounting of newly generated genetic variation due to mutation. The main target of this research was to extend current genomic BLUP models with mutational relationships (model AM), and compare them against standard genomic BLUP models (model A) by analyzing simulated data. Model performance and precision of the predicted breeding values were evaluated under different population structures and heritabilities. The deviance information criterion (DIC) clearly favored the mutational relationship model under large heritabilities or populations with moderate-to-deep pedigrees contributing phenotypic data (i.e., differences equal or larger than 10 DIC units); this model provided slightly higher correlation coefficients between simulated and predicted genomic breeding values. On the other hand, null DIC differences, or even relevant advan